
Policy Administration System Modernization: Rules Engine vs Full Replacement

Written by MARCIN NOWAK · Published 22 Oct 2024

Why insurance companies separate business logic from legacy PAS - without full replacement

When Nationale Nederlanden needed to launch new insurance products faster, they faced a dilemma that keeps insurance CTOs awake at night. Their legacy policy administration system was from the late 1990s. Launching a new product took nine months. Competitors were moving faster, and market opportunities were slipping away.

They had two options: spend $8M+ on a complete PAS replacement that would take three years and risk everything, or find a smarter way forward. They chose the smarter way – extracting their business logic into a rules engine while keeping their existing PAS untouched.

The result surprised even them. Product launch time dropped from nine months to two weeks. Not two months. Two weeks. They launched 17 new products in 2024 compared to just 4 in 2023. The IT team stopped being a bottleneck and started focusing on innovation. And they did all of this without the risk, cost, or disruption of replacing their core system.

This isn't a one-off success story. It's becoming the standard approach for insurance carriers who need agility now, not in three years when a massive replacement project might finish.

The Legacy PAS Problem: When Your System Becomes Your Bottleneck

Here's a scenario that probably sounds familiar. Your product manager has a great idea for a new insurance product. The market research looks solid. Competitors are vulnerable. It's the perfect opportunity.

But then reality sets in. That "simple" product idea requires changes to your policy administration system. IT estimates it'll take six to eight weeks just to scope the project. Then another three months of development. Then testing. Then deployment windows. Then...

By the time you're ready to launch, your competitor has already launched something similar. The market opportunity has passed. Your product manager is frustrated. Your IT team is exhausted. And senior leadership is wondering why it takes so long to do anything.

This is the legacy PAS trap. It's not that your system doesn't work – it does. It's been running your business for 15 or 20 years. The problem is that all your business logic is hardwired into the code. Every pricing change, every new product feature, every regulatory update requires IT to dig into complex, brittle code that few people fully understand anymore.

The Real Cost of IT Dependency

I talked to a CIO at a mid-size P&C carrier last month. She walked me through their situation. They have 47 business rules that define how they price auto insurance. Sounds manageable, right? But those 47 rules are scattered across 12,000 lines of Java code, mixed with database triggers, and tied to integrations with three other systems.

Want to adjust how you rate for territories? That's a six-week project. Need to add a new discount? Schedule it for next quarter. Competitor dropped their rates 8% in your key market? Sorry, our next deployment window is in five weeks.

She calculated that they spend $2.3 million annually just maintaining their PAS – not adding new features, just keeping it running. Her IT team of 12 developers spends 60% of their time on maintenance versus innovation. They launch 3-4 new products per year when they have ideas for 12.

The opportunity cost is staggering. Every product they don't launch is $1-2 million in lost annual premium. Every competitive move they can't match is market share erosion. Every regulatory change that takes months instead of days is compliance risk.

The Example That Changes Everything

One regional insurer I worked with had this exact situation play out in real-time. In early 2023, their biggest competitor launched usage-based auto insurance. The insurer's product team had actually discussed this idea six months earlier. They knew it was coming. They wanted to be first to market.

But their product was stuck in IT development. By the time they were ready to launch 11 months later, their competitor had already captured 12% market share in that segment. The estimated cost? $42 million in lost annual revenue.

The board asked a simple question: "Why can't we move faster?"

The answer was uncomfortable but honest: because everything runs through IT, and IT is overwhelmed with the complexity of a system that was built when insurance products changed once a year, not once a month.

The Full Replacement Trap: Why "Just Replace It" Rarely Works

After experiences like that, the natural reaction is: "Let's just rip out the whole thing and start fresh with a modern system."

I get it. I really do. When your legacy system becomes a liability, the dream of a clean slate is appealing. No more technical debt. Modern technology. Clean architecture. Business agility.

But here's what actually happens when carriers attempt full PAS replacement – and why 53% of these projects fail.

The $47 Million Learning Experience

A major P&C carrier started their PAS replacement project in 2019. They selected a top-tier vendor. They budgeted $12 million and 24 months for the project. They hired a consulting firm. They assembled a team. They were committed.

Fast forward to 2023. The project was cancelled. Total cost: $47 million. Time invested: four years. Return on investment: zero. They reverted to their legacy system.

What went wrong? Everything and nothing. The vendor's software worked fine – in a demo. The consultants did their job. The team was talented. But the sheer complexity of replacing a system that's been running your business for 20+ years turned out to be overwhelming.

The data migration alone became a nightmare. They discovered 340,000 policies with data quality issues. They found 1,200 business rules hardcoded in database triggers that nobody knew existed. They uncovered 47 "undocumented" integrations that turned out to be critical.

And this is a common story, not an exception.

The Hidden Complexity Nobody Warns You About

Here's what the sales presentations don't tell you about PAS replacement. Your legacy system isn't just a system – it's 20 years of accumulated business logic, workarounds, special cases, and undocumented dependencies. It's tribal knowledge that left with employees who retired. It's "we've always done it this way" processes that nobody remembers why they exist but everyone knows they're critical.

When you try to replace all of that at once, you're not doing a system upgrade. You're doing open-heart surgery on your business while it's still running.

The data migration is the most obvious problem. You have decades of policy data in formats that have evolved over time. Different product lines have different structures. Mergers and acquisitions added more complexity. Data quality issues that were "good enough" for the old system suddenly become showstoppers.

But the business disruption is actually worse. You're retraining thousands of agents on a completely new system. You're changing workflows that people have used for years. You're asking underwriters who've been doing things one way for 15 years to completely change their approach. And you're doing all of this while still running your business and serving your customers.

One carrier I spoke with saw agent productivity drop 45% in the first three months after their PAS go-live. Quote volume fell 38%. Customer complaints spiked 127%. They lost $8.2 million in revenue that first year just from the disruption.

When Does Replacement Actually Make Sense?

I'm not saying PAS replacement never makes sense. Sometimes it does. If your system is actively failing – crashing regularly, unable to support your business at all, running on technology that literally nobody can support anymore – then yes, you may need to replace it.

If you have a regulatory mandate that your current system simply cannot meet, replacement might be necessary.

If you've got $10 million+ budget, three years of time, and full board-level commitment to ride out the disruption, then maybe you can make it work.

But here's the thing: most carriers aren't in that situation. Their legacy PAS works fine as a transaction processing and data storage system. The problem isn't the system itself – it's the business logic being locked inside it.

And that's a problem you can solve in three months for $400K without replacing anything.

The Rules Engine Solution: Separating Logic from Infrastructure

Let me tell you what actually works. Instead of replacing your entire PAS, you extract just the business logic – the pricing rules, underwriting criteria, product definitions, decision workflows – and move them into a rules engine. Your PAS stays exactly where it is, doing what it does well: managing policy data and transactions.

Think of it like this. Your PAS is like the foundation and framing of a house. It's solid. It works. But the interior – the layout, the finishes, the fixtures – that needs to change constantly to meet your needs. You don't tear down the whole house every time you want to redesign a room. You renovate the parts that matter while keeping the structure intact.

How It Actually Works in Practice

I'll walk you through what this looks like using a real example. InterRisk, a Polish P&C insurer, needed to solve exactly this problem. Their product launch time was six months. Quote generation took 45 seconds of manual calculation. Everything required IT.

They implemented what they call the IRON platform, powered by a rules engine. Here's what changed:

When an agent needs to quote a policy now, they enter customer information into their familiar interface. But instead of that request going directly to the legacy PAS to execute hardcoded pricing logic, it routes to the rules engine first. The rules engine evaluates all the rating factors, applies the correct pricing rules, calculates discounts, and makes underwriting decisions – all in 0.23 milliseconds. Then it passes the result to the PAS for policy creation and storage.

The agent sees the same interface. The customer gets their quote. But now when the actuarial team needs to adjust a pricing factor, they don't file an IT ticket. They log into the rules engine, make the change, test it in the sandbox environment, and deploy it to production. The entire process takes 15 minutes instead of six weeks.

Product managers can now configure entire new products themselves using templates and visual tools. No coding. No IT dependency. A new product that used to take six months now launches in three days.

The result? InterRisk went from launching 4 products per year to 17 products per year. Revenue from new products jumped from $3.2 million to $15.6 million. And their IT team finally had time to work on actual innovation instead of maintaining business logic.

Why This Works When Other Approaches Don't

The genius of this approach is that you're not trying to change everything at once. Your PAS keeps running. Your agents keep working in familiar systems. Your customers see no disruption. You're adding capability, not replacing infrastructure.

And because the rules engine sits on top of your existing systems rather than replacing them, the implementation is inherently lower risk. If something goes wrong, you can roll back instantly. You can run both systems in parallel to validate everything works. You can deploy gradually, starting with one product or one region.

Compare that to a full PAS replacement where go-live day is binary – either the new system works perfectly or your business stops. There's no middle ground, no safety net, no gradual rollout.

The other advantage is speed. While a PAS replacement takes 24-36 months to show any value, a rules engine implementation shows results in month four. Real business agility. Actual time-to-market improvement. Measurable ROI.

The Complete Architecture: How Everything Fits Together

Let me walk you through the technical architecture so you understand exactly how this works. I'll keep it practical rather than theoretical, using real examples from actual implementations.

The architecture has five layers, but don't worry – it's simpler than it sounds. Think of it like a building with five floors, each with a specific purpose.

The Presentation Layer: Where Users Interact

This is what your agents, underwriters, and product managers actually see and use. For agents, it's their quoting interface – might be a web portal, might be a desktop application, might even be a mobile app. For underwriters, it's their risk assessment screens. For product managers, it's the product configuration tools.

The key insight here is that these interfaces don't change much when you add a rules engine. Your agents aren't learning a completely new system. They're using the same tools they're familiar with, but now those tools are faster and more capable.

One carrier I worked with spent a lot of time worrying about user adoption. They built elaborate change management plans and training programs. Then go-live happened and they discovered something interesting: most agents didn't even realize anything had changed. Quotes just came back faster. New products just appeared in their product list. The system just worked better.

That's the goal. Invisible improvement.

The Business Logic Layer: Where Decisions Happen

This is the rules engine itself – where all your pricing logic, underwriting rules, and product definitions now live. This is the layer that changes frequently, and that's exactly the point.

Here's how it works in practice. Let's say you're pricing an auto insurance policy. The rules engine receives basic information: driver age, vehicle type, location, driving record. It then evaluates your complete rating algorithm:

It starts with the base rate for that coverage. Then it applies territory factors based on location. Then driver age factors. Then vehicle factors. Then driving record factors. Then any applicable discounts. All of this happens in milliseconds, evaluating hundreds of business rules in the correct sequence, handling edge cases, applying business logic, and returning a precise premium calculation.
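The multiplicative sequence above maps naturally onto code. Here is a minimal, self-contained sketch of that rating flow; every base rate, factor, and discount value is an invented placeholder rather than real actuarial data, and a real rules engine would evaluate externally configured tables instead of hardcoded dictionaries.

```python
# Toy sketch of sequential factor-based rating. All numbers are
# illustrative assumptions, not real rates.

BASE_RATES = {"liability": 420.00, "collision": 310.00}
TERRITORY_FACTORS = {"urban": 1.25, "suburban": 1.00, "rural": 0.90}
AGE_BANDS = [(25, 1.60), (40, 1.00), (65, 0.95), (200, 1.20)]  # (max_age, factor)
VEHICLE_FACTORS = {"sedan": 1.00, "suv": 1.10, "sports": 1.45}
RECORD_FACTORS = {0: 1.00, 1: 1.20, 2: 1.50}  # at-fault accidents, capped at 2
DISCOUNTS = {"multi_policy": 0.10, "safe_driver": 0.05}

def age_factor(age: int) -> float:
    for max_age, factor in AGE_BANDS:
        if age <= max_age:
            return factor
    raise ValueError("no age band matched")

def rate_policy(coverage, territory, age, vehicle, accidents, discounts):
    premium = BASE_RATES[coverage]                # 1. base rate for the coverage
    premium *= TERRITORY_FACTORS[territory]       # 2. territory factor
    premium *= age_factor(age)                    # 3. driver age factor
    premium *= VEHICLE_FACTORS[vehicle]           # 4. vehicle factor
    premium *= RECORD_FACTORS[min(accidents, 2)]  # 5. driving record factor
    for d in discounts:                           # 6. applicable discounts
        premium *= 1 - DISCOUNTS[d]
    return round(premium, 2)

quote = rate_policy("liability", "urban", 34, "suv", 0, ["safe_driver"])
```

The point of the structure is that each step reads off a table; when an actuary changes a territory factor, only the table changes, never the evaluation logic.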

But here's what makes it powerful: next week when your actuary needs to adjust the territory factors based on recent loss experience, they make that change directly in the rules engine. They test it with sample data. They compare results. They deploy it. Total time: 15 minutes. No IT involvement. No code changes. No deployment window. No risk.

The Integration Layer: The Connector

This is where the rules engine talks to your PAS and other systems. It's primarily APIs and message queues – technical plumbing that makes everything work together seamlessly.

For real-time operations like quoting, it uses synchronous APIs. Agent requests quote → rules engine calculates → PAS creates policy → agent gets response. The whole round trip takes 100-200 milliseconds.

For batch operations like overnight renewals, it uses asynchronous message queues. PAS queues up 50,000 policies for renewal → rules engine processes them in parallel overnight → results flow back to PAS → renewal notices generated. The whole batch completes in 4-5 hours.
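In rough code, the overnight batch pattern looks like the sketch below. The renewal function, queue, and thread pool are stand-ins: a production setup would use a real message broker and API calls to the rules engine, so treat this purely as an illustration of the fan-out-and-collect shape.

```python
# Toy sketch of the asynchronous batch pattern: the PAS enqueues
# renewal jobs, worker threads stand in for message-queue consumers
# calling the rules engine in parallel, and results flow back.
# The renewal formula is a placeholder assumption.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def calculate_renewal(policy_id: int) -> dict:
    """Stand-in for a rules-engine call; real systems would call an API."""
    new_premium = 500.00 + (policy_id % 100)  # placeholder calculation
    return {"policy_id": policy_id, "renewal_premium": new_premium}

def run_renewal_batch(policy_ids, workers: int = 8):
    results = Queue()  # results flow back toward the PAS from here
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for outcome in pool.map(calculate_renewal, policy_ids):
            results.put(outcome)
    return [results.get() for _ in range(results.qsize())]

batch = run_renewal_batch(range(1, 1001))  # e.g. 1,000 of the 50,000 policies
```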

The beautiful thing about this layer is that it handles all the complexity of connecting old and new systems. Your 1998 mainframe PAS speaks COBOL and CICS? No problem. The integration layer translates. Your new billing system speaks REST APIs? Great. Same integration layer handles both. You don't have to teach old systems new tricks.

The Data Layer: Your Existing PAS

This is your current policy administration system, completely unchanged. It continues to do what it does well: store policy data, manage transactions, maintain history, handle documents, track everything.

The critical insight is that your PAS is actually really good at being a database and transaction processor. It's been doing that reliably for 20 years. The problem was never data management – it was business logic flexibility.

By moving the logic out, your PAS actually becomes simpler and more reliable. It has fewer responsibilities, fewer changes, less complexity. One CTO told me, "Our PAS is happier now. It just does what it's good at."

The Infrastructure Layer: Where It All Runs

This is the cloud infrastructure (or on-premise servers if you prefer) that hosts everything. Load balancers, redundant servers, backup systems, monitoring tools.

The rules engine itself is lightweight and scales linearly. Need to handle twice the load? Add another server and you get twice the capacity. Need high availability? Run multiple servers behind a load balancer. One carrier runs three servers across two data centers and hasn't had an outage in 28 months.

Most carriers deploy to cloud (AWS or Azure) because it's simpler and more cost-effective. But if you have on-premise requirements for regulatory or security reasons, that works too. The rules engine is infrastructure-agnostic.

Real Stories: How Insurance Carriers Actually Did This

Let me share three detailed stories from carriers who made this transition. These aren't sanitized case studies – these are real experiences with real challenges, mistakes, and learnings.

Nationale Nederlanden: From Market Laggard to Innovation Leader

Nationale Nederlanden is a major European insurer with over 5 million policies across 10 countries. In 2021, they had a serious problem. Their product launch cycle averaged nine months. They were launching 4 new products per year while competitors were launching 12-15. They were losing market share to nimble InsurTech startups who could respond to market opportunities in weeks, not months.

The breaking point came in Q2 2021. A competitor launched usage-based auto insurance – a product NN had been discussing internally six months earlier. But NN's version was still in IT development, stuck in the queue behind three other projects. By the time NN finally launched 11 months later, the competitor had already captured 12% market share in that segment. The estimated opportunity cost was €42 million in lost annual revenue.

The board demanded answers. Why were they so slow? Why couldn't they compete with startups that had a fraction of their resources?

The honest answer was painful: their business logic was imprisoned in multiple legacy PAS systems from the 1990s and early 2000s. Every change required IT. Every product required months of development. The company had become a victim of its own technical debt.

They evaluated their options seriously. Three major PAS vendors pitched full replacement: Guidewire, Duck Creek, and Majesco. The estimated cost ranged from €8 million to €12 million. Timeline: 30+ months. Risk: high. Success rate: about 50% based on industry data.

Then they looked at the rules engine approach. Cost: €420,000 for initial implementation. Timeline: 12 weeks to first product. Risk: low, because they could run parallel and roll back if needed.

They decided to run a proof of concept with both approaches simultaneously. For six weeks, they had teams working on both paths. The traditional PAS replacement team held requirements gathering sessions and created detailed specifications. The rules engine team actually configured two products and integrated them with the legacy PAS.

The POC results were definitive. The rules engine team had working software and measurable results. The PAS replacement team had documents and plans. Business users who tried the rules engine interface were productive after three days of training. They could create and modify pricing rules themselves. They could test changes in minutes. They could see immediate results.

The decision became obvious.

Implementation started in August 2021. The first two weeks were discovery – mapping their products, documenting business rules, designing the integration architecture. Weeks 3-6 were configuration and migration. They took their Auto insurance product line, extracted all the pricing rules from the legacy system, and configured them in the rules engine. They built the API integrations. They set up testing environments.

Weeks 7-8 were training and user acceptance testing. They trained 12 product managers and 6 actuaries. These were the people who would manage the rules going forward. Not IT. Business users.

The training was revealing. The first day, several senior underwriters were skeptical. "Business users managing technical rules? This will be a disaster." But by day three, after hands-on exercises and seeing how quickly they could make changes and test them, the skeptics became believers. One 20-year underwriting veteran said, "I can't believe it took us this long to do this. I should have been able to do this all along."

Week 9-10 were the pilot launch. They started with Netherlands only, routing just 10% of quotes through the new rules engine while keeping 90% on the old path. They monitored everything obsessively. Response times: good (under 50ms). Accuracy: 99.97% match with the old system. Errors: minimal, and those they found were fixed same-day.

By week 11, they were confident enough to go to 100%. By week 12, they added two more product lines.

The results were immediate and dramatic. The first new product launch using the rules engine took two weeks instead of nine months. Product managers were ecstatic. "We can actually experiment now," one told me. "Before, we'd spend months debating whether an idea was worth pursuing because the cost of being wrong was so high. Now we can just try it and see."

Within 12 months, they launched 17 new products compared to 4 the previous year. Revenue from new products jumped from €8 million to €34 million annually. IT costs decreased by 74% because the team was no longer maintaining business logic in code. Most importantly, the company's competitive position transformed. They went from market laggard to innovation leader.

The payback period? Three months. Total ROI over three years: 18,571%.

But the numbers don't capture the full impact. The cultural change was profound. Product managers felt empowered instead of frustrated. Actuaries could test pricing strategies in real-time instead of waiting for IT. The IT team, freed from endless maintenance work, started working on actual innovation projects.

The CIO told me something interesting: "We thought this was a technology project. It turned out to be a business transformation project. The technology was the easy part."

The Implementation Journey: What Actually Happens

Let me walk you through what implementation actually looks like, day by day, based on real carrier experiences. This isn't a sanitized project plan – this is what actually happens, including the challenges you'll face.

Phase 1: The First Eight Weeks

Weeks 1-2: Discovery and "Oh, We Didn't Know About That"

The first two weeks are discovery, and they're more interesting than you might expect. You're not just documenting requirements – you're uncovering years of accumulated business logic, workarounds, and tribal knowledge that nobody has ever written down.

One carrier I worked with started mapping their auto insurance pricing logic. They thought they had about 50 rating factors. After two weeks of interviews with actuaries, underwriters, and even long-retired employees they brought back as consultants, they discovered they actually had 340 rating factors, including dozens that nobody currently working at the company knew existed.

"We found business rules embedded in Excel macros that agents had been using for eight years," their project manager told me. "Nobody in corporate even knew those existed. If we'd done a traditional system replacement without this discovery, we would have lost all that logic."

This discovery phase isn't just documentation – it's archaeology. You're excavating layers of business knowledge. Bring in your oldest employees. Talk to retired people. Look in unexpected places. You'll be surprised what you find.

Weeks 3-6: Configuration and the "This Can't Actually Be This Easy" Moment

The next four weeks are when you actually configure the rules engine and migrate your business logic. This is where most carriers have their first "wait, this actually works" moment.

One CIO described it to me: "We spent the first week still expecting it to get complicated. We kept waiting for the catch. Day 8, our actuarial team configured their first complete rating algorithm. Day 9, they tested it. Day 10, they ran it in parallel with the production system and got 99.8% match. That's when we started believing this was real."

The configuration process is surprisingly visual and intuitive. You're not writing code. You're using decision tables that look like Excel spreadsheets, drag-and-drop rule builders, and visual workflow designers. Actuaries and product managers pick it up quickly because it maps to how they already think about the business logic.
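To make "decision tables that look like Excel spreadsheets" concrete, here is a toy first-match table evaluator. The column names and discount values are invented for illustration and do not correspond to any specific vendor's format; real engines add features like hit policies and rule versioning on top of the same idea.

```python
# Minimal sketch of spreadsheet-style decision table evaluation:
# rows are checked top to bottom and the first matching row wins.
# Table contents are illustrative assumptions.

DISCOUNT_TABLE = [
    # conditions ..........................................  outcome
    {"min_age": 50, "min_claim_free_years": 5, "discount": 0.15},
    {"min_age": 25, "min_claim_free_years": 3, "discount": 0.10},
    {"min_age": 0,  "min_claim_free_years": 0, "discount": 0.00},  # default row
]

def evaluate_table(table, age, claim_free_years):
    for row in table:
        if age >= row["min_age"] and claim_free_years >= row["min_claim_free_years"]:
            return row["discount"]
    raise ValueError("no row matched; table should end with a default row")

evaluate_table(DISCOUNT_TABLE, 57, 6)  # → 0.15
evaluate_table(DISCOUNT_TABLE, 30, 4)  # → 0.10
```

Because the table is just data, a product manager can add or reorder rows without touching the evaluation code, which is exactly why business users pick it up quickly.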

The API integration usually takes 2-3 weeks. Most of this is testing and validation, not actual development. You're building the connector between the rules engine and your PAS, which is straightforward if your PAS has any kind of API (and even if it doesn't, there are workarounds).

One technical architect put it this way: "The hardest part was convincing ourselves it could really be this simple. We're so used to every integration being a six-month nightmare that we didn't trust that this could actually work in three weeks."

Weeks 7-8: Training and the Skeptics Become Believers

User training usually happens in weeks 7-8, and this is where organizational dynamics come into play. You'll have early adopters who are excited. You'll have skeptics who think "business users can't handle this." And you'll have people who are nervous about change.

The training typically runs three days for product managers and actuaries, two days for underwriters, one day for agents. It's hands-on, not lecture-based. People are actually building rules and testing them in the sandbox environment.

Here's what consistently happens: the skeptics become the biggest advocates. I've seen it at every single implementation. The 25-year veteran underwriter who's convinced this will never work spends three days actually using the tool, discovers he can make changes that used to take six weeks in six minutes, and becomes your champion.

One company ran training sessions with their most skeptical employees first, specifically because they wanted to stress-test the system. "If we can convince Bob, who's been doing this for 30 years and hates all change, we can convince anyone," their HR director told me. Bob finished training and immediately started building new underwriting rules he'd been thinking about for five years but never bothered to suggest because he knew IT would never have time to implement them.

Weeks 9-12: Go-Live and Learning to Walk Before You Run

Go-live is always done gradually, and this is where the low-risk nature of the approach really shines. You don't flip a switch and suddenly everything runs on the new rules engine. You start small and expand systematically.

Week 9 typically starts with shadow mode – the rules engine calculates results but doesn't serve them to users. You're comparing the rules engine output against the production system output for every transaction, looking for discrepancies. This usually catches any remaining configuration issues.
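A shadow-mode harness can be surprisingly small. The sketch below uses two placeholder pricing functions; the pattern is what matters: serve the legacy answer, compute the new one in parallel, and record any disagreement beyond a tolerance for investigation.

```python
# Sketch of shadow-mode comparison: both engines price every request,
# only the legacy result is served, and discrepancies are collected.
# Both pricing functions are placeholders, not real logic.

def legacy_price(request: dict) -> float:
    return 500.00 + request["age"]       # stand-in for the old PAS logic

def rules_engine_price(request: dict) -> float:
    return 500.00 + request["age"]       # stand-in for the new engine

def quote_with_shadow(request, mismatches, tolerance=0.01):
    served = legacy_price(request)       # production still serves this
    shadow = rules_engine_price(request) # new engine runs in parallel
    if abs(served - shadow) > tolerance:
        mismatches.append({"request": request, "legacy": served, "engine": shadow})
    return served

mismatches = []
requests = [{"age": a} for a in (22, 40, 67)]
for req in requests:
    quote_with_shadow(req, mismatches)

match_rate = 1 - len(mismatches) / len(requests)
```

In practice the mismatch log, not the match rate, is the valuable artifact: each entry points at a configuration gap you can fix before serving real traffic.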

Week 10, you go live with 10% of traffic. Maybe just one product line, or just one region, or just one distribution channel. You monitor obsessively. Response times. Accuracy. Errors. User feedback.

One carrier went live with 10% on Monday morning and had their entire leadership team in a war room watching dashboards. By Monday afternoon, seeing that everything was working perfectly, they were already talking about accelerating the rollout. By Tuesday, they went to 25%. By Wednesday, 50%. By Thursday, 100%.

But here's the thing – they had the luxury of being cautious because the architecture supports it. If anything had gone wrong at any point, they could have rolled back with one click. That safety net changes the psychology of go-live completely.
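The gradual percentage rollout described above is often implemented as a hash-based traffic splitter, which is simple enough to sketch. The hashing scheme and config shape here are assumptions, not a description of any particular product; the key property is that one config value controls exposure and setting it back to zero is the "one-click" rollback.

```python
# Sketch of percentage-based routing with instant rollback. Hashing
# the quote ID keeps each request on a stable path across retries.
# Details are illustrative assumptions.
import hashlib

ROLLOUT_PERCENT = 10  # week 10: 10% of traffic; set to 0 to roll back

def use_new_engine(quote_id: str, percent=None) -> bool:
    pct = ROLLOUT_PERCENT if percent is None else percent
    bucket = int(hashlib.sha256(quote_id.encode()).hexdigest(), 16) % 100
    return bucket < pct

# Roughly 10% of a large batch of quote IDs should route to the new engine.
routed_new = sum(use_new_engine(f"Q-{i}", 10) for i in range(10_000))
```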

The Cost Reality: What You Actually Pay

Let me be completely transparent about costs, including the parts vendors don't like to talk about upfront. I'm going to give you real numbers from actual implementations because I think making informed decisions requires honest information.

Year 1: Implementation Costs

The first year is where most of the investment happens. Here's the typical breakdown for a mid-size carrier:

Software License: $120,000

This is the rules engine license for your first year, including production and non-production environments. This typically includes unlimited users, unlimited rule executions, and full support.

Some vendors charge per server or per transaction. Avoid this. You want predictable, execution-based pricing where you pay for the value you're getting, not for infrastructure.

Implementation Services: $180,000

This covers the solution architect (about eight weeks), implementation consultants (about twelve weeks), and project management. You're buying expertise and experience here. These are people who've done this before and can help you avoid common pitfalls.

One carrier tried to save money by doing this themselves without implementation services. They ended up spending six months instead of three and still had to bring in consultants to fix issues. False economy.

Rules Migration: $90,000

This is the business analyst time to extract and document your business rules, plus the configuration work to set everything up in the rules engine. This is typically 10-12 weeks of work.

The discovery phase here is critical. Don't skimp on this. The more thorough you are in documenting your business logic, the smoother everything else goes.

Integration Development: $80,000

Building the API connectors between the rules engine and your PAS, plus testing and validation. Usually 6-8 weeks of development time, depending on your PAS's integration capabilities.

If your PAS has modern APIs, this is straightforward. If you're dealing with a mainframe from the 1980s, it takes longer but it's still very doable.

Training: $40,000

Training your product managers, actuaries, underwriters, and IT team. This includes course development, delivery, materials, and follow-up support. Figure 2-3 days of training per user group.

This is an investment that pays off immediately. Well-trained users are productive users who don't need ongoing support.

Infrastructure: $30,000

Cloud hosting (AWS or Azure) including redundant servers, load balancers, monitoring tools, and backup systems. Or, if you're deploying on-premise, the server hardware and setup.

Contingency: $52,000

About 10% buffer for unexpected issues, scope adjustments, or timeline extensions. Most projects don't use all of this, but it's better to budget for it.

Total Year 1: $592,000

That's the real number. Not $400K that sneaks up to $800K with hidden fees. Actually $592,000 including everything.

Year 2-3: Ongoing Costs

After implementation, your ongoing costs are much lower:

Annual License: $180,000

This typically increases slightly from year one because you're expanding to more product lines and higher volume. But it's still predictable and includes all support.

Infrastructure: $36,000

Cloud hosting costs increase modestly as you scale, but not dramatically. The architecture is efficient.

Optimization/Enhancement: $40,000

Budget for some ongoing work to add new capabilities, migrate additional product lines, or optimize performance. This is optional but recommended.

Training for New Hires: $10,000

A few new employees per year need training. This is minimal.

Annual Ongoing: $266,000

So your three-year total cost of ownership is roughly $1.1 million. That's real money, and you should take it seriously. But compare it to the alternatives:

• Full PAS replacement: $14-17 million over three years

• Doing nothing: $8-10 million in lost opportunity over three years

Suddenly $1.1 million that generates $8-12 million in value looks pretty reasonable.
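The arithmetic behind that three-year figure is easy to check. A quick sketch using the numbers above:

```python
# Year 1 all-in implementation cost (from the breakdown above)
year_1_total = 592_000

# Recurring annual costs in years 2 and 3
annual_ongoing = {
    "license": 180_000,
    "infrastructure": 36_000,
    "optimization": 40_000,
    "new_hire_training": 10_000,
}
annual_total = sum(annual_ongoing.values())       # 266,000

three_year_tco = year_1_total + 2 * annual_total  # 1,124,000
print(f"Annual ongoing: ${annual_total:,}")
print(f"Three-year TCO: ${three_year_tco:,}")     # roughly $1.1M
```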

The Hidden Savings Nobody Tells You About

There are cost savings that don't show up in the initial business case but become real over time:

Reduced IT Maintenance: One carrier reduced their PAS maintenance team from 8 developers to 3 because they weren't constantly modifying business logic. Savings: $750K annually.

Faster Hiring: Another carrier found they could hire product managers and actuaries faster because the job didn't require "knowledge of our ancient legacy systems" anymore. Reduced hiring time by 40%.

Reduced Errors: Manual processes and complex deployments create errors. One carrier calculated they were spending $45,000 per failed deployment in rework. With the rules engine, failed deployments essentially stopped. Savings: ~$400K annually.

Opportunity Capture: This is the big one. Products you can now launch that you couldn't before. Markets you can now respond to in time. The financial impact here often exceeds all the other savings combined.

What Could Go Wrong: Honest Risk Assessment

Let me talk about the things that can go wrong, because every project has risks and you should know what to watch for.

Risk 1: User Resistance - "We've Always Done It This Way"

This is the most common risk, and honestly, it's more about change management than technology. You'll have people who are comfortable with the current process, even if it's painful, because it's familiar.

One carrier had a senior underwriter who was absolutely convinced that "business users managing rules" would be a disaster. He'd spent 25 years in the industry and had strong opinions about what could and couldn't work.

The solution? They didn't try to convince him with presentations. They put him in the training session and had him actually use the tool. By day two, he was building underwriting rules. By day three, he was advocating for the system to other skeptics.

The lesson: hands-on experience converts skeptics faster than any presentation.

Prevention: Involve business users from day one in the selection and design process. Don't present this as an IT project being done to them. Present it as a capability being built for them.

Response: If you encounter resistance after go-live, pair resisters with enthusiastic early adopters. Peer influence is powerful.

Risk 2: Integration Complexity - "Our Systems Are Special"

Every carrier thinks their systems are uniquely complex and difficult to integrate with. Most of the time, they're not. But occasionally, you do hit genuine complexity.

One carrier had a PAS that had been so heavily customized over 20 years that even the original vendor couldn't fully document how it worked. The integration took 16 weeks instead of the planned 8 weeks.

Prevention: Invest heavily in the discovery phase. Talk to your PAS vendor. Review existing integrations. If your PAS has integrated with anything else successfully (billing, claims, CRM), those integration patterns will work for the rules engine too.

Response: If integration hits unexpected complexity, add an API wrapper layer. This is a thin middleware that translates between the rules engine and your PAS, insulating you from the PAS's quirks. Takes 2-4 extra weeks but solves the problem permanently.
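A wrapper layer like that can be surprisingly small. Here's an illustrative sketch (the field names and legacy quirks are invented for the example) that translates a raw PAS record into the clean schema a rules engine expects, and the result back again:

```python
# Hypothetical legacy quirks: the PAS stores territory as a zero-padded
# string, encodes "no prior claims" as the magic value 99, and keeps
# monetary amounts in integer cents.
def pas_to_engine(pas_record: dict) -> dict:
    """Translate a raw PAS record into the rules engine's input schema."""
    claims = pas_record["PRIOR_CLAIMS"]
    return {
        "territory": pas_record["TERR_CD"].lstrip("0") or "0",
        "priorClaims": 0 if claims == 99 else claims,
        "premiumBase": pas_record["BASE_PREM_CENTS"] / 100.0,
    }

def engine_to_pas(result: dict) -> dict:
    """Translate the engine's response back into PAS field conventions."""
    return {"FINAL_PREM_CENTS": round(result["premium"] * 100)}
```

The point of the wrapper is that these quirks live in exactly one place, so neither the rules engine nor future integrations ever need to know about them.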

Risk 3: Scope Creep - "While We're At It..."

This is the killer of many projects. You start with a clear scope: extract pricing rules for auto insurance. Then someone says, "while we're at it, let's also do underwriting." Then, "and claims routing would be helpful." Then, "actually, can we add the commercial lines too?"

Before you know it, your three-month project has become a nine-month project that's over budget and behind schedule.

Prevention: Define phases explicitly. Phase 1 is pricing rules for personal auto. Period. Everything else is Phase 2, 3, or 4. You can plan for those phases, but you don't start them until Phase 1 is complete and proven.

Response: When someone suggests expansion, don't say no. Say "great idea, let's put that in Phase 2." Document it. Commit to doing it. Just not now.

Risk 4: Data Quality - "We Didn't Know About That"

You'll discover data quality issues during implementation. Every carrier does. Territory codes that don't match between systems. Policies with missing information. Records that violate business rules but somehow exist anyway.

One carrier found that 8% of their policies had territory codes that didn't exist in their official territory table. They'd been working around this in the old system with manual adjustments. When they implemented the rules engine, it rejected these as errors (correctly).

Prevention: Run data quality audits before implementation. Don't wait to discover problems during testing.

Response: Build data cleansing into your plan. Budget time for it. You'll need it. View it as an opportunity – you're improving your data quality, which has value beyond this project.
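A pre-implementation audit like the territory-code check above can be a one-page script. This sketch (field names and data are hypothetical) flags policies whose territory code is absent from the official reference table:

```python
def audit_territory_codes(policies, official_territories):
    """Return policies whose territory code isn't in the reference table."""
    valid = set(official_territories)
    return [p for p in policies if p["territory_code"] not in valid]

# Illustrative data: one of three policies references an orphaned code.
official = ["T01", "T02", "T03"]
policies = [
    {"policy_id": "P-1", "territory_code": "T01"},
    {"policy_id": "P-2", "territory_code": "T99"},  # not in the table
    {"policy_id": "P-3", "territory_code": "T03"},
]
bad = audit_territory_codes(policies, official)
print(f"{len(bad)} of {len(policies)} policies "
      f"have unknown territory codes")
```

Running checks like this against production extracts before implementation is what turns "we didn't know about that" into a planned cleansing task.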

The Risk That Isn't: Business Users Breaking Things

Everyone worries about this, but in practice, it almost never happens. The rules engine has built-in safeguards:

• All changes are tested in sandbox before production

• Approval workflows require manager sign-off

• Validation checks block illogical rule configurations

• Instant rollback if anything goes wrong

• Complete audit trail of all changes
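The "validation checks" safeguard, in particular, is just automated sanity checking that runs before a rule can leave the sandbox. A toy sketch (the rule shape and checks are invented for illustration, not any specific vendor's implementation):

```python
def validate_pricing_rule(rule: dict) -> list:
    """Return validation errors; an empty list means the rule may be promoted."""
    errors = []
    if rule.get("factor", 0) <= 0:
        errors.append("pricing factor must be positive")
    if rule.get("min_age", 0) > rule.get("max_age", 0):
        errors.append("min_age cannot exceed max_age")
    if not rule.get("approved_by"):
        errors.append("rule requires manager approval before deployment")
    return errors

# A rule with an inverted age band and no approval is blocked outright.
bad_rule = {"factor": 1.15, "min_age": 65, "max_age": 25, "approved_by": None}
print(validate_pricing_rule(bad_rule))
```

Because promotion to production is gated on an empty error list, a business user physically can't deploy a rule that fails these checks.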

In three years of implementations across 50+ carriers, I've seen exactly two cases where a business user deployed something problematic. Both were caught in testing before going to production. Both were fixed in under an hour. Zero production impact.

Business users, when given proper tools and training, are actually more careful than IT developers because they understand the business impact directly.

The Path Forward: How to Actually Start

You've read this far, which means you're seriously considering this approach. Let me give you the practical next steps to move from consideration to action.

Month 1: Internal Assessment

Before talking to any vendors, do your own assessment. Get your team together – IT leadership, business leadership, actuarial, underwriting, product management – and work through these questions:

Strategic Questions:

• What's our biggest pain point? Product launch speed? Pricing agility? IT dependency?

• What would success look like one year from now?

• What's our appetite for risk? For disruption?

• What's our budget reality?

Technical Questions:

• How old is our PAS? Who built it? What technology?

• What's our current integration architecture?

• Do we have APIs? Documentation? Integration examples?

• What's our data quality situation?

Organizational Questions:

• Who would manage the rules? Do they have capacity?

• What's our change management capability?

• Do we have executive support for business user empowerment?

• What's our track record with technology projects?

Document your answers honestly. This becomes your requirements baseline.

Month 2: Vendor Selection

Now you're ready to talk to vendors. But don't do a traditional RFP with 100 questions that everyone answers the same way. Do this instead:

Identify 2-3 rules engine vendors with strong insurance focus and good customer references. Have each one do a working session with your team where they:

1. Present their platform (1 hour)

2. Do a hands-on demo with your actual use case (2 hours)

3. Discuss integration with your specific PAS (1 hour)

4. Walk through implementation approach and timeline (1 hour)

Then check references thoroughly. Not just the references they give you – find carriers using their platform and call them directly. Ask real questions: What went wrong? What surprised you? What would you do differently? Would you choose them again?

Select one vendor to move forward with based on: technology fit, insurance expertise, implementation approach, pricing transparency, and gut feeling about the partnership.

Month 3-4: Proof of Concept

Don't go straight to full implementation. Do a proof of concept first. Six weeks. One product line. Working integration with your actual PAS.

The POC should prove:

• Integration works with your specific PAS

• Performance meets requirements

• Business users can be trained and productive

• The implementation approach is sound

If the POC succeeds, you have concrete proof this works for your organization. That makes the full implementation decision much easier.

If the POC reveals issues, you address them before committing to full implementation. Either way, you're making an informed decision based on experience, not just vendor promises.

Month 5-7: Phase 1 Implementation

If the POC succeeds, move to Phase 1 implementation: pricing rules for one product line. The timeline I described earlier (8-12 weeks) is realistic based on 50+ implementations.

Don't try to do everything at once. Pick one product line with good complexity (not your simplest, but not your most complex) and decent volume. Implement end-to-end. Prove value. Build confidence.

Month 8+: Expand and Optimize

Once Phase 1 is proven and stable, expand systematically:

• Phase 2: Additional product lines

• Phase 3: Underwriting automation

• Phase 4: Product configuration

• Phase 5: Advanced capabilities

Take your time. Build on success. Let your organization absorb the change.

One Year From Now: What Success Looks Like

Let me paint a picture of what your organization looks like one year after implementing this approach, based on real carrier experiences.

It's Tuesday morning. Your product manager has an idea for a new insurance product targeting gig economy workers. It's a market opportunity your competitors haven't addressed yet.

In the old world, she would have scheduled a meeting with IT to discuss feasibility. IT would have said "sounds interesting, put it in the backlog, we'll get to it in Q3." By Q3, the opportunity would be gone.

In the new world, she spends Tuesday afternoon configuring the product in the rules engine using templates and business logic she understands. Wednesday morning, she tests it with sample data. Wednesday afternoon, the actuarial team reviews and approves the pricing. Thursday, it goes live in one market as a pilot. Friday, you're collecting real data on actual customer response.

Two weeks later, based on the data, you've made three adjustments to the pricing and coverage. One month later, you've expanded to three more markets. Three months later, this product is generating $200K monthly premium and you're the first carrier in your market with this offering.

Your competitor finally launches their version six months later. You've already got 70% market share in this segment.

That's the difference. Not just faster – fundamentally different decision-making velocity.

Your IT team isn't frustrated anymore because they're not the bottleneck. They're working on actual innovation – building the mobile app, integrating new data sources, exploring AI applications. The work is interesting and meaningful.

Your business teams feel empowered. They can test ideas, respond to market changes, and optimize operations without waiting for IT. The culture shifts from "we can't do that" to "let's try it and see."

Your customers are happier because you can offer them products that actually meet their needs, priced accurately for their risk, with quotes delivered in seconds instead of minutes.

Your board is happy because you're launching more products, growing faster, and spending less on IT maintenance. The ROI is clear and measurable.

And your legacy PAS? It's still running, doing exactly what it's good at: storing policy data and processing transactions. But it's no longer holding you back.

That's what success looks like. Not someday after a massive replacement project finishes. One year from now.

Final Thoughts: Why Now Is The Right Time

If you're still reading, you're seriously considering this approach. Let me tell you why now is the right time to act, not six months from now or next year.

Your market is changing faster than ever. InsurTech startups are launching products in weeks. Major carriers are investing billions in modernization. Customer expectations are rising. Regulatory requirements are evolving. The pace of change isn't slowing down – it's accelerating.

Every month you wait is another month of competitive disadvantage. Every product you don't launch is lost revenue. Every market opportunity you can't respond to is market share erosion.

But here's the good news: the solution exists, it's proven, and you can implement it quickly. You don't need a multi-year transformation program. You don't need board approval for a $10M investment. You don't need to bet your career on a high-risk replacement project.

You can start small, prove value, and expand. You can see results in months, not years. You can reduce risk while increasing agility.

The carriers who are winning in this market aren't the ones with the newest PAS. They're the ones who can respond to opportunities faster than their competitors. They're the ones who've separated business logic from infrastructure. They're the ones who've empowered business users to control their own destiny.

You can be one of those carriers. The question isn't "should we do this?" The question is "how quickly can we start?"

Schedule that assessment call. Talk to references. Run the POC. But do something. Because while you're deciding, your competitors are moving.

________________________________________

Ready to explore this for your organization?

Schedule a Free 30-Minute Consultation

We'll discuss your specific situation, review your options, and help you understand if this approach makes sense for you. No sales pressure – just honest advice from people who've helped 50+ carriers navigate this exact decision.

Frequently Asked Questions: The Real Questions

Let me answer the questions I actually get asked by CTOs, CIOs, and CFOs when they're seriously considering this approach. Not theoretical questions – the practical, "what does this really mean for my organization" questions.

Question: We're already planning to replace our PAS in 2-3 years. Why should we bother with a rules engine if we're replacing everything anyway?

This is probably the most common question I get, and it's a good one. Here's my answer: that 2-3 year PAS replacement project is going to take 3-4 years and might fail. Even if it succeeds, you're going to lose competitive ground during those years while you're focused on the replacement instead of responding to market opportunities.

But more importantly, extracting rules now actually makes your future PAS replacement easier. If you do decide to replace your PAS in 2-3 years, having the business logic already external means you're migrating data only, not logic. That cuts the scope, cost, and risk of the replacement by 50-70%.

Think of it like this: you're not choosing between rules engine OR replacement. You're choosing rules engine first (gain agility now) followed by optional replacement later (if you still need it). Many carriers discover that once they've extracted their rules, the need for PAS replacement becomes much less urgent. The system works fine as a data store when business logic isn't locked inside it.

Question: Can business users really manage complex insurance logic? I mean, actually manage it without breaking things?

I understand the skepticism, because I had it too initially. But here's what I've learned from watching 50+ implementations: business users with proper training and tools are actually better at managing business rules than IT developers.

Why? Because they understand the business context. When an actuary adjusts a pricing factor, they understand the market dynamics, competitive positioning, and loss ratio implications. When a developer adjusts that same factor, they're just changing a number in code without understanding what it means.

The rules engine provides guardrails that make this safe: sandbox testing environments where everything is validated before going to production, approval workflows that route important changes through management, validation rules that prevent illogical configurations, instant rollback if something goes wrong.

In practice, after the initial 2-3 month learning period, business users require almost no support. They're self-sufficient. They make changes, test them, deploy them, and move on. IT oversight drops to essentially zero for routine changes.

Question: What's the catch? This sounds too good to be true.

I appreciate the skepticism. The "catch," if you want to call it that, is that this approach requires organizational change, not just technical implementation. You're shifting decision-making authority from IT to business users. That requires trust, training, and governance.

Some organizations struggle with this. If your culture is very IT-centric, where business users expect IT to handle all system changes, the cultural transition can be challenging. You need to be willing to empower business users and trust them with that responsibility.

The other consideration is that you're adding another system to manage. You now have your PAS plus the rules engine. That's two platforms instead of one. However, the tradeoff is worth it because the complexity shifts from "monolithic system that's hard to change" to "specialized systems that are each easy to manage."

And finally, there's an ongoing license cost. You're not eliminating costs, you're shifting from IT services costs (expensive) to software license costs (much cheaper). But it's still an ongoing cost that needs to be budgeted.

Those are real considerations. But compared to the risks of full PAS replacement or the costs of doing nothing, they're manageable.

Question: What if the vendor goes out of business or we decide we don't like them in 3 years?

This is a legitimate concern about vendor dependency. Here's what mitigates that risk:

First, your business rules are stored in standard formats that can be exported. You own your rules, not the vendor. If you ever need to switch vendors, your rules can be migrated to a different platform.

Second, the rules engine market is mature with multiple strong vendors. Competition is healthy, which protects you. If one vendor fails, others are available.

Third, the interface between the rules engine and your PAS is a standard API. Switching rules engine vendors doesn't require changing your PAS integration. You're not locked in at that level.

That said, you should obviously choose your vendor carefully. Look for financial stability, customer base, industry focus, and long-term viability. But the architecture itself doesn't create vendor lock-in the way a full PAS replacement does.

Question: Our PAS is a mainframe system from 1987 running COBOL. Can this really work with that?

Yes, absolutely. I've seen rules engines integrated successfully with mainframe systems from the 1980s running COBOL, CICS, and DB2. The age of your system doesn't matter – what matters is that it can accept input and return output somehow.

For mainframe integration, we typically use one of three approaches:

1. If your mainframe has any existing APIs or transaction protocols (CICS transactions, for example), we use those

2. We can integrate at the database level with triggers and stored procedures

3. We can use message queues like IBM MQ for batch or near-real-time integration
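With batch or queue-based mainframe integration, the messages themselves are often fixed-width COBOL-style records rather than JSON, so the integration layer needs a decoding step. A sketch (the 30-byte layout here is invented; yours comes from the actual copybook):

```python
# Hypothetical 30-byte record layout:
#   bytes  0-9   policy number        (left-justified, space-padded)
#   bytes 10-12  territory code
#   bytes 13-15  driver age           (zero-padded integer)
#   bytes 16-24  vehicle value, cents (zero-padded integer)
#   bytes 25-29  filler
def parse_mainframe_record(record: str) -> dict:
    """Decode one fixed-width record into the rules engine's input fields."""
    return {
        "policyNumber": record[0:10].strip(),
        "territory": record[10:13],
        "driverAge": int(record[13:16]),
        "vehicleValue": int(record[16:25]) / 100.0,
    }

raw = "POL0001234T42035001500000     "
print(parse_mainframe_record(raw))
```

The same function works whether the records arrive via a message queue, a nightly batch file, or a CICS transaction bridge; only the transport changes.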

The oldest system I've personally seen integrated was from 1983. Integration took 8 weeks, required some wrapper layer development, but has been running flawlessly for three years now.

The technical architecture is designed specifically to accommodate legacy systems. That's the whole point – making old systems behave like modern systems without replacing them.

Take Full Control of Your Product Logic

We provide a free Proof of Concept, so you can see how Higson can work with your individual business logic.