Lesson 05

Beazley

AI widgets are easy. Building AI into core business processes is not.

Lots of companies try to bring AI into their businesses. Most fail. Specialty insurer Beazley is blazing a trail in making sure that AI investments take root, and offering a blueprint for how firms everywhere can use AI to transform their operations.

In July 2024, The Economist diagnosed a new corporate ailment. AI ‘pilotitis’, it wrote, is ‘an affliction where too many small AI projects make it hard to identify where to invest.’ Afraid of being left behind by the AI hype, companies were launching a flurry of ballyhooed AI initiatives, then quietly abandoning them when the promised transformation failed to magically appear. ‘The incorporation of AI into business processes,’ The Economist drily noted, ‘remains a niche pursuit.’


One company determined to remain immune to ‘pilotitis’ was Beazley, one of the world’s leading specialty commercial insurers. Specialty insurance, though often overlooked in financial services, plays a vital part in keeping the world’s commerce moving. From oil tankers running aground, to Taylor Swift concerts being cancelled, specialty insurers are there to mitigate the most complex risks of operating in the global economy.

Beazley has been doing it for over 40 years. The FTSE 100 business is widely recognised as one of the most innovative, sustainable and successful operators in its industry. From their offices in London’s Bishopsgate and around the world, they write billions of dollars in insurance coverage every year across almost every sector of the economy. They insured the first private lunar lander on its journey to the moon, and Ukrainian grain ships making the perilous passage through the Black Sea. They’re particularly known for their offerings in cyber risk.

But although specialty insurance involves managing cutting-edge risks, the sector has not typically been seen as a hotbed of innovation. While life assurance or general insurance might generate quantities of structured information that are easily amenable to digital processing, in Beazley’s field every contract is different, and every claim is a unique set of circumstances. 

GenAI and the risk of ‘pilotitis’

The rise of Large Language Models (LLMs), with their ability to process, analyse, and organise vast amounts of unstructured data, presents specialty insurers like Beazley with an opportunity to upgrade their businesses. It opens the door to automating parts of the underwriting, customer service and claims processes, while enriching the careers of their highly skilled teams.

But it needs to be done with care. The user-friendly, extremely customisable nature of tools like ChatGPT makes it easy for companies to try out AI-powered proofs-of-concept for their business. Demonstrating such a process is one thing; actually embedding it in operations, making it both useful and usable, is quite another. Beazley took a considered, thoughtful approach, based on its track record of pioneering new areas of insurance and risk management (like cyber), and realised that to effectively use AI they would need to pilot, understand, test, risk manage and then build it out comprehensively.

Setting a commercial mandate for AI 

The role of specialty insurance is to help clients mitigate their toughest problems, their biggest risks. At Beazley, that emphatically includes technology. So they set about managing the risks of the project in a systematic and methodical way.


The main risk was one common to any serious change programme: a lack of clear leadership and direction. Muddled thinking can derail any project, but AI projects are particularly susceptible to it. Heads can get turned by the latest shiny demo. Experimentation happens in pockets around the business, driven by the cool things the technology can do, rather than the important problems that leadership care about, or that staff need solving.


Beazley set things up so there would be strong leadership and direction from the off. Staff were given a clear commercial mandate, unequivocal about how AI would be used to enrich the careers of Beazley’s highly talented employees:

1 - Increase the throughput of core operations.

2 - Support the teams’ decision-making.

3 - Help reduce the number of risk incidents across the business.

The simplicity and clarity of these three objectives provided a north star that guided decisions about everything from the overall shape of the programme down to day-to-day task prioritisation in project teams. Every day they asked themselves, ‘Does the thing I am planning to do today increase throughput or accuracy, or reduce risk?’ If so, do it. If not, don’t.


Identifying the right use cases

That took care of direction. The leadership half of the equation was addressed by centralising all of Beazley’s internal AI efforts into a single process. There would be one programme, with one set of priorities, captured in one roadmap, reporting to one leader.

That leader was Troy Dehmann, the genial South Carolinian who’s Beazley’s COO. He came to the world of insurance after a career mainly oriented towards finance, and admits, ‘I hadn’t heard of Beazley before I first came to interview. But I was drawn to the firm’s ambition to grow and modernise, their willingness to put investment into infrastructure and things like AI that were coming down the pipeline. It's also very employee driven, and puts the employees first.’

Troy makes the point that AI didn’t suddenly drop into the world with the advent of ChatGPT. ‘We were already doing data science three years ago when I joined, and it wasn’t something new. So it was never a question of “doing” AI or starting to use data science. It was actually about scaling it, and we’ve ramped that up over the past three years, and certainly since we started to work with Faculty.’


Troy knew he had to launch the programme with a bang, in order to start the process of winning hearts and minds across the organisation. His first objective was to deliver four operational AI processes into the hands of users as soon as possible. Though as it turned out, finding use-cases was the easy part. The difficulty was narrowing the choices down to four.

Faculty investigated where AI could deliver on the business’s priorities, and found there were literally hundreds of places that could benefit across the core workflows of underwriting, claims, and operations. In many respects, that’s not surprising. Ever since history’s first insurance policy was carved onto a Babylonian stele around 1750 BC, insurers have been processing written and numerical information. Nearly four thousand years later, as functional language models have developed to complement AI’s already mature numerical capabilities, the insurance sector is well placed to benefit from the technology.

But a longlist of hundreds is ultimately unhelpful if you’re trying to run a tight, focussed process that actually achieves something. So the Faculty team filtered the longlist using Beazley’s three North Star priorities, coupled with factors like technical feasibility, data availability, and the degree of change that would be needed to implement a given option. That brought them down to about forty options, any of which could have delivered substantial efficiency and quality benefits. That still wasn’t good enough.

A roadmap for collective AI capability

Troy was clear that the AI project shouldn’t be about automating narrow slices of existing processes. He understood that the real opportunity was to completely rethink how workflows should operate end-to-end, in light of what the AI could do. ‘If you don’t have a roadmap of where you’re going to use it, and you’re just deploying it haphazardly across your processes, you’re not going to recognise the full value.’ That meant instead of simply rushing ahead with the most compelling of the forty shortlisted use-cases, he wanted the Faculty team to connect up all the disparate AI threads, so that the value they generated was greater than the sum of the individual savings.


To take one example, there’s a narrow use-case for using AI to extract information from the many different types of document that the Beazley underwriting team receive from their brokers. On its own, it creates significant savings by reducing the need for the Underwriting and Ops teams to manually review all the submission application documents.


But that’s just scratching the surface. The real benefits come when you view that process as one of a series of interconnected AI use-cases, and reimagine the way the whole underwriting pipeline works. Once you start thinking along those lines, you can incorporate automation far more widely to produce a quote that is quicker than the old way of doing things, more accurate, and less likely to misprice the risk.


As this type of process re-engineering played out across all the core workflows of the business, conversations moved away from individual AI ‘use-cases’ and towards collective AI ‘capabilities’ and ‘workflows’. Instead of bolting on shiny widgets, Beazley was thinking about how to redesign the whole machinery that made its operations run.

And it turned out that a lot of that machinery used similar parts. Even across very disparate parts of the business, the underlying mechanisms involved similar types of generalised algorithmic tasks: functions like processing e-mail, extracting information from unstructured forms, or parsing long documents. Because each separate use-case had slightly different requirements, there was a temptation to build a custom version of the AI each time. But that would have created enormous duplication of effort, and a huge technical debt: it would have been hard, for instance, to roll out new LLM model improvements, because each custom implementation of the technology would have needed updating and testing separately.

‘That was the main challenge, but also the biggest success,’ says Laura Palacio Garcia, the senior data scientist on the project. ‘In the early versions of the utilities, updating them all could get messy. But we were able to build a unified front end, and shared repositories for both the front- and back-end code.’


Faculty helped Beazley break down the broad, general-purpose tasks into modular, reusable AI ‘utilities’. This provides a living library of technical components - custom to Beazley’s needs, but common across their AI programme - that can be tailored for each AI application. Data science teams can quickly stitch together use-cases from templates and pre-existing modules, rather than building from scratch. Not only has this made the rollout much faster, but it’s also made for better quality, greater simplicity, and further economies of scale across the whole AI catalogue. Which is great. But that wasn’t what kept Troy awake at night.
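The shape of this ‘utilities’ pattern can be sketched in outline. The Python below is purely illustrative - the registry, the component names and the stub functions are our assumptions for the sake of the example, not Beazley’s or Faculty’s actual code - but it shows the core idea: build and review a component once, then compose it into many different use-cases.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical shared library of AI "utilities". Each one is built and
# security-reviewed once, then reused across many use-cases.
UTILITIES: Dict[str, Callable[[str], str]] = {}

def utility(name: str):
    """Decorator that registers a reusable component in the shared library."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        UTILITIES[name] = fn
        return fn
    return register

@utility("extract_fields")
def extract_fields(document: str) -> str:
    # In a real system this step would call an LLM; here it is a stub.
    return f"fields({document})"

@utility("summarise")
def summarise(document: str) -> str:
    return f"summary({document})"

@dataclass
class UseCase:
    """A use-case is just a named pipeline of shared utilities."""
    name: str
    steps: List[str]

    def run(self, document: str) -> str:
        # Each step feeds its output into the next.
        for step in self.steps:
            document = UTILITIES[step](document)
        return document

# Two different business workflows stitched from the same parts.
claims_review = UseCase("claims_review", ["extract_fields", "summarise"])
submission_triage = UseCase("submission_triage", ["extract_fields"])
```

The payoff of this design is exactly the one the text describes: when an underlying model improves, only the shared utility needs updating and re-testing, and every pipeline that uses it benefits at once.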

Getting to a ‘yes’

AI applications - the good ones, at least - can’t be built in isolation from the wider infrastructure and security needs of the organisation. To really embed AI into the business, at the pace and scale that Beazley wanted, the existing systems had to be opened up for the digital equivalent of open-heart surgery. And that created risks.


Of course, Beazley knows all about IT risks. As a leading cyber insurance specialist, they advise clients on how to deal with those risks every day. As a specialty insurer, they know the very worst that can happen, because they’re on the hook when it does. And on top of all that, they’re a regulated financial services business. Security is paramount to everything they do.


As a result, the first AI use-case took Beazley months to move from concept to deployment. Most of the time was taken up in establishing new technical infrastructure, evaluating security requirements and assessing the right hosting architecture, all based on existing approaches for building traditional software applications.


‘This was a scary moment for us,’ Troy says. And he should know: as well as being COO, he also has responsibility for the Chief Information Security Officer. ‘There was a real possibility that the only way to deliver on our AI commitments would be to create an unacceptable level of security risks and exceptions - which is something we would never even entertain.’

To scale at pace - safely - Beazley needed to create an entirely new technical infrastructure. One that codified best practice security and governance into intuitive, one-click methods that made sure that the easiest and fastest way to deploy an AI application was also the most secure way. Here, the approach of creating a library of reusable AI components paid dividends again: once a module had been shown to be safe, it could be rolled out again and again for different use-cases without creating additional risk.

‘There’s a natural tension between security technology, and a group of AI engineers and data scientists who just want to run as fast as they can and do amazing things,’ says Troy. ‘But we’ve leveraged Faculty’s expertise to help us think through how we can remain secure and protect ourselves, but also take advantage of generative AI. So although it’s a challenge, it’s actually been a healthy tension between the two.’

But it’s one thing to engineer technology and infrastructure to be safe. It’s not going to matter unless you can convince the executives who are ultimately accountable to sign it off. Here again, Beazley’s integrated approach avoided some common pitfalls.

In many organisations, Risk and Governance functions sit outside of the core AI program team. At best, this slows things down; at worst, it creates an adversarial dynamic where the delivery teams feel thwarted in trying to get stuff done, and the Risk teams feel they have to rein in their gung-ho counterparts. At Beazley, Troy avoided this trap by creating a cross-functional group of leaders from the company’s Executive Committee. It brought together the people responsible for technology, modernisation, operations, risk and security, chaired by Troy, and put them at the very heart of the AI program. 

Their remit was clear: they were part of the team getting AI rolled out, not outsiders whose job was to govern it from a safe distance. Each member of the group was empowered to say ‘no’ until they were satisfied that their area of risk was properly managed, but ultimately their mandate was ‘getting to a yes’. Processes, procedures and technology governance were also modernised, to allow them to appropriately address the iterative, uncertain and flexible nature of AI projects in comparison to traditional software development.

As a result of this transformation, Beazley can now take a new AI use-case from concept to production in a matter of weeks - and be confident that it’s been implemented safely and securely end-to-end. But can they be equally confident that people are using it?

Enriching day-to-day work

All of this technological innovation would mean nothing if it wasn’t genuinely supporting Beazley staff to improve the way they do their jobs. Ensuring that happens takes more than just building some fancy tools and rolling them out to users. In AI, ‘build it and they will come’ never works. Users always need to be part of the solution, or else the company faces a new variant of ‘pilotitis’.


To solve this, the AI programme at Beazley has a dedicated Business Engagement team that works daily with sponsors, testers and users across all departments to identify and design new AI opportunities, and to support the adoption and usage of in-flight ones. This team has been a critical part of making sure that the company builds the right things with AI - not just strategically, but also in terms of how these things integrate with user workflows.


One example of this, in the Claims department, involves the review and processing of the long legal documents that can form part of an insurance claim. The AI model that analyses the documents works well, but it isn’t perfect (no model is), and that threatened to cause major issues with trust and adoption. If Claims managers had to review the whole document to find out if the model had come up with the right answers, it wouldn’t be worth them using the system at all.


So the Business Engagement team worked hand-in-hand with the Claims team to design an approach that instantly highlights where in the document the model has sourced each answer from. This is done in a way that integrates directly into the workflow, and was extensively tested with users to get it right. And because all the apps have a common interface, an employee who’s used to using one of them can quickly and intuitively pick up another.

As a result of this and numerous examples like it, users feel listened to. The technology works for them, because it’s been built to fit with how they get stuff done. ‘A lot of times, when you have new technology or sparkly things, people sort of engage, and then they disengage,’ says Troy. ‘But we haven’t had people disengage from it. We’ve had a hunger for more, people wanting to understand it better.’

Since Troy asked for four operational use-cases, hundreds of people across Beazley have started using dozens of interconnected AI applications in their day-to-day work. Hundreds more employees will be joining them by the end of 2024. 

AI supports Claims Managers to review and analyse risk in complex legal documents. It’s used by the Operations team to automatically filter and triage incoming submissions from brokers in real-time, and it helps underwriters quickly understand coverage requirements based on myriad unstructured data inputs. Troy estimates that AI has created millions of dollars’ worth of capacity for Beazley as a business, but says that this is just part of the value.

‘This isn’t about cost-saving for us,’ says Troy. ‘It’s about enabling us to scale our business faster and more effectively - allowing our specialist teams to focus on the work they are best at and enjoy the most. It's about enriching our workforce, the day to day work that our people are doing. It's supercharging them, and making us better at what we want to do.’ And he’s confident that there is much more to come. 

‘Ultimately, AI is not just changing how we deliver our offering to our customers efficiently, but also making us fundamentally reimagine what the future of those offerings could be, to help us deliver the best service in the market.’

With the work Beazley has done to rewire their own business around AI, they’re perfectly poised to succeed.

The lesson in summary
AI widgets are easy. Building AI into core business processes is not.
  • AI party tricks, like meeting summaries and document search, can seem exciting. But they tend to be limited to the periphery of what matters. They are not going to change the course of your organisation.
  • Real impact comes when AI is set up to push forward some element of your overall business strategy and optimise the core processes that define your organisation. If you can’t articulate clearly how your AI investment is going to contribute towards one of the top level KPIs of the business, it is unlikely to keep people’s attention long enough to make a difference.
  • Building AI into the core of a business requires much more than technology. To achieve your objectives you need to consider the wider question of how change is driven in your organisation, and set things up accordingly. You also need to calibrate expectations around serious change, rather than technology quick fixes.
  • Cross-cutting executive sponsorship is important. AI solutions need to be pulled by business functions and the users working inside them at least as much as they are pushed by the technology organisation.
  • At the working level, well-run AI programmes keep a number of elements in sync. They require focus on users from the start to the finish. They require governance to be carefully set up and navigated. They require infrastructure foundations to be laid, business cases to be robust and benefits measured. And they require the development and maintenance of technology.

Did you enjoy this story? There are nine others just like it, told from the perspectives of nine of our other inspiring customers, in the full book ‘Ten Lessons From Ten Years of Applied AI’.
