Today’s risk isn’t science. It’s clinical trial execution under real-world constraints

Steph Skeet, Solutions Director for Faculty Frontier™, explains what keeps life sciences organisations trapped in POC graveyards, and the opportunity areas to counter these failure modes.

2026-02-17
Frontier

The life sciences industry is facing an investment paradox. Its AI market is projected to grow from around $4 billion in 2025 to over $25 billion by 2030. But almost all of that (~95%) is concentrated on drug discovery, to speed up identifying targets and generating compounds, while development is expected to receive only around 5%.

The absurdity here is that development remains the largest bottleneck on R&D ROI. In the US, the attrition-adjusted cost to develop a single novel asset is estimated to be as high as $2.8 billion – with 60-70% of total costs tied up in development. 80% of trials fail to finish on time. And of course, 90% of assets entering clinical trials fail entirely.

Without rebalancing investment more appropriately across the asset lifecycle, we face a “Ferrari engine with bicycle wheels” problem: a supercharged discovery engine feeding into a 1990s execution system. That system runs on many thousands of interconnected decisions across vast development organisations, and is currently marked by heavy fragmentation and manual, inefficient workflows.

Looking ahead, we cannot continue to underestimate how much poor execution and methodology failures (e.g. design decisions) harm good science. Pushing even more assets through a broken pathway will only amplify cost and congestion. We need to build up execution muscle and operational discipline, knowing that when the rubber hits the road, site performance matters just as much as the biology.

This is exactly what we are concentrating on with Faculty Frontier. Using in silico decision simulation, we work with pharma organisations to build computational twins of their clinical trial programmes and portfolios, focused entirely on operational excellence.

We recognise that development is at once the biggest constraint and our single most controllable lever for bringing new molecules to market faster.

OK, so let’s use AI to fix development. But why aren’t we making progress?

Despite the investment imbalance, many of us are still very excited about the opportunity for AI to solve problems across development: to reduce uncertainty, speed up trials, minimise supply wastage, optimise the allocation of resources, and so on.

But more often than not, AI initiatives fail and transformation leaders find themselves staring at a graveyard of failed POCs.

So why is this?

Over the past couple of years I’ve spoken to people in development at each of the top 20 global pharma companies, and I’ve developed some hypotheses, centred on three common failure modes that keep popping up.

Failure mode one: starting with the technology, not the business problem

This is probably the most common cause of failure. 

I can’t count the number of times I’ve heard “we need an agent” or “we want a GenAI tool” without any clarity on who the technology is for; which specific decision process it would support; which KPI it is meant to influence; or how ROI would be measured.

Sometimes, the initiative is even framed as part of a broader “GenAI strategy” or “agentic strategy”: strategies seemingly set up to identify all the different ways one can release GenAI or agentic swarms into a business to tick an innovation box.

But just because we can do something doesn’t mean we should. These strategies are often totally disconnected from the central business strategy and, as a result, from business value. Hence the phrase “AI is everywhere except the bottom line.”

Failure mode two: deploying point solutions that reinforce silos

Across the industry, there is a proliferation of point solutions that are deployed into discrete parts of the organisation. While many solve a local problem, this pattern ends up reinforcing silos and fragmentation. 

This fragmentation typically shows up across three axes.

  1. Across time: with no way of coherently tracking a study’s progression across phases and from planning into execution. For example, I’ve often seen different recruitment rate logics used in study design vs execution, so you can’t even make apples-to-apples comparisons of actuals against plans.

  2. Across functions: clinical development, clinical operations, regulatory, safety, supply, and other groups often operate with different systems, different incentive structures, and different logics, making it hard to manage trade-offs between KPIs that are often inherently in tension with one another. The classic example in study design is optimising inclusion / exclusion criteria to drive clinical outcomes – but not being able to understand in real time how this may negatively impact operational outcomes and the available patient population.

  3. Across hierarchies: portfolio, programme, trial, country, site, and patient level decisions are interconnected. But predictions often don’t roll up into a coherent picture of performance, meaning we can’t work out how decisions trickle down from portfolio → trial, or across hierarchical groups (e.g. how intervening on one trial might impact its peers).

Unsurprisingly, all this fragmentation creates incredible coordination challenges, driving end-to-end bottlenecks.

Failure mode three: implementing technology without rewiring the process

Analogous to the points I’ve made about trials failing – even if the science is good – powerful AI can fail not because the technology doesn’t work, but because making a difference in the real world (versus in POC land) requires careful execution and roll-out.

This often necessitates a fundamental change in the underlying decision process it’s trying to support. For example, people’s roles and responsibilities may need to change; handoffs between teams might look different; decision rights might need to shift; linear processes may need to become non-linear and iterative. And all of this change needs to be managed deliberately and carefully.

But too often this business rewiring isn’t done, or is left as an afterthought.

And so what invariably happens in these complex systems is that organisational antibodies rise up and reject the technology as a foreign organ. The initiative fails, while the organisational body is left fatigued and frustrated.

There are three specific opportunity areas to counter the three failure modes

Opportunity one: decision intelligence as a discipline (to counter starting with the tech)

In highly competitive, uncertain and unpredictable environments like clinical trials, the determining factor for success is our ability to make good decisions.

Decision Intelligence is a discipline that fundamentally moves us away from the world of Business Intelligence: from static BI dashboards that tell us what is happening or has happened in the past, towards a view of what will happen in the future, why, and what we can do about it. The objective is to help end users complete decision loops, improving both the quality of those decisions and the velocity with which we spin through these cycles.

It provides us with a robust methodology for identifying what technology is required to drive business value. Rather than asking “what data do we need” or “what model should we build”, we work back from the business decision we are trying to optimise. The steps to follow are:

  • Start with the KPI that matters, for example trial duration;

  • Identify the decision process that most influences that KPI, for example early stage study design;

  • Break that decision process down into its anatomical parts of observation, analysis, decisioning and execution;

  • Determine what logic is required to power the loop, which may include AI models, optimisation, rules, heuristics, and human judgment;

  • Finally, define the data required and the shape it needs to be in.

This process allows us to be precise about exactly what data and AI work is required, tightly coupling all effort to business value. It also marks a shift towards composite AI: an approach that encourages us to combine multiple AI techniques rather than relying on a single method like GenAI, so that we’re always selecting the right tool for the job.
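To make this concrete, here is a minimal Python sketch of how such a decision loop might be represented, working back from a KPI to the logic and data that power it. The names (DecisionLoop, observe, analyse and so on) and the placeholder logic are illustrative assumptions made for this article, not anything from Faculty Frontier.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative sketch only: names, structure and numbers are assumptions
# made for this article, not the Faculty Frontier implementation.

@dataclass
class DecisionLoop:
    """A business decision loop, defined by working back from a KPI."""
    kpi: str                         # e.g. "trial duration (months)"
    decision_process: str            # e.g. "early-stage study design"
    observe: Callable[[], Dict]      # observation: pull the current signals
    analyse: Callable[[Dict], Dict]  # analysis: models, heuristics, rules
    decide: Callable[[Dict], str]    # decisioning: choose an action
    execute: Callable[[str], None]   # execution: push the action out
    data_requirements: List[str] = field(default_factory=list)

    def run_once(self) -> str:
        """Spin the loop once: observe -> analyse -> decide -> execute."""
        signals = self.observe()
        insights = self.analyse(signals)
        action = self.decide(insights)
        self.execute(action)
        return action

# Toy wiring for a study-design loop; every number and rule is a placeholder.
loop = DecisionLoop(
    kpi="trial duration (months)",
    decision_process="early-stage study design",
    observe=lambda: {"eligible_patients": 1200, "sites": 45},
    analyse=lambda s: {"months_to_enrol": s["eligible_patients"] / (s["sites"] * 2.5)},
    decide=lambda a: "relax exclusion criteria" if a["months_to_enrol"] > 9 else "keep design",
    execute=lambda action: print(f"Recommended action: {action}"),
    data_requirements=["site-level recruitment rates", "eligibility criteria"],
)
loop.run_once()
```

The point of the sketch is the ordering: the KPI and decision process come first, the loop stages are explicit, and the data requirements fall out at the end rather than being the starting point.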

Opportunity two: computational twins as an enabler (to resolve fragmentation caused by point solutions)

To power a decision loop end-to-end, we typically need a number of AI models connected together within a representation of the wider decision system.

This is where a computational twin comes in. 

A computational twin is a virtual representation of an operational decision process, such as a clinical trial. It brings together data, predictive models, optimisation algorithms and business rules into a coherent dynamic simulation of how work and decisions flow, end-to-end. 

It provides a virtual environment in which we can test the impact of different decision strategies in silico. The core problem it solves is that it moves us away from disconnected point solutions towards interconnected decision simulation, helping us understand how different decisions might propagate across complex value chains (across time, functions and hierarchies).
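As a toy illustration of in silico testing, the Python sketch below compares intervention strategies for an under-recruiting trial against a crude Monte Carlo recruitment model. Every parameter and the recruitment model itself are invented for illustration; a real computational twin would combine far richer data, predictive models and business rules.

```python
import random

def simulate_enrolment(n_sites, patients_per_site_month, target, max_months=36, n_runs=500):
    """Average number of months needed to hit the enrolment target (toy model)."""
    durations = []
    for _ in range(n_runs):
        enrolled, month = 0, 0
        while enrolled < target and month < max_months:
            month += 1
            # Each site recruits a noisy, non-negative number of patients per month.
            enrolled += sum(
                max(0, round(random.gauss(patients_per_site_month, 1.0)))
                for _ in range(n_sites)
            )
        durations.append(month)
    return sum(durations) / len(durations)

# Compare three candidate decision strategies against the same toy model.
baseline   = simulate_enrolment(n_sites=40, patients_per_site_month=2.0, target=1000)
add_sites  = simulate_enrolment(n_sites=55, patients_per_site_month=2.0, target=1000)
boost_rate = simulate_enrolment(n_sites=40, patients_per_site_month=2.6, target=1000)

print(f"Do nothing:      {baseline:.1f} months")
print(f"Add 15 sites:    {add_sites:.1f} months")
print(f"Boost site rate: {boost_rate:.1f} months")
```

Swapping in a different recruitment model or a different set of candidate interventions is then a local change, which is what makes the twin a reusable testing environment rather than another point solution.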

There are a number of areas where computational twins are gaining particular traction. These include:

  • Trial planning and design: simulating the impact of different study design candidates to optimise trade-offs across clinical, operational, and commercial outcomes.

  • Trial execution and next best action: building a twin of a live trial, simulating market dynamics, forecasting performance, and evaluating intervention strategies when delivery goes off track.

  • Portfolio management: simulating a portfolio across phases and therapeutic areas to identify systemic underperformance and test portfolio level decisions such as resource allocation and prioritisation.

While each decision application should prove valuable in its own right, the real unlock comes when these are orchestrated within a common framework, underpinned by a shared simulation environment and a shared logic for describing the progression of an asset.
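As a hint of what that roll-up might look like, the short sketch below scores a handful of hypothetical trials with the same progression logic so the results aggregate into one portfolio view. Trial names, parameters and the naive duration formula are all assumptions made for illustration.

```python
# Hypothetical sketch of rolling trial-level estimates up into one portfolio
# view. Trial names, parameters and the formula are invented for illustration.

portfolio = {
    "ONC-201 (Phase II)":  dict(n_sites=40, rate_per_site_month=2.0, target=600),
    "IMM-114 (Phase III)": dict(n_sites=90, rate_per_site_month=1.5, target=2500),
    "NEU-052 (Phase II)":  dict(n_sites=25, rate_per_site_month=2.8, target=400),
}

def expected_enrolment_months(n_sites, rate_per_site_month, target):
    """Deterministic back-of-envelope estimate; a real twin would simulate uncertainty."""
    return target / (n_sites * rate_per_site_month)

# Because every trial is described with the same progression logic, the
# trial-level numbers roll up into a single, comparable portfolio picture.
results = {name: expected_enrolment_months(**params) for name, params in portfolio.items()}

for name, months in sorted(results.items(), key=lambda item: -item[1]):
    print(f"{name}: expected enrolment duration of {months:.1f} months")
```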

Opportunity three: rewiring the business as a practice (to drive adoption)

At its core, this is an unsolved problem. No pharma organisation has yet fully reinvented its operating model to truly capitalise on AI’s progress. With over 10 years of experience in AI at Faculty, we’ve learnt some lessons about what it might take to rewire the machinery successfully:

  • Engage in radical first-principles thinking: Given the exponential pace of technical development, we need to explore totally reinventing how we think, do work and collaborate with each other – breaking decision components down to their most basic fundamentals before piecing them back together within a new, more efficient AI-powered vehicle.

  • Build in increments that are individually valuable but collectively transformative: Trust is established bit by bit. Identify a single decision process to rewire first, or even a tractable subsection of that decision, where value can be demonstrated quickly and credibly. Each increment should stand on its own, but it must be built with the explicit intention of connecting it to other upstream or downstream decisions, so that over time entire end-to-end processes can be rewired.

  • Create cross-cutting executive sponsorship: AI programmes can no longer be stuck in silos or in IT. They must be supported by a cross-functional group of executives, bringing together leaders from across the business, IT and data science and putting them at the heart of the AI programme. Their remit is to “get AI rolled out”, not just to watch or govern it from a safe distance.

Final thoughts

If we are to make this a numbers game, the measure of success when it comes to building AI systems shouldn’t be the number of POCs launched, or the number of AI models built, but rather the number of high-priority business decisions that have been optimised – shifting the whole distribution of decision-making quality upwards in the places that matter most.

The key is to cut through the noise, isolate the decisions you need to get better at, and focus on these with intention.

If you’re exploring Decision Intelligence and AI-powered simulation for life sciences, you can access my latest keynote and see why we’ve been recognised in Gartner’s inaugural Magic Quadrant for Decision Intelligence platforms here.

Steph Skeet
Solutions Director for Faculty Frontier™
Steph Skeet is Solutions Director for Faculty Frontier. Her experience in healthcare and life sciences includes working with large pharma organisations to build computational twins for clinical development; leading Faculty’s award-winning COVID-19 Early Warning System; and helping to develop the UK Government’s strategy for a National Biosurveillance Network. She holds an MA from the University of Cambridge.