The problem isn't AI. It’s that decisions are invisible.
Our Head of Product, Tom Oliver, explains why Decision Intelligence is emerging as a foundational discipline for the enterprise - and how the Faculty Frontier™ platform is bringing it into practice.
Over the past decade, Faculty has helped deploy applied AI systems inside some of the most complex enterprises in the world - pharma, healthcare, supply chain, national infrastructure. Despite the differences in sector and scale, we’ve found the underlying issue is remarkably consistent.
Take a large pharmaceutical company I recently worked with.
They had invested heavily in AI. Analytics platforms, forecasting models, copilots, dashboards. A world-class data science team. And yet, when I asked a simple question - “Did the decisions you made last quarter on trial design actually improve your time-to-market?” - nobody could answer it.
Not because the data doesn't exist. Because the decision itself was never captured.
There's no record of what was decided, what alternatives were considered, what actions followed, or whether those actions produced the outcomes that were expected. The decision disappeared.
This is a common pattern I see. The bottleneck isn't AI-powered intelligence. It's that decisions - the moments where we put our hand on the tiller, commit resources, and set a specific course of events in motion - remain entirely invisible.
Everyone wants to be Palantir. Almost nobody will be.
A recent a16z piece, "The Palantirization of Everything", captured something I've been close to for years. Startups and established players alike are copying Palantir's forward-deployed model: embed engineers with customers, build bespoke solutions, sell outcomes not software licenses. Job postings for "forward-deployed engineers" are increasingly common, and the pitch deck du jour reads: "We're basically Palantir, but for X."
I understand the appeal. Enterprise AI has a production problem. A huge fraction of AI projects stall before production, and the Palantir model - small elite teams parachuting into messy environments to make things actually work - is a compelling response.
But as the a16z piece argues, most companies copying this model will fail. The reason is structural: without real platform primitives underneath the bespoke work, you aren't "Palantir for X." You're an expensive services business with a software valuation multiple - while it lasts.
The services trap is real, and it catches nearly everyone who copies the aesthetic without building the underlying architecture and infrastructure - the only things that compound. To win, you need them.
What I find more interesting, though, is what the good versions of this model are missing.
Forward-deployed teams can help solve the deployment problem. They get models into production. They understand the bespoke needs and complexities of the enterprise environment. They wire together and plug gaps in messy and sparse data. They write and ship custom workflows and components where needed. That matters…but it's not the core.
The truly hard question is: what happens after a data- and AI-intensive application produces an insight?
The missing layer
In all organisations, the true chain from analysis to outcome looks like this:
Analysis → Decision → Action → Event → State → Metrics
Analysis produces insight. A decision commits the organisation to a course of action. That action triggers events in the real world. Those events change the state of things. And that changed state shows up - eventually - in the metrics the business cares about.
The problem is that almost every tool in the enterprise stack addresses one end or the other. Analytics and AI handle the left side. KPI dashboards and reporting handle the right. The middle - the decision itself, the actions that follow, whether those actions were executed as intended, whether they produced the expected effect - is all largely invisible.
It lives in people's heads, in the air of boardroom conversations, and in scribbled meeting notes. Yet it's these fragmented traces of decisions and their logic that we need most.
This is why that pharma company can't answer my question. They have brilliant analysis. They have comprehensive metrics. They have no infrastructure connecting the two.
This isn't a niche problem. It's the central coordination challenge of any complex organisation. In life sciences, when commercial launch strategy influences trial design, which shapes supply chain’s manufacturing scale, which in turn affects regulatory submission timing - and none of those connections are visible or tracked - you get organisations that are locally optimised and globally incoherent.
A category emerges
This is why Gartner's inaugural Magic Quadrant (MQ) for Decision Intelligence Platforms matters. Not because analyst validation defines reality. But because it signals that the market is starting to recognise this missing layer as a distinct category - not just another feature bolted onto analytics or workflow tools.
It's worth noting what the MQ reveals by omission, too. Palantir - the company everyone in enterprise AI is trying to emulate - isn't in it.
That's not a slight on Palantir; they've built something extraordinary. But it tells you something important: forward-deployed AI and decision intelligence are different things. You can be brilliant at getting models into production in complex environments, and still not have infrastructure for making the decisions those models are meant to inform.
The Palantirisation trend and the DI category are addressing different layers of the same problem. They overlap - including with what we’re doing with Faculty Frontier™ - but they’re not the same thing.
What we built, and why
This is where I should declare my interest. Faculty Frontier, the Decision Intelligence platform for which I am Head of Product, was named a Visionary in that MQ. But the reason I'm writing this piece isn't to celebrate the recognition. It's because the thesis behind Frontier is the argument I've been making above, and it's one I think matters well beyond our own product.
We built Frontier around a few convictions that run against the grain of how most enterprise AI gets built:
Decisions, not models, are the unit of value.
Most AI platforms organise around data pipelines, model training, or agent orchestration. Frontier organises around the decision. What's being decided, what analysis informs it, what actions follow, and whether those actions produced the intended effect.
The Analysis → Decision → Action → Event → State → Metrics chain isn't a diagram in our pitch deck. It's the architecture that lets organisations test decisions before committing to them; simulate downstream impact across functions and time horizons; deploy decisions into workflows; and measure whether they actually moved priority KPIs.
Opinionated beats bespoke.
The a16z piece makes this point well: without real platform primitives, you end up rebuilding from scratch for every customer.
We've deliberately built opinionated archetypes - reusable frameworks for common decision patterns in drug development - rather than offering a blank canvas. This is harder to sell in the short term, and far more valuable in the long term. The discipline of resisting unlimited customisation, of contributing back to core, can be painful. But it's the difference between a platform and a permanent consulting engagement with a nice front-end.
The decision system of record builds real advantage.
Analytics tools evolve. Models improve. Agents get more capable.
But the structured representation of how your organisation makes decisions - how it tests alternatives, commits to action, measures impact, and refines its approach - becomes an enduring asset. One that grows in value as more decisions flow through it.
Over time, it reveals patterns, aligns functions, and increases decision velocity and quality. That layer barely exists in most enterprises today. We think it will become foundational for organisations that want to scale faster and continuously improve how they operate.
What this means
The nature of consulting, and of enterprise tech more broadly, is being fundamentally reshaped.
CEOs and their boards still need help navigating complexity. But they need a different kind of help: not more slides and advice, but infrastructure that turns an organisation's decision making into scalable digital artefacts. A compounding asset that captures each decision's context, its outcomes, and the gap between what was expected and what actually happened - the ultimate fuel for continually learning how to make the best possible decisions.
The companies that win in this next phase won't be the ones with the best models or the most engineers on planes. They'll be the ones that crack the decision layer - that build the connective tissue between analysis and outcomes at enterprise scale.
That's what we've built. The wave is here.