We’ve built a lot of AI technology over the years at Faculty. This has included many triumphs: highly sophisticated systems that delivered huge impact for our clients. But we’re not ashamed to say that we have had our fair share of misses too. Everyone who has been doing AI as long as we have carries at least a few scars.

When we reflected on the standard failure modes, some familiar archetypes emerged: projects where the right use case proved hard to pin down; others in which the scope felt loose, or kept shifting; and those that delivered analysis that was “interesting”, but not obviously essential to anything in particular.

We try hard to build learning into everything we do (the clue is in the name…) and have spent a lot of time figuring out why projects underdeliver for these reasons, and how to fix that.

This is how we came up with what we call “Decision Loops.”

Decision Loops are the methodology we use to make sure that the technology we build is tightly scoped, and connected to business outcomes.

Developing and refining this methodology, and being disciplined about applying it to our work, has been very beneficial to us. It makes it easier for us to cut through client problems quickly, and design solutions that work. And it has all but eliminated projects that underdeliver because they weren’t set up properly to begin with.

The methodology is built on three axioms:

  1. First, when you are looking to improve business performance, it never works to start with a shiny new technology and then look for places to apply it. Instead, start with a business problem that matters to you, and work back from that to the technology you need to build.
  2. Second, you need a methodology that maps out the transmission mechanism by which the technology you build effects the outcome you seek. This needs to be grounded in an understanding of the cause-and-effect relationships that drive the outcomes, and of how you can intervene to affect them. Unless you can lay this out step by step, you are not ready to start building technology.
  3. Third, human decision making is the most impactful point of focus for AI technology. Decisions sit at the core of most business processes: Should I buy or sell? Which patient should I prioritise? Do I need to give this customer a discount? Which machine should I fix next? Decision making is amongst the most important elements of most roles. Technology that improves the speed, quality and execution of decision making will reliably increase the performance of more or less any organisation. And the ability of AI to turn data into analysis makes it the perfect decision support tool. Our view is that augmenting human decision making with intelligent decision support is a far more impactful path for the technology than trying to replace humans with automation.

Bringing these three things together, we needed a methodology that connects technology to business outcomes, and does so by focusing on decision making processes.

Thus the ‘Decision Loop’ was born.

So what is a Decision Loop?

Decision Loops are a variation of the OODA Loop, a decision making framework developed for use in combat situations by US Air Force Colonel John Boyd.

We adapted its phases, ‘Observe, Orient, Decide, Act’, to a sequence that is relevant in the context of a modern enterprise: Data, Analysis, Decision, Action.

This simple framework allows us to map out the process by which a decision making agent in a system aims to achieve a specific and measurable business objective, through a set of actions they can take.

And once we can map it out, we can be precise about the ways in which AI can enhance that process.

The order in which we run this mapping process matters a lot:

Step 1: What is the objective?

Everything should start with a business objective that matters.  Never start with technology.  Never start with data.

Ideally this objective is expressed as a single KPI, or a small set of them.

The process of selecting target KPIs is clarifying, and relentless focus on them sharpens prioritisation.  In much the same way as the British Olympic rowing team famously asked of everything they did, ‘will it make the boat go faster?’, so too should project teams ask themselves of everything they do ‘will it improve my target KPI?’

Examples:

Retail Sector: increase customer lifetime value (LTV) from marketing campaigns.

Energy Sector: improve trading P&L within value at risk (VaR) limits.

Step 2: Who is the decision maker?

It is well understood that user-centric design delivers the best outcomes when developing software.

For AI systems, the user is typically a decision maker. As such, we need to be clear about who makes the decisions that drive the business objective in question.

Examples:

Retail Sector: the marketer who configures and runs campaigns.

Energy Sector: the trader who makes buy/sell decisions.

Step 3: What actions can they take to achieve their objective?

Once we know the business objective, and who can make decisions that affect it, we need to understand what actions are available to them to achieve that objective, and how they are executed. 

Examples:

Retail Sector: vary the budget, offer targeting, copy and channels of a campaign.

Energy Sector: buy or sell in a given market.

Step 4: What decisions do they need to make to know what action to take?

We then need to understand what decisions they need to make in order to choose the right action, and what options are available to them.

Examples:

Retail Sector: which campaign configuration to run next.

Energy Sector: whether to buy or sell at the current market price.

Step 5: What analysis do they need to conduct in order to make the right decision?

Once we are clear on the decisions to be made, we need to identify what analysis needs to be done to support the decision maker to make the right choice.

It is important to be precise about this.  The prevailing paradigm of  “Data Led Decision Making” has too often just drowned people in data and analytics.  

Most people today have access to many times more data than they ever use, in dashboards and reports and spreadsheets.  The volume of information can feel overwhelming.  And despite all of this data, there will often be times where they lack the specific pieces of analysis that would actually help them feel confident they’re making the right call.

The power of AI is not to throw data at people and leave them to drown in dashboards, but to give them exactly – and only – the analysis they need.

To make sure you get to the most parsimonious analysis, it can be helpful to break this down into the set of questions that people tend to ask when thinking through important decisions they need to make, and the set of analytical capabilities that help them find answers.

For each question, there is an analytical capability that answers it. For example:

“What happened?” – Historical Reporting
Retail Sector: What impact did variations in budget / offer targeting / copy / channels have on LTV in the past?
Energy Sector: What was the outcome of previous buy/sell decisions, and what would have happened if a different decision had been made?

“What is happening?” – Realtime Data
Retail Sector: For my live campaigns, how are variations in budget / offer targeting / copy / channels affecting sales right now?
Energy Sector: Where are there price discrepancies? (i.e. where do current market prices vary from what would normally be expected?)

“What will happen?” – Forecasting
Retail Sector: What impact will variations in budget / offer targeting / copy / channels have on LTV for my next campaign?
Energy Sector: What is the probabilistic distribution of future prices in each market?

“Why?” – Root Cause Analysis
Retail Sector: Which marketing interventions increased spend that would not otherwise have happened (rather than gave discounts to people who would have bought anyway)?
Energy Sector: What underlying factors appear to drive future changes in price, and what context does that give us for the decision we should make and the associated risks?

“What if…?” – Scenario Planning
Retail Sector: How will different configurations of budget / offer targeting / copy / channels perform?
Energy Sector: What are the potential impacts on P&L and VaR (value at risk) of any given trading decision, and what are the probabilities of these different outcomes?

“What is best?” – Optimisation
Retail Sector: What is the optimal configuration of offer targeting / copy / channels for a given budget envelope?
Energy Sector: Which trading strategy best balances potential risk and reward given trading constraints?

Not all decision making processes will require all of these analyses. For example, root cause analysis, scenario planning and optimisation tend to be more applicable in complex systems with a slightly lower volume of higher-impact decisions. But almost all decisions benefit from historical reporting, realtime data and forecasting. And many decision making processes struggle today because they never get much further than historical reporting.
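The question-to-capability mapping above can be expressed as a simple lookup, which is useful when auditing which capabilities a given decision loop already has. A minimal sketch (the function name and structure are illustrative, not part of the methodology itself):

```python
# Map each decision-making question to the analytical capability that answers it.
CAPABILITIES = {
    "What happened?": "Historical Reporting",
    "What is happening?": "Realtime Data",
    "What will happen?": "Forecasting",
    "Why?": "Root Cause Analysis",
    "What if...?": "Scenario Planning",
    "What is best?": "Optimisation",
}


def missing_capabilities(existing: set[str]) -> list[str]:
    """Return the capabilities a decision loop still lacks, in question order."""
    return [cap for cap in CAPABILITIES.values() if cap not in existing]


# A team that has dashboards and live data, but nothing forward-looking:
gaps = missing_capabilities({"Historical Reporting", "Realtime Data"})
```

Not every gap needs filling – as noted above, the forward-looking capabilities matter most where decisions are fewer but higher-impact – but making the gaps explicit is a quick way to see where a loop is stuck at historical reporting.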

Step 6: What data do they use to conduct that analysis?

Finally, we need to understand what data is required to conduct the specified analysis.

This should always include data that captures the outcomes of the system, which is fed back into the analysis to create a feedback loop that drives continuous performance improvement.

Incidentally, I have written previously about how much effort is currently wasted in data functions trying to synthesise and manage data before they have figured out what to do with it. Data is, of course, essential to AI. But it is useful only when connected to a specific problem in this way.  Data is where the problem solving logic ends, not where it begins.

Examples:

Retail Sector: past campaign configurations and the sales outcomes they produced.

Energy Sector: market prices, and the outcomes of previous buy/sell decisions.
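The six mapping steps can be captured as a simple record, filled in strictly in the order above. A minimal Python sketch using the retail example (the class and field names are illustrative assumptions, not a real Faculty artefact):

```python
from dataclasses import dataclass, field


@dataclass
class DecisionLoop:
    # Step 1: the business objective, expressed as a target KPI
    objective: str
    # Step 2: who makes the decisions that drive it
    decision_maker: str
    # Step 3: the actions available to them
    actions: list[str]
    # Step 4: the decisions that choose between those actions
    decisions: list[str]
    # Step 5: the analysis that supports those decisions
    analyses: list[str]
    # Step 6: the data required, including outcome data for the feedback loop
    data: list[str] = field(default_factory=list)


retail_loop = DecisionLoop(
    objective="Customer lifetime value (LTV)",
    decision_maker="Campaign manager",
    actions=["Vary budget", "Vary offer targeting", "Vary copy", "Vary channels"],
    decisions=["Which campaign configuration to run next"],
    analyses=["Historical Reporting", "Realtime Data", "Forecasting"],
    data=["Past campaign configurations", "Sales outcomes per campaign"],
)
```

The field order mirrors the discipline of the methodology: data appears last because, as argued above, it is where the problem solving logic ends, not where it begins.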

Designing the target state Decision Loop

Once we have mapped out the way a decision loop works today, using the steps above, we then design how we want it to work, and diagnose the ways in which it falls short.

 Three types of failing tend to be most common:

  1. The loop is organised around the data and tools that happen to exist, rather than around the needs of the decision maker.
  2. The analysis on offer stops at historical reporting, leaving the decision maker without the forward-looking capabilities they need.
  3. The loop is never properly closed: outcomes are not fed back into the analysis, so it runs slowly and never improves.

In contrast, the design of the target state decision loop:

  1. Organises everything around the needs of the user – the decision maker
  2. Uses AI modelling to provide as many of the analytical capabilities laid out at step 5 as are needed to support the case in question.
  3. Seeks to close the loop and speed it up.
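“Closing the loop” simply means that the outcome of each action flows back into the data that drives the next round of analysis. A schematic sketch of one cycle of Data, Analysis, Decision, Action – the `analyse`, `decide` and `act` functions here are trivial stand-ins, not real APIs:

```python
def run_decision_loop(data, analyse, decide, act, cycles=3):
    """Run Data -> Analysis -> Decision -> Action repeatedly,
    feeding each outcome back into the data to close the loop."""
    for _ in range(cycles):
        analysis = analyse(data)     # analysis is conducted on the data
        decision = decide(analysis)  # the decision is made from the analysis
        outcome = act(decision)      # the action produces an outcome
        data.append(outcome)         # the outcome feeds back as new data
    return data


# Illustrative stand-ins: average the history, aim 10% above it, act on it.
history = run_decision_loop(
    data=[100],
    analyse=lambda d: sum(d) / len(d),
    decide=lambda a: a * 1.1,
    act=lambda d: round(d),
)
```

Speeding the loop up means shortening the interval between an action being taken and its outcome reappearing as data for the next decision.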

Below are visualisations of our retail and trading examples; both show a simplified ‘current state’ Decision Loop that has fairly typical shortcomings, followed by a ‘target state’ version that seeks to address them.

Retail Sector Example
Energy Sector Example
Applied (Data) Science

If all of this seems simple and intuitive, it’s because it is. Any good methodology should be. But being rigorous about applying it is nevertheless very powerful.

Part of this power comes from the fact that this methodology connects two tribes of people: those who try to understand the world, and those who try to act upon it.

Scientists belong in the first camp. The scientific method itself is about forming hypotheses that posit explanations of the world, and gathering data that tests them. But science seeks to understand, not to shape and intervene. In the language of the Decision Loop, it is a mini-loop between Data and Analysis, which does not seek to extend to Decisions or Actions.

By contrast, the leaders of organisations have often been forced to operate at the other end of the loop, making and acting upon decisions without always having the right kind of analysis to support them.

Analytics teams have attempted to bridge that gap. “Data Led Decision Making” was the great promise of expensive data transformation programmes. But the 65% of leaders who say that, despite the deluge of data, decision making has become harder rather than easier point to the failure of this movement to bridge the gap to date.

Our considered view is that this failure stems in large part from the lack of the kind of methodology we outline above. And our experience of applying Decision Loops to our work shows us how impactful it is when that gap is bridged.

I’d encourage anyone thinking about how to solve problems with AI to give this approach a try.

