AI alchemy: how to scope the successful delivery of an AI solution
In one of our recent blog posts, John Gibson spoke about the use of our “Decision Loops” methodology for translating business problems into well-designed artificial intelligence (AI) solutions. The article shared a framework for defining your current-state and target-state Decision Loops, and explained how the latter can help you:
- Organise your solution design around the needs of the user – the decision maker
- Understand what AI modelling is needed to support the analytical capabilities that inform a decision
- Seek to close and speed up the Decision Loop, ensuring modelling outputs are effectively translated into optimised KPIs.
Once you’ve understood your target Decision Loop, the key question is: where do you start executing this vision? Taking the use case of maximising customer lifetime value (LTV) from the previous article, shown below, there are a number of capabilities within this target state that need to be built out.
For example, the Decision Loop includes a combination of forecasting, uplift and propensity models, as well as the potential to integrate generative AI for sending out promotions to customers. Planning and prioritising the sequencing of this activity is a crucial step before rushing into the development phase. A careful balance needs to be struck between focusing your scope and plan on technical de-risking, and rapidly testing the value of the solution and iterating the approach accordingly.
1. Prioritise an initial set of assumptions to test
It’s almost impossible to perfectly plan the entire build of an AI-driven solution before commencing the work, as there are many unknowns that you won’t discover until you start exploring the data and testing out the models. It’s therefore crucial to prioritise an initial focus area where impact can be tested, along with a feasible route to achieving it.
In order to identify which areas to prioritise for impact, it’s essential to engage with a range of stakeholders and subject matter experts in the context of the above Decision Loop framework to explore questions such as:
- How would you weigh the need for each of the analytical outputs in the context of the decision being informed?
- Do certain analyses carry more weight in informing a decision than others?
- Do you have any historical examples of where the absence of analytical tools, or poor performance of existing analytical tools, has had a significant impact on the KPIs you’re looking to optimise?
- What is the most important area to focus on in the context of your overall business strategy and organisational goals?
Understanding the answers to these questions will help you prioritise the areas that will have the most impact on the outcome. In doing so, you can quickly progress to testing your assumptions and understanding if a solution is able to achieve the expected impact.
2. Define what success looks like
As you are exploring the areas to prioritise for impact, it’s critical to determine upfront how success will be viewed, and through what metrics. Success can be defined in a number of ways, from performance metrics or a new capability made available, to substantial cost or resource efficiencies.
The targets you set should be based on current data and take into account organisational context. We’ve typically seen data science projects fall down when arbitrary targets are set, such as achieving 90% accuracy. Whether such a target can be met is highly uncertain before a project commences, due to the unknowns mentioned above.
A better way to measure your success is to identify a benchmark that you can compare your solution against. For example, you could compare your solution with baseline naive models that replicate simple assumptions representative of how a decision is reached in the current state, as in the sketch below. In addition, it’s key to map out a baseline for the time, effort and resources currently dedicated to making a decision, and to track how these metrics shift once your solution is in place.
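To make this concrete, here is a minimal sketch of that kind of benchmark in Python, assuming a propensity-style use case. The synthetic data, scikit-learn models and ROC AUC metric are illustrative choices rather than prescriptions; the naive baseline simply predicts the prior response rate, standing in for the current-state decision rule.

```python
# Minimal sketch: benchmark a candidate propensity model against a naive baseline.
# The synthetic data stands in for real customer features and responses.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Dummy data: 5,000 "customers", 10 features, imbalanced "responded to promotion" label
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.8], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Naive baseline: always predict the prior response rate (a proxy for the current-state rule)
baseline = DummyClassifier(strategy="prior").fit(X_train, y_train)

# Candidate: a simple, explainable propensity model
candidate = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in [("naive baseline", baseline), ("candidate model", candidate)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: ROC AUC = {auc:.3f}")
```

Improvement over the baseline, rather than an arbitrary absolute figure, then becomes the success measure you track.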
3. Differentiate between stakeholders and end-users
We all recognise that being ‘user-led’ is a widely recommended discipline. However, failing to apply it is one of the most common pitfalls in poorly scoped AI solutions. A starting point for getting this right is to differentiate who your stakeholders are from who your end-users are during the scoping stages.
Stakeholders are typically those who have a vested interest in the project’s success, such as executives and project sponsors. They will provide the strategic vision and resources necessary for the project and will be concerned with high-level outcomes like ROI. You should consult your stakeholders during the scoping phase on the success measures you target, as well as their risk appetite for the solution. Do they want to pursue something more innovative and transformative, with the understanding that this comes with a higher risk of the project not succeeding? Or do they want a simpler solution that can achieve a degree of improvement?
On the other hand, end-users are the individuals who will interact with the solution on a day-to-day basis, such as analysts, customer service representatives and field technicians. In the Decision Loop context, they are typically those who consolidate the insights needed to inform a decision. They require a tool that is intuitive, reliable and enhances productivity. Once you’ve worked out who the end-users will be, start to define and prioritise the high-level user needs the initial scope should address.
Understanding the distinct needs and expectations of stakeholders and end-users allows you to align user and project requirements with business goals, and to create user-centric functionality that ensures the solution is both strategically sound and operationally effective.
4. Understand your critical technical dependencies, requirements and risks
The solution, success measures and user needs you choose to prioritise should be underpinned by a technical feasibility assessment during the scoping stages. Throughout this exercise, it’s imperative to identify:
- The critical dependencies that will block development of the solution if they are not met, as well as the likelihood of blockages and feasibility of mitigations.
- The riskiest elements of the technical build that need to be tackled as soon as practically possible within the project timeline.
The above should involve identifying all of the known technical requirements and dependencies that underpin the solution. Examples include data availability and accessibility, the outputs the analysis needs to provide (e.g. probabilistic confidence intervals, explainable outputs) and the modelling approach required, as well as the technical architecture (ensuring this supports the needs of end-users and integrates into their workflows). It’s also important to carefully plan the time and resources you may need to navigate complex governance requirements, such as security, data protection and architecture sign-offs. This needs to be thoughtfully considered when planning your project to ensure your timelines account for meeting critical dependencies.
In the context of your target state Decision Loop, you may have existing models you want to work on, or you might want to build new models from scratch. Understand and research common approaches to solving the modelling problem. If these approaches come at different levels of complexity, agree on what level is required for the outcome your solution is looking to achieve. Open-source models can also be a great starting point from which to adapt and refine your solution. These can quickly be tested on dummy data during early scoping stages to assess their utility and inform your modelling approach, as in the sketch below.
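As an illustration of that kind of throwaway experiment, the sketch below fits an off-the-shelf forecasting model to dummy data; the choice of statsmodels’ Holt-Winters exponential smoothing and the synthetic monthly demand series are assumptions made for this example.

```python
# Minimal scoping-stage sketch: smoke-test an off-the-shelf forecasting model on dummy data.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand with a trend and yearly seasonality (36 months)
rng = np.random.default_rng(0)
months = pd.date_range("2021-01-01", periods=36, freq="MS")
t = np.arange(36)
demand = 100 + 2 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 36)
series = pd.Series(demand, index=months)

# Fit Holt-Winters (additive trend and seasonality) and forecast the next 6 months
model = ExponentialSmoothing(series, trend="add", seasonal="add", seasonal_periods=12).fit()
print(model.forecast(6).round(1))
```

A few hours of experimentation like this can reveal whether an approach is plausible before you commit it to the plan.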
Use the above to inform your technical scope and plan, and ensure you are tackling the critical dependencies and riskiest elements early in the project timeline. This will allow you to iterate the approach if you need to, without having over-invested in your original idea.
5. Start simple and build in a way that is incrementally valuable
At one end of the spectrum there will be a simple solution that may achieve a good level of performance; at the other, an advanced solution that may deliver exceptional performance. A simple solution will be the easiest to develop, test and implement, minimising the risk of project delays or failures. It will also deliver a functional product to stakeholders earlier, provide the opportunity for user feedback to inform subsequent development priorities, and allow greater flexibility in adapting to change, letting the project pivot without the burden of complex, unchangeable systems.
Taking the example of the Decision Loop above relating to maximising customer LTV, you might want to start with a simple rule-based model, sketched below. Such a model uses basic customer segmentation to apply straightforward promotion strategies, gathering initial insights on how this impacts decision making, before progressing to the more complex uplift and propensity models that build on the value of the solution incrementally. The approach you decide to take should be heavily informed by the success measures you’ve defined, and maximise iterative improvement against them.
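For illustration only, here is a minimal sketch of such a rule-based starting point; the segments, thresholds and promotion names are assumptions, not recommendations.

```python
# Minimal sketch: a rule-based promotion model driven by simple customer segmentation.
# Thresholds and promotion names are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    months_since_last_purchase: int
    total_spend_last_year: float

def assign_promotion(customer: Customer) -> str:
    """Apply a straightforward segmentation rule to choose a promotion."""
    if customer.months_since_last_purchase > 6:
        return "win-back discount"          # lapsed customers
    if customer.total_spend_last_year > 1000:
        return "loyalty reward"             # high-value, active customers
    return "standard newsletter offer"      # everyone else

customers = [
    Customer("C001", months_since_last_purchase=8, total_spend_last_year=250.0),
    Customer("C002", months_since_last_purchase=1, total_spend_last_year=1800.0),
    Customer("C003", months_since_last_purchase=2, total_spend_last_year=400.0),
]
for c in customers:
    print(c.customer_id, "->", assign_promotion(c))
```

Once a simple rule set like this is live and its impact on your success measures is being tracked, the same decision point can later be swapped out for uplift and propensity models without changing the surrounding Decision Loop.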
6. Adopt a phased approach to allow for iteration
Developing a solution that transitions from ideation through to live deployment is a continuous process, but it’s important to break down the delivery into manageable steps to test the value of the solution, pivot your strategy, and maximise outcomes. This gives your stakeholders assurance that they are not investing in a long and expensive software build before seeing results. It also gives your delivery team security and clarity, with a manageable scope and expectations within each phase.
The Double Diamond framework is a useful overarching model for how to approach this:
Image source: Productboard. Accessed via https://www.productboard.com/blog/double-diamond-framework-product-management/.
- Discover – diverging to explore the problem statement and solution space
- Define – converging to validate and concretely define what the scope of the solution needs to achieve (rooted in user needs)
- Develop – diverging to develop a series of prototypes to test and iterate with customers, learning and improving through regular feedback loops
- Deliver – converging on a solution that is proven to add value for the customer, is technically feasible to deliver, and therefore should be deployed at scale
This model can be repeated any number of times during the scoping and delivery of a project, at varying levels of detail: in the earlier stages to inform and refine the scope, and in the later stages to refine and prioritise product enhancements.
Underpinning all of this should be frequent stakeholder and user interaction to demonstrate quick incremental value and iterate on the approach via regular feedback.
Conclusion
The Decision Loops methodology is valuable for translating business problems into well-designed AI solutions. But it’s vital that this is then developed into a viable project scope your teams can feasibly execute, so you can rapidly start testing the solution’s impact.
Doing this well involves narrowing down the scope to start with a simple, measurable, and technically assured approach. This approach can quickly produce tangible outputs so users can test the impact, and the solution can be optimised iteratively based on feedback.
The ultimate goal is to create an AI solution that is not only technically sound but also drives meaningful business impact via improved decision making. By following these structured steps, you can navigate the complexities of AI implementation and transform your vision into a reality that delivers material value to your organisation.
Contact our team today to explore how we can help you execute your AI vision.