Building an AI-ready organisation: Learnings from HALO

In this first article of a three-part series, Senior Data Scientist Jaymin Mistry and Associate Francesca Federico draw on our work with HALO. They uncover the characteristics and qualities that set apart organisations ready to leverage the power of AI.

2025-04-03 · Defence · AI Strategy
Jaymin Mistry & Francesca Federico
Senior Data Scientist | Associate

Organisations everywhere are pouring time, money and resources into adopting AI. But, like earlier investments in data science and business intelligence, the results of these projects vary. The stark reality is that most of them fail – some estimates place the failure rate as high as 80%. Although the technology is evolving rapidly, failure is usually down to a lack of clear strategy and poor execution.

So how can you make sure you don’t contribute to that statistic? How can your organisation be AI-ready? And what does an AI-ready organisation actually look like? 

In this three-part series, we’ll take a closer look at HALO, the international humanitarian NGO whose mission is to protect lives and restore livelihoods through the clearance of landmines and other explosive remnants of war. HALO trains and employs local residents to safely clear landmines in their communities, and today operates programmes in 30 countries and territories across Africa, Latin America, Central and South Asia, and the Middle East.

Throughout our work with HALO, they have stood out as an organisation prepared for new technology, thanks to their strategy, culture and approach. They are a role model for organisations looking to make the most of AI.

In this first instalment, we’ll focus on strategy alignment. We’ll cover why it’s important, how HALO has applied it, and the mistakes that commonly trip up other organisations.

Start with the why – business strategy trumps AI strategy

In our ‘Ten Lessons From Ten Years of Applied AI’ book, lesson nine is ‘business strategy trumps AI strategy’. An unclear business strategy leads to unclear execution. Whilst what can be achieved with AI might be uncertain, being clear on the business goal gives clarity even as the technology changes. Focusing on the business problems, rather than on the technology or the data you have, is the start of the road to success.

HALO stands out as one of the few organisations that focused on their business problems from day one. One of the major challenges in demining is pinpointing the locations where mines are so they can be prioritised for clearance – and, just as crucially, where they aren’t. HALO is acutely aware of the trade-off between the risk to human life (both their operators’ and civilians’) and the scarce resources they have to clear minefields.

In Ukraine, unmanned aerial vehicles (UAVs) have become a key tool for accelerating the surveying process. UAVs provide a bird’s-eye view, making it possible to detect landmines on the surface and spot other critical clues, such as trench lines and craters, that would be near-impossible to see from the ground. But sifting through the vast amounts of imagery with human analysts can take up to five days for a single minefield, creating a serious bottleneck. So HALO knew they needed a solution that could deliver human-level precision at speed.

Once HALO had established their biggest problem, they translated it into a well-defined data science problem – and looked to see if AI could help solve it. One mantra they stuck by is to “start and scale iteratively, optimise later”. HALO knew they didn’t need all the answers straight away, but they were clear on what they wanted to achieve – they started with the why.

Strike a balance between impact and feasibility

Once you have identified your biggest challenges, there are likely myriad potential solutions, varying in size and complexity. When deciding which path to take, it’s useful to step back and assess what’s both impactful and feasible. The sketch below can help you get started.

It’s simple, but it can help you quickly weigh up benefits versus feasibility so you prioritise the right projects. Some AI initiatives deliver quick wins – high-value, easy-to-implement solutions that show results fast. Others take more time but offer transformative impact. The key is focusing your efforts where they matter most, rather than getting caught up in shiny, low-impact tech. 
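The quick-win triage described above can be sketched as a simple scoring exercise. The minimal Python below is illustrative only: the project names, scores and threshold are made-up assumptions, not real HALO initiatives or a prescribed methodology.

```python
# Illustrative sketch of a value-vs-feasibility triage.
# Projects, scores (1-5) and the threshold are hypothetical assumptions.

def classify(value, feasibility, threshold=3):
    """Place a project in a simple 2x2 value/feasibility grid."""
    high_value = value >= threshold
    easy = feasibility >= threshold
    if high_value and easy:
        return "quick win"           # high value, easy to implement: do first
    if high_value and not easy:
        return "transformative bet"  # valuable, but needs sustained investment
    if easy:
        return "fill-in"             # cheap but low impact: only if capacity allows
    return "avoid"                   # shiny, low-impact tech lives here

# Hypothetical candidate projects: (value, feasibility)
candidates = {
    "UAV imagery triage": (5, 4),
    "intranet chatbot": (2, 4),
    "fully autonomous clearance": (5, 1),
}

# Rank by a crude value x feasibility product, highest first.
for name, (v, f) in sorted(candidates.items(),
                           key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(f"{name}: {classify(v, f)}")
```

The numbers matter less than the conversation they force: scoring each candidate makes the trade-off between benefit and deliverability explicit before any development starts.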

To help you decide what’s feasible, we suggest you use all the expertise your teams have to offer. HALO is always looking to improve how they clear landmines, and they see technology as a key enabler. As technology, including AI, evolves, new opportunities arise. But even as new technologies emerge, they must be grounded in the real-world context of where and how the organisation operates. A solution that works perfectly in theory or in a lab but isn’t practical in the field is not useful. That’s why HALO involves both domain experts (who understand the field) and technical experts (who understand the tech) in designing and tendering their R&D. We encourage other organisations to combine the expertise across their teams to identify the right problems to solve.

Aligning business and technical teams on the why and collaborating on the how helps assess project feasibility. Domain experts may not know what’s technically possible, while technical teams may miss domain nuances – neither can succeed alone.

Reduce the distance between developers and your business/users to enable a better understanding of your problems and faster iteration.

Your technical staff must be able to articulate how they measure success and how those measures align with your business goals. This ensures time and effort aren’t spent developing solutions that don’t actually solve the right problems or make the appropriate trade-offs. Technical staff can fixate on delivering a “perfect” solution when they may already have developed a valuable one. Delaying implementation can carry a high opportunity cost, and clear success metrics will help you make the case for deployment.

Too often, ideas get lost in layers of management or buried in documentation-heavy processes. Add in slow communication, and suddenly, projects are either delayed or delivered in a way that doesn’t fit the original need. Close this gap with direct lines of communication between your teams.

Where organisations get it wrong (and how to avoid the same mistakes)

Taking a strategic approach may seem obvious and intuitive, but we’ve seen first-hand organisations pursuing AI initiatives as a badge of innovation – chasing hype, ticking a box or developing AI-powered tools with little path for execution or benefit to the organisation. 

Some of this might be due to the excessive marketing hype that characterises AI as an unknowable intelligence that solves problems by “magic”. While AI is a powerful technology, without understanding your challenges, you won’t know if AI is the right tool.  

In particular, deep-seated organisational, personal or governance issues are almost certainly not going to be solved by technology alone.

To avoid this, be clear about your business outcomes. We use the decision loops framework: map out the decisions you need to make to define the set of actions you should take. This will help you achieve a specific and measurable business objective.

Another common (and often overlooked) mistake is not defining success metrics upfront. Without clear metrics, the goalposts can shift and the direction of a project can meander. To be clear, goals and metrics can change as your understanding of the situation evolves, but failing to define them upfront creates the conditions for confusion and uncertainty later. Without them, there’s no way to tell whether the AI programme you’ve implemented is delivering real value, and it’s impossible to build the feedback loops needed to improve performance over time.

Before choosing which AI metrics to focus on, you must understand the trade-offs your organisation is willing to make – this helps define what “success” looks like. AI systems aren’t perfect – they’ll make mistakes, and you need to decide which types of mistakes are more acceptable in your context. A false negative is when an AI system says something won’t happen, but it actually does; a false positive is the reverse – the system flags something that doesn’t happen.

  • Example 1: A retailer might wrongly predict a customer won’t buy something – the impact is small (maybe a lost marketing opportunity).

  • Example 2: A doctor wrongly tells a patient they don’t have cancer – the impact is serious, even life-threatening.

Because of this, different sectors need to prioritise different trade-offs:

  • A retailer may accept more false negatives to gain efficiency.

  • A healthcare organisation will aim to minimise false negatives, even if it means having more false positives (e.g. false alarms).

By being clear about these trade-offs from the start, you can choose the right metrics and know how to evaluate your AI system effectively.

In HALO’s case, for example, they are particularly concerned about false negatives: a missed landmine could result in people being injured. If they predict landmines are present where there aren’t any, some resources are wasted – but wasted resources are better than risked lives.

This is why aligning your business/operational teams with your technical team is so important. By consistently tracking your project against success metrics, you can ensure your project delivers something valuable. You’ll have a clear picture of what works and what doesn’t for continuous improvement.

Takeaways

To identify the projects which are both valuable and possible to solve with AI, we suggest you do the following:

  1. Start with high-value business problems and work out what technology you need. This may mean the best solution is a simple, non-AI approach.

  2. Consider the feasibility and complexity of implementation. A medium-value project that is technically simple to deliver probably trumps a high-value, highly complex project that fails.

  3. Reduce the distance between your users and the technical team. Your technical team should have a clear understanding of the problems you’re aiming to solve, be able to explain why the metrics they’re using matter and iterate on feedback quickly.

  4. AI isn't a single, all-encompassing solution – it’s a set of tools, techniques and systems. When communicating internally or externally, be clear about the specific techniques you’re using. Doing so will enable you to have nuanced, informed conversations about why one AI project delivered value, whether another is appropriate, or why a third failed.

Stay tuned for the next instalment of the blog series, where we’ll cover data infrastructure, how to identify the data you need and data governance.