Under the incredible strain of a pandemic, a looming financial crisis, Brexit and a host of other pressures, public and private sector organisations began applying AI to more new uses than ever before in 2020.
The COVID-19 pandemic drove the rapid adoption of AI across the health and care sector, where it underpinned a data response strategy that arms key decision-makers with the most up-to-date and accurate information about how the pandemic is evolving. AI has helped predict demand for hospital beds and ventilators, anticipate which regions would be next to see a rise in infections, and develop a deep understanding of a novel disease in months rather than years.
But 2020 showed us the limits of AI too: for example, a sudden shift to remote working and online shopping exposed flaws in some of retail’s biggest AI models; without a true understanding of causality, models trained in ‘normal’ times frequently failed to adapt their insights to this huge shift in the underlying data.
It’s clear that, while there is still much to be discovered about AI’s present use and future evolution, AI is no longer an unknown quantity for organisations. And this new familiarity with the possibilities and the dangers of the technology has helped throw new light on machine learning for businesses: what it is, what it can do, what it can’t do, and what it takes to turn it into a business advantage.
Organisations are beginning to realise that operational AI – the kind that delivers real value to society and businesses – isn’t contained within one machine learning model.
If we’re going to ask AI to help improve our health and care systems, improve the lives of citizens and create new competitive advantages for businesses, we can’t just expect to deploy one algorithm and walk away. AI that works is delivered through AI systems, not AI models – a holistic approach made up of three interwoven elements:
- Technology – The platforms and algorithms that ingest and transform data into actionable insights must be strong, fit for purpose, safe and ethical.
- Methodology – AI isn’t software – users can’t just download a trial, run through a tutorial and begin using it. To get ROI from machine learning, organisations need a carefully considered implementation methodology that considers data availability, business processes, areas where customisation will be needed, the maintenance of the models, and more.
- Expertise and experience – There’s a reason that AI is still a topic of academic study: it’s complex, ever-evolving and difficult to get right. Without expert advice from people experienced in implementing AI outside of a lab, even the best technologies and methodologies are all but useless.
The experiences of 2020 helped some businesses begin to understand the complexities of these AI systems and the role an integrated approach plays in delivering ROI in the year ahead. The AI of 2021 will be shaped by the organisations that put these learnings into practice.
To help organisations envisage how this works in practice, we’ve created the Faculty Forecast: an in-depth examination of how real-world AI application will evolve over the next 12 months, based on our experience using AI across the private and public sectors. The Faculty Forecast divides AI evolution in 2021 into three themes:
- Organisations make getting quality data a priority – As more organisations try their hand at data science, it’s becoming increasingly clear that good data collection, management and maintenance is what separates the companies getting real business value from the ones quietly brushing their failed algorithms under the rug.
- Organisations think in terms of AI systems, not AI models – A model should be just the core component of a much wider network of technology, implementation methodology, expertise and experience – an AI system – that ensures it adds the most value.
- Ethics and public trust in AI remain critical – In 2021, we’ll see concerns over the dangers of bad AI evolve into informed wariness as governments, industry bodies and regulators take steps to protect against these harms and build public trust in AI.