Artificial intelligence (AI) is (probably) the most important technology of our age. Over the course of 2018, ASI worked with many organisations to help them build and implement AI, and we saw firsthand the technology's huge potential to bring value. From this experience, we've picked out three trends that will shape AI in 2019.
1. Deployed data science models over proofs of concept
In 2018 everyone talked about AI's potential. While it's still early days for the market, this year we believe there will be fewer proofs of concept, and we will start to see more real-world deployments as organisations look for a return on their investment.
In the course of deploying models into business-critical environments, teams will increasingly recognise that the machine learning code and feature engineering are just a small part of the overall requirements for real-world AI, as the diagram below, adapted from a Google paper, shows. More and more organisations will discover that the fastest way to deploy AI is to focus their team on the parts of the problem that are custom to their application, and leave the rest to a general-purpose data science platform.
2. In-house AI capability, not just outsourced expertise
Until recently, large organisations have typically outsourced AI and data science. In 2019, we will see businesses start to view AI capability as a core competence they must develop themselves in order to gain a competitive advantage.
This is largely because black-box AI has generally proved a source of disappointment rather than a beguiling mystery. The distinct and specialist nature of so many businesses, and the quirks of their data sets, mean that an in-house AI capability is needed to get value. These organisations do face a challenge, though, because they need to build three separate streams (AI strategy, AI skills, AI software) concurrently. Failure on any one of the three can bottleneck progress and prevent the organisation from mastering AI.
3. AI safety becomes a concern
More real-world deployments of AI will mean that safety and trust will rise to the top of the agenda. If AI is being deployed more readily in our everyday lives, then we need to know that it’s safe and understand how it might affect our future.
Discrimination and bias were huge topics for the AI industry in 2018. Google made headlines when it banned gendered pronouns from Gmail's AI-enabled Smart Compose feature after the tool was found to assume a person's gender incorrectly in its suggestions. MIT Media Lab research released last year likewise showed that facial recognition algorithms misclassify darker-skinned women far more often than lighter-skinned men.
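The disparities these studies uncovered come down to a simple audit: comparing a model's error rate across demographic groups. As a minimal sketch (with entirely made-up labels, predictions, and group names), such a check might look like this:

```python
from collections import defaultdict

def error_rate_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: the model errs far more often on group "B" than group "A".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

print(error_rate_by_group(y_true, y_pred, groups))
# → {'A': 0.0, 'B': 0.75}
```

A headline accuracy figure would average these groups together and hide the gap; disaggregating by group is what made the problems in the studies above visible.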
Equally problematic is the use of machine learning algorithms to undermine trust in individuals and institutions. AI can exacerbate this trend by making it possible to create fake but indistinguishably realistic videos of public figures saying things they never said – what are known as 'deepfakes'.
On the other hand, AI can offer a solution to these threats. New AI algorithms are being developed to detect and flag fake content, stopping it from being uploaded and spread in the first place. This has been the focus of our work with the Home Office to develop software that detects terrorist propaganda, and of our ongoing work with the Alliance of Democracies Foundation to protect against political disinformation.
Even though businesses understand the value of AI, many organisations are still unable to use the latest machine learning techniques, simply because they judge them to be unsafe. As a result, addressing public concerns about data use, fairness and transparency will be important both to AI’s development in 2019 and to its longer-term adoption.