The role AI could play in the UK’s future became a little clearer this week with the release of two key reports. First, the AI Council and The Alan Turing Institute published their AI ecosystem survey, a study of more than 400 responses from people researching, developing, working with or using AI technologies, designed to inform the development of the UK’s National AI Strategy.
Today, the government also launched its 146-page data reform proposals, with the aim of creating a trusted data regime that drives growth and helps businesses make the most of their data. The proposals include considering relaxing rules around the use of AI in its post-Brexit data regime, a move that would represent a significant break from the EU.
These proposals and insights aren’t just academic; getting these reforms right is critical to ensuring that the UK remains a global leader in AI. In turn, remaining a global leader in AI will determine everything from the strength of our economy to our ability to defend our country.
Oliver Dowden MP, Secretary of State for Digital, Culture, Media & Sport, summed the stakes up perfectly: “Now that we have left the EU, we have the freedom to create a new world-leading data regime that unleashes the power of data across the economy and society. These reforms will keep people’s data safe and secure, while ushering in a new golden age of growth and innovation right across the UK, as we build back better from the pandemic.”
As CEO of Faculty, I’ve often found myself on the frontier of AI implementation in the UK; we’ve taken on more than 400 AI challenges across healthcare, government, consumer industries and a wide range of other sectors, and seen the effect AI can have when it’s implemented correctly.
As we look ahead to the full launch of the UK’s National AI Strategy at London Tech Week at the end of the month, I wanted to give my thoughts on a few of the issues that came out of the AI Council’s report, and to provide an on-the-ground view of how these reforms could shape the future of AI in the UK.
Research and innovation
Of those surveyed in the AI ecosystem survey, only around a third thought that the UK is investing sufficiently in implementing AI across the UK’s research sector. About three quarters felt that there are bottlenecks in AI research that could be addressed with public investment in talent.
It’s true that, with the right application, AI could be helping researchers of all backgrounds crunch data faster, extract deeper insights and draw more reliable conclusions.
The key to embedding AI in research isn’t just more of the same investment, though investment will certainly play a crucial role. First, we need to address the siloed funding culture that often plagues academia. Before starting Faculty, I spent my career in academia, specialising in quantum physics, and I can say with some certainty that interdisciplinary collaboration is vital if we want academia to embrace AI. With software developers working alongside physicists, or data scientists supporting biological research, the possibilities for innovation multiply rapidly. The government could do more to foster that collaboration.
Only around one in five respondents to the AI ecosystem survey thought that businesses had the skills and knowledge needed to understand where value could be gained from using AI, or that there is sufficient AI skills training in the UK workforce. 81% of respondents thought there were barriers to recruiting and retaining top AI talent.
Faculty was founded for this very reason. We saw that a lack of time, patience or technical expertise was holding many companies back from AI adoption, so we built the Faculty Fellowship programme, a scheme that addresses the skills shortage by training science, maths and engineering postgraduates from leading universities and placing them into roles in industry. Hundreds of organisations, including easyJet, British Airways, Sky, BlackRock and the NHS, have used the programme to access top data science talent. Eventually, we expanded our offering beyond the fellowship to help companies build their own in-house expertise, or to guide them through their AI projects.
We’ve just kicked off our 20th Faculty Fellowship with a fresh batch of fellows; some things have changed since we started the fellowship, but much remains the same. Again and again, we see that, to get real value from AI, companies must build their own data skills. That’s why we help them do so, with training and consulting services around our AI implementations. One example is our work with NHS England & NHS Improvement, where we are building internal capability so that their analytics teams can develop and use forecasting techniques like those pioneered in the Early Warning System. Upskilling analytics teams makes a real difference to the long-term impact AI has in organisations.
Data, infrastructure and public trust
88% of respondents agreed that the UK should seek to lead the development of data governance on AI. Over three quarters agreed that increased regulation of AI was a priority for improving and maintaining public trust, and that most organisations in their sector should take time to build trust and transparency in the use of AI.
It’s true that regulation is vital to protect both the general public and organisations from the potential negative effects of AI. But the responsibility doesn’t sit solely with the Government; tech companies should also be building unbiased, safe, private, reliable AI into their software.
To do this, we need clear, practical AI safety technology as well as high-level principles. It’s critical that good governance happens within the technology itself; either approach on its own will fail.
Finally, the Government should not think of AI as a single technology when it comes to regulation and governance. To do so is like saying we should regulate steel, when what matters is how steel is used. Imagine trying to design regulations to govern steel without knowing whether it will be turned into a car, a beam or a gun. You’d have to limit access to steel in the same way you limit access to guns, but in doing so you’d make it almost impossible to use steel as a building material, crushing people’s ability to innovate in every other domain where steel is used.
The level of regulation needed to ensure that AI is safe should adjust to the way that AI is applied.
An algorithm that decides whether blue or green jumpers should go at the top of a fashion company’s website is vastly different from an algorithm that makes diagnostic decisions in the NHS. Each application will need different parameters for privacy protection, bias mitigation and other important AI safety metrics.
National, cross-sector adoption
80% of respondents agreed there were particular areas in their sector where the adoption of AI was low, but its potential benefits were high.
However, opinion was mixed on whether legislative changes are required to change this, with only around half believing that legislative changes would encourage the adoption of AI in their sectors. 70% of respondents agreed that national, cross-sector adoption of AI is dependent on government initiatives and investment.
I would, of course, love to see more adoption of AI across both government and business. However, regulation isn’t necessarily the best way to achieve this. The government could lead the way here with something absolutely game-changing. What if the government changed its own rules so that departments had to procure services from UK SMEs first, before going out to the wider market? Or gave the Treasury a pot of money to spend on AI applications from UK technology companies? That would do more to accelerate the adoption of AI than any number of regulatory flourishes.
It’s excellent to see the power of AI being taken so seriously by the UK government. But if the UK wants to deliver on its ambitions, we must confront the reality of AI, its pitfalls as well as its potential benefits, head-on, combining theoretical principles with grounded guidelines and technical understanding. It’s the only way to set us on the path towards building a UK AI economy that benefits everyone: business, government and society.