We welcome the government’s plans on AI regulation. Context-specific regulation is the right approach given how vast the applications of AI are, and “general rule” regulation should be avoided while the landscape is still evolving.
The real question is how we define AI risk, particularly when AI so often works hand in hand with humans. There is a wider ethical question around the fair application and potential bias of AI tools executed with human guidance. As TechUK rightly notes, the assessment of AI risk can be ambiguous, so leaving the determination of risk to individuals and companies is itself risky.
As UK AI regulation develops, there must be a real focus on transparency and trust. There is a growing need for sector-specific regulatory frameworks informed by real-world AI applications. Tailored guidance must be issued to each sector to ensure consistency and to give UK regulation the greatest chance of being understood across different jurisdictions. Regulatory bodies also need experts on the front line – data scientists, engineers, and others – to fully assess the scope and impact of AI projects as they develop.
The UK is an AI superpower and one of the best places in the world to found and grow an AI business. Anything that threatens this would be disastrous, both for individuals relying on the technology and for the businesses providing it. That is why a pro-innovation, lighter-touch regulatory regime is right for now. At the same time, we must ensure that the UK’s growing AI talent pool can continue to improve technical and business solutions for AI safety. By doing so, we can continue to strike the right balance between AI innovation and the highest standards of AI safety, ensuring that trust in one of the transformational technologies of our time is not dented.
Faculty is a member of TechUK. A full response from TechUK to the UK’s AI regulation plans can be found here.