Last month, Faculty hosted a panel and reception for the Massachusetts Institute of Technology (MIT) London alumni on the current state of AI. 

Dr. Phil Budden, Senior Lecturer at the MIT Sloan School of Management, hosted the panel, putting questions to Dame Fiona Murray, Associate Dean of Innovation and Inclusion at MIT Sloan, and to Faculty’s CEO and Co-founder Marc Warner and CCO John Gibson.

The panel featured an in-depth discussion of the burning questions on the minds of academics and technologists, such as AI regulation, the role of government and the UK’s potential to establish itself as a major technology hub. 

Read on for three takeaways from our expert panellists.

1. AI regulation will require international cooperation, and Europe has a key role to play

The conversation around AI regulation continues to gather momentum on both sides of the Atlantic.

Both EU leaders and their counterparts in the US have held hearings to consider how regulation could work in practice. 

Dame Fiona Murray believes these conversations present a significant opportunity for a broader and deeper relationship between the UK and US, not only on the future of AI but on science and technology as a whole.

“[By] having that bilateral transatlantic framework for the whole regulatory piece, I would hope we see something that is framed as a transatlantic partnership around science and technology. Education, the underlying technical infrastructure, funding, the risk capital, or the trusted safe capital. These are all critical parts of that conversation.”

Faculty’s Marc Warner agreed, but also suggested there are risks in European countries taking a heavy-handed approach to regulation on their own. 

Italy became the first Western country to temporarily ban ChatGPT in April over privacy concerns. In Warner’s view, governmental attempts to “regulate AI out of existence” are only likely to set those countries back in their development.

He noted that relatively few bodies are yet taking AI safety seriously enough, but that this creates an opportunity for Europe to come together and take a leading role on the issue, and with it carve out a unique place in AI’s future development.

“If we made the commitment to ‘innovate our way to safety and build the technology that makes AI safe’, that could be a differentiated path for Europe. This is going to be critically important, probably more important than anyone realises right now.”

2. The development of ‘narrow AI’ and ‘AGI’ will have very different implications

The term ‘AI’ refers to a very broad range of technologies. When considering its potential impact on society, Marc Warner believes it’s prudent to separate out the levels of scope between differing types of AI.

“I like to categorise things in terms of ‘narrow AI’: algorithms that are trying to achieve relatively limited things in relatively limited circumstances. And ‘general AI’: where algorithms attempt to optimise their environment in a whole range of environments.”

For Warner, narrow AI can be more easily thought of as ‘better software’ – tools created to complete specific tasks. These are solutions that will be “incredibly valuable and incredibly important to almost any organisation”, but are limited in their scope and in their potential to harm, misinform or produce bias. 

Development of narrow AI “should go as fast as possible”, while still paying attention to its comparatively lower levels of risk.

However, the potential advent of artificial general intelligence (or AGI) typically dominates the discourse around AI regulation. 

The charter created by our partners at OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” The ability of humans to adapt to AGI should not be lightly compared to previous innovations, according to Marc Warner.

“It’s a very bad idea to reason about [AGI] by making analogies. To say, ‘Oh well, the internet went all right, and televisions were okay, so probably the general intelligence transition will be okay as well…’ This is transformative technology that deserves its own unique set of first principles.”

John Gibson agreed, saying the creation of AGI would be “a potentially very profound moment” in human history.

“We need to recognise that this technology has tremendous potential to bring us benefits that we should embrace. But we also need to have grown-up conversations about what we want to do with a technology that will mark the end of the era in which human intelligence is the dominant thing shaping life on Earth.”

While he conceded that nobody in the world today knows the answers to these questions, Gibson believes that “we’re starting to get a sense of how we need to guide the conversation to get there.”

3. Can the UK become a ‘tech superpower’? Only through prioritising where to invest

Rishi Sunak is the latest Prime Minister to commit to making the UK a ‘science and tech superpower’, with a proposed deadline of 2030. But looking beneath the slogan, what steps would a country with the UK’s resources have to take to become a ‘superpower’ in science and technology?

For Fiona Murray, it means making hard choices around the areas in which you can excel.

“We have to get really clever and thoughtful about what that means and what we’re going to be good at.”

To make the most of the opportunities that AI presents, Murray highlighted the need to “put technology into core conversations”, not just in R&D or IT, but in other places where it has been overlooked to date.

John Gibson agreed, adding that while there is a veritable ‘shopping list’ of sectors to choose from, being ruthless and doubling down on a select few is the only realistic route forward.

“We [the UK] are obviously not going to be the best in the world in 30 different sectors. We need to pick four or six, and just commit. Science and technology is a long game, and if you have an ambition of being a superpower, you need to be in it for the long term.”

Preparing and supporting the next wave of AI talent

The panel concluded with thoughts on fostering the kind of entrepreneurial culture needed for the UK to achieve ‘superpower’ status, and in particular on supporting the next generation of AI thinkers with strong STEM credentials.

On this, Fiona Murray believes it’s essential to ensure that they will have a seat at the table for future policymaking.

“Our ability rests on having those with science and technology backgrounds in the conversation. Not just as deep specialists in the margins, but to pervade the system. The education around science and technology, and how we make sure that we have that level of literacy, is incredibly important, but challenging.”

Her colleague Stephen Barnes, Managing Director at MIT-UK, is optimistic that the UK offers an attractive option for new talent hoping to further their careers in AI.

“We’re seeing growing interest in coming to the UK for further work or research among MIT students, in part driven by the UK’s global reputation as a leader in AI. It’s brilliant to see industry-leading organisations such as Faculty bringing these students on board for internships, and showcasing the UK as an exciting place for emerging talent across all STEM disciplines, and in AI in particular, to hone their skills.”


Faculty’s Fellowship programme trains post-PhD STEM graduates and provides them with industry experience to equip them for future careers in data science. 

Find out more about the Fellowship and how to get involved.

