The rapid rise of ChatGPT has catapulted artificial intelligence (AI) into the public consciousness, sparking both excitement about its potential and unease about its wider implications.

This was a key topic of discussion at last week’s AI Fringe, a series of events held across the UK to complement the Government’s first AI Safety Summit. By bringing a broad and diverse range of voices into the conversation, the AI Fringe aimed to expand the discussion around safe and responsible AI, and to provide a platform where different communities could engage with the technology and explore its potential.

So what considerations and steps do we need to take to achieve this goal?

Across the week, there were panels, fireside chats and keynotes, featuring prominent contributors in the field of AI, including our very own CEO, Marc Warner, and Director of Performance, Sam Hanes.

Here are some of our key takeaways from the week’s discussions:

Innovation must be balanced with safety

During the first day of the event, our CEO Marc Warner spoke on a panel hosted by the Financial Times’ Innovation Editor John Thornhill, joined by Accenture UKIA CEO Shaheen Sayed and University of Oxford professor Carissa Veliz.

The key to using AI ethically will be transparency, said Oxford professor Carissa Veliz, adding that not only is it the right approach, but consumers will increasingly demand it.

“People want to be able to trust AI. The company who manages to use the technology [AI] in a way that is completely trustworthy is going to win this race.”

Our CEO Marc Warner added that it is important to be optimistic about the potential of AI and the advantages it can offer, but not to rush into it without carefully considering the implications and safeguarding against its risks.

“We have a great advantage when we have effective, safe technologies at our fingertips. However, I’m not a blind optimist who believes we should rush in recklessly. Instead I advocate a balanced approach that considers progress at the right pace.”

Accenture UKIA CEO Shaheen Sayed called for a continued partnership between the public and private sectors in the UK to ensure proper regulation.

“This is the point where we start to see the true convergence of the private and public sector. We’ve never done this before, with this level of technology, that moves this fast. But that shouldn’t be feared with the right people working together.”

The future of work in the AI era depends on education and a human-first approach

As part of the Digging deeper: Work, safety, law and democracy panel series, our Director of Performance, Sam Hanes, hosted a session on bridging the AI skills gap with Dr Paul Dongha, Group Head of Data and AI Ethics at Lloyds Banking Group; Philip Colligan, CEO of the Raspberry Pi Foundation; and Anne-Marie Imafidon MBE, Co-Founder and CEO of Stemettes.

During the discussion, the group explored the questions AI is raising in the workforce, with many people worried about job security, career prospects and the impact on future generations.

According to Anne-Marie Imafidon, it is important to be optimistic and open-minded about what people will end up doing with this technology. We should also consider what skills we need to develop for a world we do not yet fully understand.

Dr Paul Dongha shared how much time he spends with his team at Lloyds reassuring those working in data-focused roles that they are not about to lose their jobs.

“We must remember that the human brain has not been decoded by AI. If we manage this carefully – I mean corporations as well as the government – we can control how AI behaves and how it impacts us. The destiny of how it is rolled out is within our control. I think we have real agency and opportunity to ensure that we harness the best of it and at the same time minimise its risks.”

The purpose of AI, he explained, is to complement human capabilities.

“The bottom line is any deployment of AI is a co-pilot, it’s there to augment roles and help us be more productive and feel more satisfaction in our jobs.”

Reaping the benefits of AI, whilst avoiding the risks, means moving from conversation to action

Closing out the week, Melissa Heikkilä, Senior Reporter for AI at MIT Technology Review, hosted a fireside chat with Francine Bennett, Interim Director of the Ada Lovelace Institute, and Lila Ibrahim, Chief Operating Officer of Google DeepMind, to discuss their key takeaways from the UK AI Safety Summit and their hopes for the future.

For Lila Ibrahim, the most inspiring part of the AI Fringe was opening up the conversation on AI and bringing so many different voices and perspectives to the table. She added that the next key step is to capitalise on this moment.

Francine Bennett agreed, adding:

“We’ve had a lot of good general conversations and declarations, which seem very positive. So now we need to think about how to operationalise that. How do we make it into hard legislation and laws? How do we ensure there’s the capacity to do the things we want to do to make this technology really work for people and society?”

Lila added that open dialogue about the future of AI is bringing accountability to the forefront, something she has been eager to see.

“This is our moment to deliver and think about how we can develop this technology with future generations in mind. It’s a huge responsibility but it’s also great to be having these conversations with so many people now. This is not about any one country, company or sector, it needs everyone to come together.”

Francine’s top tip is to always approach AI with a people-first mindset.

“It’s easy to go down a rabbit hole of technology which is super fun and interesting but it does not solve the problems. The outcome will always be much better if you start with the perspective of the people and society first.”

What’s next?

Looking back at a remarkable week in the field of AI, our CEO Marc Warner points out that success starts with opening up the conversation, paving the way for meaningful progress towards safe AI.

“For me, success is getting the conversation started. It would have been inconceivable a few years ago to imagine world leaders coming together to talk about AI safety. I think it’s extremely admirable that we are having a sensible conversation before we are hitting the problem. Imagine if we had done the same with something like global warming. Now the conversation has started, we must make sure it continues.”


What could your business gain from putting people first in your AI transition?
Let’s chat and find out.

