The first AI Fringe: key themes to watch 

The UK will host the AI Safety Summit across the first two days of November. As the first international event of its kind, the summit is an opportunity for the UK to confirm its place as a leader in the global AI discussion.

2023-10-26 · Safety

Equally exciting is the launch of the AI Fringe: a complementary event series running from 30th October to 3rd November. The series will bring together public sector representatives, industry leaders, and academic voices to discuss all things AI safety. At Faculty, we’ve been a leading voice on AI safety for the past decade and are a proud founding partner of the AI Fringe.

But what themes will dominate the discussions? What insights should attendees and online viewers expect to gain? 

We asked our subject matter experts and fellow panellists for their predictions on the themes they expect to be discussed across both events.

Focus on frontier AI, from a wider range of voices

The AI Safety Summit has five listed objectives. All are related to how the international community can agree on a collective approach to the risks presented by ‘frontier AI’ models. 

These are highly capable general-purpose AI models that can perform a wide variety of tasks to a level that matches or exceeds the capabilities of the most advanced models available today.

We spoke to Faculty cofounder, Angie Ma, on the scope of the Summit and how the AI Fringe offers a platform for a wider share of voices to be involved in this vital conversation.

Angie, what are your thoughts on the scope of the AI Safety Summit towards handling frontier AI? How can the AI Fringe help support those conversations?

“It’s right for the government’s AI Safety Summit to focus on how best to manage the risks from the most recent advances in AI. But it’s also hugely encouraging to see the government’s focus and commitment to making the UK a significant voice in this space.

By the nature of a government-hosted global summit on AI, there can only be so many people in the room. But there are a wealth of other voices across wider society, academia and business who can and should contribute to the debate.

Our ambition with AI Fringe is to create a space for a broad range of opinions and viewpoints to complement and add to the topics covered at the AI Safety Summit. This will be a place for people to come together and demonstrate that diversity in the AI ecosystem will be one of its strengths. 

The AI Fringe will add to the discussion by broadening participation, particularly as it takes place during the same week as the AI Safety Summit with all eyes on the UK.”

Navigating AI's potential risks

Faculty’s cofounder and CEO, Dr. Marc Warner, will be sharing the AI Fringe stage with Accenture UKIA CEO Shaheen Sayed, author and University of Oxford professor Carissa Veliz, and the Financial Times’ Innovation Editor John Thornhill for the session, “Successfully navigating the AI hype cycle”.

The talk will dive into the potential of advanced AI models and how unlocking these benefits must be balanced with managing the risks. 

Marc, part of your upcoming talk at AI Fringe is on avoiding the potential risks that AI presents. What should organisations be mindful of here?

“It’s possible for organisations to realise the benefits of AI without having to face dangerous risks. A key component of this is applying AI to the right kind of problems. We know from experience that ‘narrow’ applications are safer because they have specific, predetermined goals. 

They are also usually the most organisationally successful, because they are laser-focused on solving a specific problem – not something loose or undefined. For example, using AI to forecast when hospital patients might be discharged so that beds can be freed up and more patients can be treated.”

When addressing AI safety, you’ve often stated that businesses and wider society should take a ‘human-first’ approach. What role should humans have in overseeing the output of the AI models that businesses put in place?

“One of the core AI safety principles we’ve long held at Faculty is that humans should always set the goals and be in control of what an AI system is doing, and that this should be a design principle from the ground up.

“Human-first AI means constant, explainable oversight of what your AI systems are doing and why. It means humans keep the final sign-off over an AI system’s output so that biased, harmful or false information is stopped at the source. This means investing in AI transformation strategies and the right technology platforms that allow for system-wide governance and oversight.”

Cutting through the hype

AI has dominated headlines this year, aided by the widespread adoption of tools like ChatGPT from our partner OpenAI. And the ‘hype cycle’ around this transformative technology has yet to show signs of slowing down.

Shaheen Sayed, Accenture’s CEO UK, Ireland and Africa, will be joining Marc on stage to discuss how businesses can cut through the hype around AI, unlock tangible value, and create long-term impact.

Shaheen, you will be speaking at the AI Fringe on how to navigate the hype cycle around AI. How significant is the AI opportunity for organisations today? And what steps should leaders take to future-proof their investments?

“ChatGPT captured people’s imagination because it ushered in a technology that felt playful and powerful at the same time. Anyone could experiment and feel instant gratification from the immediacy of the results. Our new human paradigm is to expect a response in the moment, and there is now little difference between our social expectations and those of the world of work.

The opportunity is significant; no organisation is immune, and we are already seeing strong demand from industries like banking, insurance, capital markets, communications, and life sciences. Business leaders need to help their employees succeed with Gen AI, through embracing this new “superpower” and using it as a tool to enhance performance. 

This means establishing the necessary training and support for their teams as well as being systematic in how roles and operating models are designed. By default, the idea of managing AI responsibly will be part of every leader’s DNA. 

This is not just about wielding technology safely in your given organisation; it’s bigger than that. It will require a mindset and cultural shift for executives and our places of work that is frankly unprecedented.”

Bridging the AI skills gap

The World Economic Forum (WEF)’s 2023 Future of Jobs report estimates that 44% of worker skills will be disrupted over the next five years. Advances such as generative AI will undoubtedly play a significant role in that disruption. 

Analytics, technological literacy and creative thinking are listed as the skills most likely to be in the greatest demand in the new AI economy. So how can we best prepare both the existing workforce and those entering the market to enhance their work with AI?

Samuel Hanes, Head of Faculty’s Fellowship programme, will be addressing this question on a panel with Raspberry Pi CEO Philip Colligan, Stemettes CEO Anne-Marie Imafidon, Lloyds Banking Group’s Paul Dongha, and Indeed’s Trey Causey.

For Hanes, the time is now to start defining what these skills should look like.

Samuel, how do you see the proliferation of AI shaping the skill sets needed in the modern workplace?

“AI is changing the way we all work, and with this transition comes huge implications for the skills that the next generation will need over the next decade. For those in technical and non-technical roles alike, access to generative AI models will reshape how we approach everything from coding to team management skills.

It’s something we’re already seeing play out with the Fellowship as we help find and develop the next leaders in AI and data science. I’m looking forward to sharing the insights we’ve uncovered through our work, and learning from our expert panel joining us on 2nd November.”

A defining week for AI safety and inclusion

Across the five days of AI Fringe, we will hear from the most prominent minds and thought leaders in the space today. The series of events will also give a chance for a wider range of communities across business, academia and civil society to come together and demonstrate that diversity in the AI ecosystem will be one of its strengths going forward. 

Whether you’ll be joining us in person or tuning in online, we look forward to seeing you there.  
Find out more and register for the AI Fringe here.