Closing the AI skills gap: takeaways from the Manchester & London AI Fringe  

2023-11-16
Samuel Hanes
Director of Performance

Seemingly everyone has worried about their job security or career prospects since the launch of ChatGPT (and other generative AI tools) almost a year ago. This week, I was lucky enough to explore this issue at two events at the AI Fringe: a session in Manchester with Mayor Andy Burnham, and a panel in London with Paul Dongha, Group Head of Data and AI Ethics at Lloyds Banking Group; Philip Colligan, CEO of the Raspberry Pi Foundation; and Anne-Marie Imafidon, CEO of Stemettes.

Should we be worried about AI taking our jobs?

A quick show of hands at the beginning of each session showed that almost everyone in our audiences had considered their own prospects since the advent of generative AI.

Paul Dongha described how frequently his colleagues worry about AI affecting their jobs and seek his reassurance.

“I talk to people every day who have jobs that process data and interact with our customers, and they do worry about AI. I spend a lot of time telling people the human brain hasn’t been decoded. I think we have real agency and a real opportunity to ensure that we harness the best of it and minimise the risks of workforce impact.”

Personally, I’ve gone through a few ups and downs on the topic – sometimes feeling like a leaf blowing in the wind, my outlook shaped by the views of the last AI expert I’ve spoken to. For example, a passionate young computer science academic described with confidence a near future in which human labour is no longer needed, whereas my colleagues at Faculty are typically much more optimistic about the future need for human labour. Fortunately, the panel at the AI Fringe was at the more upbeat end.

Even in very technical jobs such as software engineering, AI is changing the long-term outlook. As you may have seen, new tools like Microsoft’s Copilot are increasingly able to write code that works. The opening audience question in Manchester, and the first in our panel in London, was whether people should still bother learning to code themselves.

Philip Colligan remains confident that while these tools make it easier to develop software, they primarily increase the productivity of (and perhaps the demand for) software engineers.

“The invention of compilers and languages like Python dramatically improved people’s productivity and the ease with which anybody could write a script to give instructions to a computer. Large language models and Copilot are a fractional improvement compared to what those things did, and both of those technological innovations increased the demand for programmers. I think we’re slightly forgetting the lessons from history. Making it more efficient and easier for a human to give meaningful instructions to a computer is likely to lead to more people needing to have those skills.”

In his address, Andy Burnham concurred and doubled down on the need for (perhaps mandatory) computer science in our schools. So perhaps at least those of us interested in technology careers can feel optimistic.

The nature of this optimism is perhaps rooted in an understanding of where AI and robotics are unable to do what people can. Even those of us most worried about the impact of generative AI can empathise with the experience described by Anne-Marie Imafidon: firing up a generative AI tool expecting magic, then feeling suddenly more confident in the future of humanity after getting a disappointing, absurd or humorous result.

“If you spend enough time with generative AI, you get to see some weird and incorrect hallucinations and then you think, actually maybe I’ve got a little bit longer.”   

What can’t AI and robotics do?

When preparing for the Manchester session, I wanted to remind myself what, in the recent past, we thought computers would and would not be able to do. I re-read a brilliant paper from 2013 – The Future of Employment: How susceptible are jobs to computerisation? In it, the authors Carl Frey and Michael Osborne used a range of clever methods and data sources to estimate that 47% of US jobs were at high risk of automation ‘within the next decade or two’ using only the technology that existed in 2013. Clearly an alarming number under any circumstances, but I wanted to know: what makes the remaining 53% of the labour market safer?

In the paper, the authors describe three ‘bottlenecks’ to automation – things that experts in 2013 felt AI and robots struggled to do. The first was perception and manipulation. Robots have traditionally struggled to perform tasks that involve fine motor skills – especially in small spaces or with delicate objects. I’m told that even a small child can hold an egg without breaking it (as a father of three, I’m sceptical) – but while robot hands are often strong and precise, it’s hard to replicate the feedback loop between the millions of nerves in our hands and the muscles in our forearms that control our fingers. So while we know exactly how much pressure to apply, robots often don’t. Ten years ago, when the paper was written, this bottleneck meant plumbers and surgeons were considered safe from automation.

But this first bottleneck is rapidly being overcome. Recent developments such as Nvidia’s Eureka show how AI models can rapidly train robots in simulation to perform very challenging manipulation tasks – for example, spinning a pen between a robotic hand’s fingers or throwing and catching a ball.

The second bottleneck was social intelligence – in particular, the ability to understand people and persuade them to do things. Your opinion on whether we have passed this bottleneck will depend a little on your view of how much social intelligence we humans have.

The most exciting recent advance in this domain is Meta’s CICERO, which can play the complex social game Diplomacy – an old and much-loved board game where players build alliances (and sometimes betray each other) through a series of private one-to-one conversations. CICERO combines two AI engines: one that works out a game-winning strategy, and another that persuades the human players to go along with it. It can already perform as well as human players in real games.
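For readers who like to see the shape of such systems, here is a minimal, purely illustrative Python sketch of that two-engine split. It is not Meta’s implementation – every class, method and message below is a hypothetical stand-in – but it shows how a strategy planner and a dialogue generator might be wired together in a Diplomacy-style agent.

# Illustrative sketch only: a two-engine agent in the spirit of CICERO.
# Neither class corresponds to Meta's actual code; both are hypothetical stand-ins.
from dataclasses import dataclass, field


@dataclass
class StrategyEngine:
    """Decides which moves the agent intends to make, given the board state."""

    def plan(self, board_state: dict, our_power: str) -> list[str]:
        # A real system would use a learned planner; this stub returns a fixed order.
        return [f"{our_power}: A PAR - BUR"]


@dataclass
class DialogueEngine:
    """Drafts messages intended to bring other players onside with the plan."""

    def draft_message(self, recipient: str, intended_moves: list[str]) -> str:
        # A real system would condition a language model on the plan;
        # this stub just fills in a persuasive-sounding template.
        return (f"Hi {recipient}, I'm planning {intended_moves[0]} this turn. "
                f"Support me and I'll back your move next year.")


@dataclass
class DiplomacyAgent:
    power: str
    strategy: StrategyEngine = field(default_factory=StrategyEngine)
    dialogue: DialogueEngine = field(default_factory=DialogueEngine)

    def take_turn(self, board_state: dict, other_players: list[str]) -> dict:
        moves = self.strategy.plan(board_state, self.power)  # engine 1: what to do
        messages = {player: self.dialogue.draft_message(player, moves)  # engine 2: how to persuade
                    for player in other_players}
        return {"orders": moves, "messages": messages}


if __name__ == "__main__":
    agent = DiplomacyAgent(power="FRANCE")
    print(agent.take_turn(board_state={}, other_players=["ENGLAND", "GERMANY"]))

The hard part, which this sketch glosses over entirely, is keeping the persuasive messages genuinely consistent with the plan – that coupling between the two engines is what makes CICERO notable.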

The final 2013 bottleneck was creativity. The advent of generative AI tools such as Midjourney, ChatGPT, Bard, DALL-E and Anthropic’s Claude raises the question of whether creativity will remain a comparative advantage of humanity – or, more fundamentally, how we define creativity to begin with.

To stay relevant, learn fast

So none of the three bottlenecks has stood the test of time. No doubt there are new ones that weren’t considered in 2013 – nonetheless, the debate over whether AI and robotics will eventually supersede most or all of our employable skills is certainly getting more interesting.

Although technology has moved rapidly, the rate of automation has been far slower than predicted. Society, business and government simply take time to change. What we do know is that predicting the near future is very challenging: advances in technology come quicker than we expect in some areas, but much slower in others (I’m looking at you, driverless cars).

Should we be optimists or pessimists? My bet is that change will be gradual overall, but will seem far too fast for those affected. Unfortunately, the recent past shows us that trying to avoid being affected will be hard – predicting which jobs, sectors or skills could become obsolete is very difficult indeed. But we need not fret: the answer may lie not in avoiding long-term obsolescence, but in learning rapidly and being ready to do so again when the time comes.

There are programmes like TechUpWomen, Academy and Faculty’s own fellowship that can teach people the skills in desperate demand today – very rapidly – allowing us all to adapt as the demand for skills changes around us. A look at the recent past suggests that learning ‘future-proof’ skills is likely to be a fantasy. Those who succeed in tomorrow’s labour market will have to retrain quickly and regularly.


How can your business help close the AI skills gap and trial an AI project?
Find more information here.