The Geopolitics of AI

Our CEO and co-founder, Dr Marc Warner, shares his perspective on AGI – why it's such a big deal and what impact it could have geopolitically.

2025-03-17 · AI Development
Dr Marc Warner
Chief Executive Officer & Co-Founder

Last weekend, I went to an international security conference in St Moritz. The first thing that struck me was the nature of the discussions – there was little effort to precisely define, understand, or solve the challenges ahead. It was frustrating, but maybe inevitable, given the long timelines and slow feedback loops of international politics.

For my panel, I was asked to talk about how AI will change warfare. I took a broad view – I think short-term tactical gains will be countered by opponents, but the real shift will come from how AI reshapes society itself, and from the downstream consequences of that.

So what would constitute a shift like this? In my remarks, I made the claim that AGI within five years could be that profound, and then went on to think a little about what it might look like, and what impact it could have geopolitically.

What is AGI?

In AI, as those who have heard me discuss this before will know, I find it helps to think in terms of a spectrum – from narrow to general. Narrow AI refers to domain-specific algorithms designed for particular tasks – playing chess, spotting tumours, high-frequency trading. While these systems can fail in individual cases, their risks are mostly understood, making them manageable and (relatively) easier to regulate.

General AI (most often called Artificial General Intelligence, or AGI) is different. It aims to be closer to a universal problem solver, like a human mind, capable of tackling any challenge thrown at it. Nowadays, people tend to define AGI as ‘roughly expert human level at roughly all cognitive tasks’. We don’t yet know if we can build AGI, but many (including me) believe we’re getting closer.

Predictions of AGI arriving within 2–5 years are no longer considered outlandish, although there are still sensible doubters. No one knows how to build AGI in a way that guarantees alignment with human values – or how to prevent that alignment from shifting over time. Given what’s at stake, that’s a serious problem.

Why it’s a big deal

Let’s take seriously the idea that AGI could arrive within five years. What should we expect? It would, by definition, be capable of almost all cognitive tasks currently performed by people. A few human domain-specific experts may still be needed to generate new knowledge, but once that knowledge is codified into data, AGI could match human-level expertise.

What’s the right comparison for this? In the same way that the industrial revolution was the automation of physical labour, this is the automation of cognitive labour. And the industrial revolution was a very big deal.

The industrial revolution reshaped the world – where we live, what we eat, how we work. But just as importantly, it created a vast divide between industrial and non-industrial nations. That gap shaped global power structures for centuries: industrial (read: Western) powers dictated world affairs, and their technological edge allowed them to set the terms of markets, alliances, and conflicts. AGI powers will hold an overwhelming advantage over non-AGI nations – just as industrial nations once did over pre-industrial ones. Any conflict between them would be entirely one-sided. But this kind of power won’t just reshape geopolitics – it could also bring remarkable benefits.

Imagine fully personalised education, medical breakthroughs beyond today’s imagination (a pre-industrial society wouldn’t grasp an MRI scanner, let alone personalised cancer treatment), and levels of wealth that make today’s economy look primitive.

What should we do?

Now, we don’t know when AGI will happen. We certainly don’t know whether it will be good or bad. But that doesn’t negate the need to try to shape the future towards a more positive outcome, however modestly.

Fundamentally, it was this concern that led us to start Faculty and focus our efforts on the safe adoption of AI. We believed, and still do, that safely adopting AI will be a net positive in most possible futures. Prioritising ‘safety’ means we minimise both accidental and malicious risks. Focusing on ‘adoption’ means the technology reaches many, not just a few – broadening access, spreading the economic benefit, and preventing power from becoming too centralised. These feel like positive contributions in an uncertain future.