We're all philosophers now

Our Principal Product Manager, Tom Oliver, presents a fresh perspective on how the rise of AI will impact our roles and the skills we'll need moving forward.

2025-04-23
AI Development
Tom Oliver
Principal Product Manager

The conversation about AI and the future of work tends to run at two extremes. On the one hand, you have utopian visions of liberated humans in post-scarcity abundance. On the other, dystopian forecasts of mass unemployment and even outright doom. 

This polarised debate has regrettably become typical of our public discourse today. Which means we’re neglecting the most interesting insights lurking in the centre ground. 

As AI progress continues, a dramatic shift in the nature of human work is inevitable regardless of p(doom). This shift has already begun and will speed up dramatically in the years ahead.

With AI moving up the stack of work, what remains for humans isn't nothing – it's everything that matters most.

This is the first instalment of a five-part blog series where I’ll explore humanity's evolving role in an era of increasingly capable AI. Below, I'll discuss why human work will shift more toward goal-setting than goal execution.

How should we understand intelligence? 

In ‘Human Compatible’ Stuart Russell defines intelligence as "the ability to achieve goals in a wide range of environments." This emphasises that intelligence isn't just about having knowledge or smarts. It’s about using what you know in the right way to accomplish your objectives. 

Another influential definition comes from the psychologist Robert Sternberg. He describes intelligence as "mental activity directed toward purposive adaptation to, selection of, and shaping of real-world environments relevant to one's life."

Both definitions share a common perspective. Intelligence is about the successful pursuit of goals. 

AI doesn’t decide what you want to do

What's often lost in discussions about AI capabilities is a fundamental truth: these systems are “just” pursuing goals that people have set.

When you type a prompt like "Write me a business plan for a sustainable coffee shop" into ChatGPT, you're setting the goal. The AI doesn't have its own agenda or desires – it's trained through millions of iterations to predict what text should follow your prompt. 
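
To make "predicting what text should follow" concrete, here's a deliberately tiny sketch in Python. It counts word pairs in one made-up sentence instead of running a neural network over a vast corpus – everything in it is illustrative – but the task is the same one large language models are trained on: given the text so far, guess what comes next.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram model built from one made-up
# sentence. Real systems like ChatGPT use neural networks trained
# on vast corpora, but the task is the same: given the text so far,
# predict what comes next.

corpus = "write me a business plan for a sustainable coffee shop".split()

# Count which word follows which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, if any."""
    candidates = followers[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("a"))       # 'business' ('a' precedes both 'business' and 'sustainable')
print(predict_next("coffee"))  # 'shop'
```

The model never chooses what to write about – the corpus and the prompt word come from outside it. That's the point of the paragraph above: the goal arrives with the input.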

Behind the scenes, developers have shaped the system through a process called gradient descent. This is essentially a mathematical method that gradually adjusts the system's parameters so it better achieves its assigned objective of producing helpful, accurate text. In other words, its intelligence is trained in.
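
To see the mechanism, here's a deliberately minimal sketch of gradient descent in Python. It fits one toy parameter to made-up data rather than training a language model – every number here is illustrative – but the loop is the same idea applied at vastly larger scale: measure how far the output is from the objective, then adjust the parameters to close the gap.

```python
import numpy as np

# Toy gradient descent: fit a single parameter w so that w * x
# matches the data. A real LLM adjusts billions of parameters
# against a next-token prediction loss, but the update rule is
# the same idea: measure the error, then nudge the parameters
# in the direction that reduces it.

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs              # data generated with the "true" w = 2

w = 0.0                    # initial guess
learning_rate = 0.01

for step in range(200):
    error = w * xs - ys                 # how wrong each prediction is
    loss = np.mean(error ** 2)          # distance from the objective
    gradient = np.mean(2 * error * xs)  # slope of the loss w.r.t. w
    w -= learning_rate * gradient       # step "downhill"

print(f"learned w = {w:.3f}, final loss = {loss:.6f}")  # w converges towards 2.0
```

Notice what the algorithm does not do: it never questions the objective. The loss function – the definition of success – is handed to it by people.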

This pattern holds across all AI systems today. They don't decide what to do; people do. They don't choose what goals to pursue; people do. Assuming we can solve alignment (discussion of which I’ll reserve for a future blog), this should remain true. 

As AI becomes more capable of performing human labour – including not just routine tasks but increasingly creative and analytical work – this distinction becomes crucial. 

AI systems need people to specify the goal they should pursue in training and at inference time.

Philosopher kings

This isn't as abstract as it might initially sound. In fact, these dynamics are already visible in today's society.

Consider some of the roles that command the highest status and compensation in our society: CEOs, investors, policymakers, and other leaders. What is their primary function? It isn't to personally execute tasks – it's to determine which tasks are worth doing at all. They form judgments about objectives, trade-offs, and values. They decide what problems their organisations should solve and what goals they should pursue. 

Consider great musicians, artists, and creators. They imagine new things they could bring into the world and decide whether those things will resonate with others. That resonance is, in some sense, a proxy for what will be considered valuable.

These are fundamentally philosophical questions: What is valuable? What is the good we should aim for? How should we balance competing values? What kind of world do we want to create? What goals should we set? Which should we pursue at the expense of others?

In the end, what is left for us is to determine who wants what, who gets what, and why. We're all philosophers now.

The new dividing line

This suggests a profound change in how we should think about the future of work: not as a competition between humans and AI, but as a division of labour between goal-setting (people) and goal pursuit (AI). 

This partnership will be central to how resources are allocated and consumed, as well as the value – both real and perceived – that results. As this division of responsibility takes hold, our economic systems will necessarily transform to reflect this new reality.

In the past, technological revolutions created dividing lines within human labour markets. The industrial revolution drove a distinction between manual and cognitive work. The information revolution accelerated this shift, privileging knowledge workers over physical labourers.

The AI revolution is different. Rather than creating new divisions between types of human work, it fundamentally redefines the boundary between human and machine contribution. 

As AI capabilities expand, the domain of uniquely human work contracts. But it contracts toward something essential: the determination of which goals are worth pursuing in the first place.

This has profound implications for how we prepare people for the future, how we structure our organisations, and how we think about the distribution of power in society. 

If only a small elite is empowered to set objectives for increasingly powerful AI systems, we risk amplifying existing inequalities and ceding control to ever fewer hands. The democratisation of goal-setting power becomes not just a question of fairness, but of ensuring that the objectives pursued by increasingly powerful systems reflect our collective values and aspirations.

Conclusion: The path forward

The future of work in an AI-enabled world isn't about humans competing with machines at tasks machines are increasingly better suited to perform. It's about humans focusing on what remains essentially human: determining which goals are worth pursuing and why.

This transition won't be easy. Our economic systems, educational institutions, and social structures aren't currently designed to distribute goal-setting power widely or to develop this capacity at scale. And this transition will be disruptive. Jobs will be lost. Industries will be transformed. Skills will become obsolete.

But rather than simply reacting to these changes, we have an opportunity to proactively shape this transition. The question isn't just how to prepare for a world where AI can do more of what people currently do. It's how to create a world where people – all people – can do more of what truly matters: exercising judgment and giving voice to which goals are worth pursuing and why.

In the next part of this series, we'll explore the stubborn persistence of resource constraints, even in a world of AI abundance.