Scarcity in a world of abundance
Our Principal Product Manager, Tom Oliver, explores why we need to reflect on the broader implications of implementing more AI into our daily lives.
In discussions about artificial intelligence (AI), particularly among techno-optimists and effective accelerationists, a familiar refrain emerges: 'AI will usher in an era of abundance'.
In his ‘Moore’s Law for Everything’ blog post from March 2021, Sam Altman, OpenAI's CEO, argued that the costs of both intelligence and energy will trend toward zero. He used this to motivate the claim that the fruits of AI progress will be an unadulterated good. If you haven’t read it, I recommend you do.
The nub of the argument is clear enough – AI is coming for everything, and when it does, everything will be cheaper, more plentiful, better.
It's a seductive vision. But like any utopian vision, it deserves careful scrutiny. While computing resources, and the ability to deploy them in the form of intelligence, may indeed become extraordinarily abundant, real-world constraints, scarcities and choices will still exist. It’s not just me saying this – it’s central to Mark Zuckerberg’s strategy at Meta too. These constraints – some old, some new – will shape how, and by whom, the benefits of AI are distributed and enjoyed.
The most important things for us to get right in this future go beyond the widespread adoption of AI. It’s essential to reflect on the broader implications that implementing more AI into our daily lives will have on us and our societal structures.
In this article, I’ll explore why we need to focus on making sure AI is developed and used to serve genuine human flourishing, not narrow metrics of efficiency or profit alone.
Constraint space
The most obvious constraints are physical. No matter how smart our AI systems become, we still face hard limits on land, raw materials, and environmental capacity. Each acre used for solar panels can't simultaneously grow food. Each tonne of lithium mined for batteries isn't available for other uses. Each tonne of greenhouse gas emitted adds to the warming effect.
My argument here is not that access to more intelligence couldn't enable us to discover better configurations and uses of these constrained resources; I expect it will. My argument is that, at some point, the constraints will bind. And we will have to choose.
These physical constraints, however, are just the tip of the iceberg. The more subtle – but I believe more important – considerations relate to our growing understanding of happiness and wellbeing.
Beyond a certain threshold, more stuff doesn't make us happier. Economist Richard Easterlin identified what's now known as the "Easterlin Paradox" in his seminal 1974 work – as societies grow wealthier over time, their average happiness levels tend to plateau rather than rise proportionally with GDP growth. Similarly, in their landmark 2010 study, Daniel Kahneman and Angus Deaton found that emotional wellbeing rises with income only up to about $75,000 annually (in 2010 dollars), after which additional income yields diminishing returns for day-to-day happiness.
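To make "diminishing returns" concrete, here is a stylised sketch (my illustration, not a result from either paper). Suppose day-to-day wellbeing tracked the logarithm of income:

u(y) = log y, so the marginal gain u′(y) = 1/y shrinks as income y grows.

On such a curve, the step from $37,500 to $75,000 buys the same wellbeing gain (log 2) as the far larger step from $75,000 to $150,000; each successive dollar simply matters less.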
This occurs partly because we judge our station in life not in absolute terms, but relative to others. As the "wealth floor" rises, so too does the standard against which we measure our own lot in life. A house that would have seemed luxurious to one’s grandparents can feel merely adequate to the grandchild two generations on. These are relative, not absolute judgements. We live in social worlds where what matters is often not what we have, but what we perceive we have compared to others. Many of us wish this weren’t so, but it does appear to be.
I vote we plan accordingly.
The new economic questions
AI progress forces us to confront fundamental unanswered questions about production in an economy where the main constraint being lifted is not physical labour or capital, but cognitive work.
How much latent demand exists for products and services whose production has been bottlenecked by human intelligence? What happens when this constraint is dramatically and rapidly loosened?
Some anticipate an explosion of new markets – personalised education, healthcare, entertainment, and services previously available only to the wealthy suddenly democratised through AI. The incremental costs of software development seem to be plummeting. But the picture is not all rosy. There are vital questions about who captures the spoils.
If AI systems have been built on the past cognitive labour of workers, creators and owners (extracted through training data), while the economic benefits flow disproportionately to those who own the models, we risk exacerbating inequality.
More worryingly, we've learned from the social media era that optimising for revealed preferences – what people choose with their System 1 when presented with options – doesn't straightforwardly lead to human flourishing. We click, we scroll, we engage with content that triggers dopamine responses, yet often feel worse for it. Not to mention the thumb-on-the-scale problem I mentioned in a previous blog.
What happens when AI systems become even more sophisticated at targeting our revealed preferences, now with auto-generated, hyper-personalised, perhaps superintelligent (?!) content? What happens when the cost of all this content plummets? When I see my AI-generated news and you see yours, how do we coordinate and act politically? Who will decide what we see, and why? We may create vast new markets serving desires that don't – or only notionally – improve our lives.
The social value dilemma
The capitalist’s response – at least that of a capitalist with a healthy stake in the AI future – to concerns about AI-driven inequality naturally points to value creation: "Look at all these new products and services generating economic activity and wealth." This opportunity is important and we want to capture it. But this narrow argument misses two critical points. First, market creation without meaningful human fulfilment is a Pyrrhic victory. Second, the distribution of benefits matters enormously for social cohesion.
It has become more or less a cliché to point out that as AI increasingly handles routine tasks and content creation, genuine person-to-person interaction will likely gain premium value. I hope so. The physician who now has more time to connect personally with patients. The teacher who can understand and cater to each student's unique needs. The artist whose work reflects lived human experience and expression. The restaurateur who leaves people filled with joy at a great evening well spent. These "human touch" elements may become increasingly scarce and valued precisely because they cannot be algorithmically generated.
At the same time, we face a crisis of moral authority in determining who deserves what. Traditionally, societies have relied on shared virtue frameworks – religious, civic, or cultural – to justify resource allocation. In modern societies across much of the “West” and elsewhere, these frameworks have begun to be supplanted, leaving market logic as the dominant arbiter of worth. Markets alone, I would argue, have proven unable to create the sense of fairness and justice needed for social cohesion, especially when the gap between the wealthiest and everyone else reaches historically unprecedented levels.
We need to rediscover a shared moral philosophy and re-enshrine it in institutions fit for the AI era, or things may well fall apart.
The social fabric at risk
The perception that resource disparities result from an unfair system rather than legitimate contribution corrodes social trust. This corrosion accelerates when extreme outliers emerge – individuals or corporations controlling resources orders of magnitude greater than those of average citizens, and even of some states. The damage is psychological as well as material; studies consistently show that inequality itself, independent of absolute wealth levels, correlates with reduced trust, increased stress, and poorer health outcomes.
We've already witnessed how centrally controlled digital platforms, optimised to give us more and more of what we seem to want, can damage rather than enhance human welfare. Social media companies didn't set out to increase teenage anxiety or political polarisation, but these emerged as byproducts of optimising for engagement. And it’s hard to put the genie back in the bottle, especially when most of our pensions and collective wealth now have a healthy stake in the surveillance-capitalism business model. This is not me saying these tech firms are villains; just that there are externalities to everything, and if we don’t price them in, at some point someone pays.
In the end, my great fear is that AI systems – operating at far greater scale and sophistication – could similarly optimise for metrics and a business model that satisfy surface preferences while undermining deeper human needs. If this happens, I expect a world that's materially richer, unequally shared, and experientially poorer.
Conclusion: the allocation imperative
As we venture further into the AI era, our greatest challenge won't be maximising growth. It will be ensuring these new AI systems are genuinely valuable and support human growth and wellbeing.
This isn't a technical problem, but a deeply philosophical and moral one. It requires us to ask what we truly value, what makes a good life, and how tech-driven abundance should be harnessed in service of those ends. Ideally, as a liberal and a believer in democracy and the ideals of the Enlightenment, I would root for any system that seeks to elicit and act on everyone’s answers to those questions – which I’ll cover in a future blog. The greatest danger isn't that AI will outcompete humans – though that is also on my mind – but that we'll fail to develop adequate mechanisms to direct its power toward worthy goals. Taken as a whole, this discussion highlights the critical importance of implementing safe, impactful and human-centric AI.
In the next instalment of this series, we'll explore the challenge of steering these complex systems, and how we might develop approaches to guidance and alignment that reflect our collective aspirations rather than simply extending existing power structures into the future.