The false dichotomy of platform power: Why the content moderation debate misses the point

One of our Product Managers, Tom Oliver, explores the need for transparent and open discussion about both the censorship and amplification levers on social platforms, and the effects both have on our information landscape.

2025-02-07
Tom Oliver
Product Manager

In recent interviews, both Mark Zuckerberg and Marc Andreessen have made passionate and well-reasoned cases for moving away from content moderation toward a more hands-off, "free speech" approach to governance in social media. Their arguments are compelling on the surface – Andreessen's analogy of content moderation to Tolkien's "ring of power" particularly resonates. Who would deny the corrupting influence of censorship at the scale of Meta or X?

Let me be clear: I've spent years deeply concerned about the creeping influence of what many have termed the "woke agenda" in tech. I've watched with unease as platform policies and then everyday speech began to mirror what felt eerily similar to Orwell's thoughtcrime – where certain viewpoints, however reasonably expressed, became practically unspeakable. The pendulum swing away from heavy-handed content moderation – a desire to take the hand off the scale – is, in many ways, a necessary and welcome correction.

But as someone who builds AI products myself, I feel profoundly let down by the incompleteness of Mark and Marc's analysis. They're telling half the story. As a political conservative who cares about this topic precisely because I place a high value on intellectual honesty and truth, I can't stay silent about the other half.

The hidden editorial hand

Every social media platform makes millions of “editorial” decisions per second. Not through human moderators, but through recommendation algorithms that determine which content gets amplified and which remains in obscurity. These algorithms aren't neutral arbiters – they're optimised for engagement, which directly translates to advertising revenue. Just like a traditional newspaper editor decides what’s on the front page to drive engagement and sell papers (and ads), social media companies decide what’s on your personal “front page”. 
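To make this concrete, here's a deliberately simplified sketch in Python – with invented names, fields, and weights, not a description of any real platform's system – of what an engagement-optimised ranker looks like. The point is that the "editorial" judgement lives entirely in the scoring function.

# A hypothetical, minimal feed ranker. All names, fields, and weights are
# illustrative assumptions, not a description of any real platform.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float   # model-estimated probability the user clicks
    predicted_dwell: float    # model-estimated seconds of attention
    ad_value: float           # expected advertising revenue if shown

def engagement_score(post: Post) -> float:
    # The "editorial" decision is baked into this objective: whatever is
    # predicted to capture attention (and ad revenue) rises to the top.
    return 0.5 * post.predicted_clicks + 0.3 * post.predicted_dwell + 0.2 * post.ad_value

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sorting by engagement_score is the algorithmic equivalent of
    # choosing what goes on the front page.
    return sorted(posts, key=engagement_score, reverse=True)

Nothing in that function is "censorship", yet it decides, at scale, which sparks get fanned and which sputter out.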

When Zuckerberg and Andreessen frame content moderation as the primary axis of platform power, they're presenting a false dichotomy. It's as if they're suggesting that by removing "censorship," they're creating a truly free marketplace of ideas. But here's the reality: every new piece of content is like a spark that faces three possible fates:

1. Being immediately quashed and extinguished → "censorship"

2. Being left alone as just a spark → effective invisibility in today's algorithmic landscape

3. Being deliberately fanned and fuelled into a raging inferno → algorithmic amplification

The decision to fan certain flames while leaving others to sputter out is just as much an exercise of platform power as the decision to extinguish sparks entirely. Perhaps more so, given the massive reach that algorithmic amplification enables.

Both 1 and 3 are “rings of power”. And Mark and Marc conveniently ignore this across more than six combined hours of interview content.

Follow the money

The crucial detail that's conspicuously absent from these discussions is the role of paid amplification. Platforms aren't just passive conduits for organic content – they're sophisticated advertising engines that allow paying customers to bypass the natural limitations of reach. This isn't just about traditional adverts; it's about sponsored content, boosted posts, and promoted trends.

When Andreessen likens content moderation to the ring of power, he wilfully ignores that there's another ring: the ability to decide which voices get amplified based on their ability to pay and their capacity to attract and hold the eyeballs those advertisers are paying for. This power isn't just corrupting – it's explicitly designed to serve the business over its users.

The real product leadership question

As AI product leaders, we need to be honest about these dynamics. The debate shouldn't be reduced to "censorship vs. free speech." Instead, we should be asking:

  • How do amplification algorithms shape public discourse?

  • What responsibilities come with the power to algorithmically curate reality for billions of users?

  • How should a thumb on the scale that wilfully shapes the information diets, opinions, and minds of three billion people be governed, and by whom?

  • Can we design systems that optimise for societal benefit rather than engagement?

  • How do we balance the legitimate need for sustainable business models with ethical obligations, transparency, and user choice?

Moving beyond the false dichotomy

The solution isn't to abandon content moderation entirely, nor to double down on the heavy-handed censorship approach. Rather, we need to recognise that platform power exists on multiple axes. When tech leaders present content moderation as the primary threat to free expression while ignoring their own algorithmic and financial levers of influence, they're ducking responsibility.

The real conversation we need to have is about the totality of platform power – not just what gets removed, but what gets amplified, why, and who benefits. Until we're ready to have that honest discussion, we're just trading one ring of power for another, perhaps a more insidious one.

For product leaders building the next generation of AI-powered platforms, this understanding is crucial. We can't repeat the same mistakes. AI systems must be transparent about all forms of content manipulation – whether it's moderation, algorithmic amplification, or paid promotion. If we want AI to serve society rather than just engagement metrics, it needs to be guided by human oversight, ensuring accountability and prioritising the greater good.  

At Faculty, one of our core lessons is that AI needs to be built into processes led by people, and trusted by users. We call this human-centric. Trust in AI doesn’t come from obscured algorithms chasing viral moments; it comes from open, responsible systems that make it clear why content is being promoted, who is shaping those decisions, and how those levers are being used in a way that benefits everyone, not just the bottom line.

We don't need excess censorship or opaque amplification. We need transparent and open discussion about both the censorship and amplification levers and the effects both have on our information landscape. Building such systems requires not just a social and cultural shift but real expertise.

The stakes: Two paths forward

Imagine a world where we continue down the current path: platforms abandon content moderation while retaining and expanding their engagement-driven amplification machines. What emerges isn't a free marketplace of ideas, but rather a pay-to-play thunderdome where the loudest voices are those with the deepest pockets and the most inflammatory messages. In this world, truth doesn't rise to the top – whatever generates the most clicks does. Democracy doesn't thrive – it drowns in a sea of artificially amplified outrage and sponsored narratives.

But there's another path. Imagine platforms that are transparent about both their content moderation AND their amplification decisions. Where algorithms are optimised not just for engagement but for societal benefit. Where paid promotion is clearly labelled and reasonably bounded. Where the "ring of power" isn't just cast away – it's transformed into a tool for collective use, direction, and governance.
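What might that transparency look like in practice? One possibility – sketched below with entirely hypothetical field names, not any existing system – is a per-item ranking record that labels each piece of content as moderated, organically ranked, or paid-for, and exposes the weighting behind its placement.

# A hypothetical transparency record: every item shown to a user carries a
# machine-readable account of why it was shown. Field names are illustrative.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Treatment(Enum):
    REMOVED = "removed"          # moderated out (the "censorship" lever)
    ORGANIC = "organic"          # surfaced by the ranking algorithm alone
    PAID_BOOST = "paid_boost"    # amplified because someone paid (clearly labelled)

@dataclass
class RankingRecord:
    post_id: str
    treatment: Treatment
    engagement_component: float  # contribution of attention-prediction to the score
    benefit_component: float     # contribution of a societal-benefit signal
    sponsor: Optional[str] = None  # who paid, if anyone

    def explanation(self) -> str:
        # The account a user (or an auditor) could inspect for any feed item.
        label = f"sponsored by {self.sponsor}" if self.sponsor else self.treatment.value
        return (f"{self.post_id}: {label}; "
                f"engagement={self.engagement_component:.2f}, "
                f"benefit={self.benefit_component:.2f}")

The details would differ in any real system; the principle is that both levers – removal and amplification – leave an auditable trace.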

This isn't just idealistic thinking – it's a practical necessity as AI systems become more powerful and our digital town squares more crucial to democratic discourse. The choice isn't between censorship and freedom. It's between honest and dishonest uses of platform power. As product leaders, we must demand – and deliver – the former.