Last week, Faculty’s Lead Data Scientist Alex Adam sat down with Nina Schick, an investigative journalist who specialises in how technology is reshaping politics in the 21st century. 

In her recent book “Deepfakes: The Coming Infocalypse”, she reveals alarming examples of deepfakery and explains the dangerous political consequences of the “Infocalypse”, both in terms of national security and what it means for public trust in politics. 

This is part 2 of the conversation between Alex and Nina. You can find part 1 – where we cover Nina’s background and the origins of deepfakes – here.

The ‘liar’s dividend’ and the case of Ali Bongo 

Nina: It has to be said that, to date, there haven’t been many deepfakes that have disrupted politics in a very meaningful way. But to me, that is only a sign that the barriers to entry are still higher than people sometimes think. Those barriers are coming down; it’s only a matter of when, not if.

Equally, ‘shallow fakes’ – manipulated media that have nothing to do with AI – are already so effective that they, not deepfakes, have been responsible for most of the damage. You see this both in the context of COVID and, particularly, the US election. The President himself, Donald Trump, repeatedly used manipulated media and cheap fakes to double down on his corrosive narrative that the election had somehow been illegally conducted and subject to widespread voter fraud.

I mean, you literally had the President of the United States using authentic videos of election workers taking ballots into buildings and saying this was proof that they were somehow tampering with the vote. This was believed by millions of Americans. We saw what happened on 6 January with the storming of the Capitol. The people who went there were goaded by the President into believing this narrative that the election had been stolen from them. They didn’t go there thinking of themselves as violent insurrectionists; they went there because they thought they were democrats protecting democracy.

These cheap fakes already have a corrosive effect, in that they enhance something known as the “liar’s dividend”. The term was coined by academics in the context of deepfakes. Let’s say we end up in a world where everything can be faked – even things that traditionally we’ve seen as an extension of our own perception, like video – and in such a sophisticated way that the human eye is not going to be able to distinguish whether something is synthetic or authentic.

So, in a world where all of that is possible, everything can be denied. That means liars get this double dividend: not only can they target anyone with synthetic media, but they can also deny all authentic media. So that is actually the biggest impact of deepfakes we’ve seen to date. They’re not ubiquitous yet in the political realm, but they’re already enhancing the liar’s dividend.

We’ve seen some crazy case studies of the liar’s dividend playing out in politics across the world already.

At the end of 2018, Ali Bongo, the President of Gabon, had not been seen in public for quite a few months. He had actually suffered a stroke while abroad in Saudi Arabia.

He was heir to a dynasty that had ruled the relatively stable nation for about 50 years, so his allies were trying to keep the fact that he was incapacitated on the down low. But increasingly, his political rivals started speculating that he was dead, and political unrest began bubbling up in the country.

So, in order to quell this unrest, they decided to get Ali Bongo to deliver his traditional New Year’s Eve address. Now, he looked very odd in this video: his face looked frozen, his eyes were really wide. People started speculating that this was a deepfake of Ali Bongo, and that he was actually dead. And that led to an attempted coup d’état just a week later.

There’s an explanation for why he looked so bizarre: he had suffered a stroke, and he had probably had plastic surgery and Botox to alleviate some of its effects. But the rumour that he was dead, and that this was a deepfake, was enough to essentially kick off a coup.

Another interesting example came when I was submitting my manuscript last year, just as the George Floyd video came out. That video was so powerful that it unleashed a movement against racism and police brutality, not only in the United States but all around the world. And I was thinking, just as I submitted my manuscript: “Well, it won’t be long before a video like this is decried as fake.”

And sure enough, it was only two weeks later that Dr Winnie Heartstrong, an African American, PhD-holding, verified Republican candidate standing for the House, released a 24-page document claiming that the entire George Floyd video was a deepfake hoax. She said that his face had actually been swapped onto the body of a former NBA player. She’s not some lunatic on the fringe; this is a woman with a PhD who is standing for the House.

In the context of 2020, it didn’t get that much traction. But in 2024 or 2026 or 2030, when synthetic media and other forms of manipulated media are ubiquitous… when we no longer have a grasp on what’s real and what’s not… the danger is that you use your own biases or political allegiances to interpret what’s true. I thought that was a very interesting case study of what is likely to happen again in the polarised political environment of the United States and other countries.

Ultimately, the ability to essentially align a piece of media with your own beliefs by simply denying that it’s real… yeah, it’s quite worrying.

Alex Adam: I agree. The ability for individuals to simply deny that media is genuine, on the suspicion that it could be a deepfake, is one of the main reasons why I think deepfake detection technology is going to be very important going forward. It’s needed so that we can assign some measure of truth to the content we see online and on social media.

Who is responsible for countering deepfakes? Governments or technology companies?

Alex: I think that brings me naturally on to my question: what do you think technology companies and big tech could actually do to help in this space?

Nina: They have to be players, not least because they are the new power brokers in this new information ecosystem. You saw, for example, Facebook launching the Deepfake Detection Challenge, where they released a dataset that people could use to try and develop deepfake detection models. I think Amazon was also involved in that process.
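For readers who want a concrete picture: models built on datasets like that one are, at heart, binary classifiers that label media as authentic or synthetic. Below is a deliberately minimal frame-level sketch in PyTorch – the architecture, data and labels are toy stand-ins of our own, not the challenge’s actual models or pipeline.

```python
# Toy sketch only: a tiny frame-level "real vs fake" classifier of the kind
# the Deepfake Detection Challenge encouraged. The architecture and the
# random data/labels below are stand-ins, not the challenge's real pipeline.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Binary classifier over single video frames (1 = synthetic, 0 = authentic)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (B, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)  # single logit: P(fake) after sigmoid

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FrameClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: 8 RGB frames at 224x224 with made-up labels.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()

loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```

Real detectors use far larger backbones and aggregate evidence across frames and audio tracks; the sketch only illustrates the shape of the task – media in, probability of manipulation out.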

But, as I already mentioned, when you look at deepfakes in the context of the corroding information ecosystem and the damage done by cheap fakes, we’re talking about a much bigger problem here. We need to shore up the integrity of the information ecosystem itself.

In an ideal world, you would want to have tech companies or social media companies acting almost at the point of entry. So, on Facebook or on Twitter, we would want detection technology built into the platform, so that any kind of manipulated or synthetic media can be flagged automatically – because checking everything manually is going to be too much work for moderators.
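Schematically, that upload-time flagging step might look like the following – reusing a detector like the toy classifier sketched above, with a made-up threshold and function name rather than any platform’s real moderation API.

```python
# Hypothetical upload-time flagging: score incoming frames with a detector
# and route likely-synthetic media to human review. The function name and
# the 0.8 threshold are illustrative, not any platform's actual pipeline.
import torch

FLAG_THRESHOLD = 0.8  # assumed operating point; tuned in practice

def should_flag(frames: torch.Tensor, detector: torch.nn.Module) -> bool:
    """Return True if media is likely synthetic and needs moderator review."""
    with torch.no_grad():
        scores = torch.sigmoid(detector(frames))  # per-frame P(synthetic)
    return scores.mean().item() > FLAG_THRESHOLD  # aggregate across frames
```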

Of course, there are a lot of grey areas there as well. How do you define synthetic media that’s being used for comedy, or for satire, for example?

But then tech companies and industry can also work on authenticating the provenance of authentic media. It’s been very interesting to see how Adobe has been leading an initiative here in conjunction with the New York Times and Twitter. They’re actually working on building provenance methodology for any media into the hardware of devices. This would be a very useful tool for journalists or human rights activists.
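The initiative’s details aren’t covered here, but the core idea behind provenance – cryptographically signing media at the point of capture so that later tampering is detectable – can be sketched in a few lines. The key handling and function names below are illustrative assumptions, not the initiative’s actual design.

```python
# Illustrative sketch of point-of-capture provenance: hash the media bytes
# and sign the digest with a device-held private key, so anyone with the
# public key can later verify the file is untampered. This is the generic
# digital-signature idea, not Adobe's actual design.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical device key pair (in practice held in secure hardware).
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_media(media_bytes: bytes) -> bytes:
    """Return a signature over the SHA-256 digest of the media."""
    return device_key.sign(hashlib.sha256(media_bytes).digest())

def verify_media(media_bytes: bytes, signature: bytes) -> bool:
    """Check that the media still matches the capture-time signature."""
    try:
        public_key.verify(signature, hashlib.sha256(media_bytes).digest())
        return True
    except Exception:
        return False

frame = b"...raw image bytes from the camera sensor..."
sig = sign_media(frame)
print(verify_media(frame, sig))              # True: untampered
print(verify_media(frame + b"edited", sig))  # False: provenance broken
```

The design choice that matters is signing at capture time: any subsequent edit changes the hash, the signature no longer verifies, and the break in provenance becomes visible.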

So tech companies should be involved both in detecting fake media and in using tech solutions to prove the provenance of authentic media.

Alex: What do you think governments should be doing to help in this space? 

Nina: I think governments can’t afford to ignore this challenge. I would argue that the corroding information ecosystem is one of the biggest challenges of the 21st century – as big as climate change.

And you cannot look at the problem of misinformation or disinformation or deepfakes in isolation. You have to see it as part of this broader paradigm change in the actual information ecosystem. The changes in the way we communicate as humans are going to impact everything in politics and society.

So we need to think about how we update our legislation, our society, the way we govern to be in tune with this new reality. 

Alex: I remember, back in 2017, when we were first speaking to governments about deepfakes, people were somewhat sceptical about the true extent of what this problem could become. And I think people are now starting to really realise the extent to which this represents, as you say, a paradigm shift in our way of thinking.


You can find more information on Nina’s book on her website. If you’d like to find out more about Faculty’s work in deepfakes and online harms, check out the ‘deepfake’ tag on our blog or reach out to one of our AI experts.

