Yesterday, Faculty’s Lead Data Scientist Alex Adam sat down with Nina Schick, an investigative journalist who specialises in how technology is reshaping politics in the 21st century.

In her recent book “Deepfakes: The Coming Infocalypse”, she reveals alarming examples of deepfakery and explains the dangerous political consequences of the “Infocalypse”, both in terms of national security and what it means for public trust in politics. 

What follows is an edited transcript of the first part of their conversation, in which we cover: 

Nina’s background in geopolitics and the events that led to her interest in deepfakes. 

The difference between misinformation and disinformation – and how both can shape society. 

The origins of deepfakes.

Introducing Nina 

Alex Adam: I just want to start by giving a quick intro to what deepfakes are; then Nina is going to tell us a lot more about this in the context of her book.

So, what is a deepfake? A deepfake is a type of synthetic media in which an individual’s likeness is replaced with that of another. And that could be in an image, a video, or even in audio.

Now we usually restrict the term ‘deepfake’ to refer to the kind of synthetic media that’s generated through machine learning. You’ll also probably have heard the term ‘shallow fake’, which refers to more traditional tampering techniques.

Many types of deepfakes exist: you’ve probably heard of face swapping, where you take someone’s face and swap it for another person’s. You’ve probably also heard of facial reenactment, where you overlay another person’s expressions onto the face in the video. Here’s a deepfake of our CEO, transformed into Donald Trump using style transfer. We also created an audio deepfake of Donald Trump for the Copenhagen Democracy Summit.
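
To make the face-swapping idea concrete, here’s a minimal sketch of the shared-encoder, twin-decoder autoencoder that classic face-swap tools are built around. It’s illustrative only: the layer sizes, names and 64×64 input are our own assumptions, not Faculty’s models, and a real pipeline would add face detection, alignment, blending and far larger networks.

```python
# Minimal sketch of classic autoencoder face swapping (assumed
# architecture for illustration, not Faculty's implementation).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (not shown): each decoder learns to reconstruct its own
# person's faces through the *shared* encoder, so the latent space
# captures pose and expression rather than identity.
# At inference, the swap is simply: encode person A, decode as person B.
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))  # A's pose and expression, B's face
```

The design choice that matters is the shared encoder: because both decoders read from the same latent space, that space is forced to encode pose and expression rather than identity, which is what makes the swap work.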

Now, because there’s a huge amount of awareness surrounding misinformation and disinformation online, it’s becoming increasingly important to be able to detect deepfakes. And I guess one message I really want to hit home today is that this is a really, really hard problem.

Often you find that models that work well on one kind of deepfake don’t generalise to others, and factors like compression can make the problem even more difficult.
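
As a rough illustration of what detection looks like in practice, here’s a hedged sketch of the standard frame-level approach: fine-tune a pretrained image backbone as a real-versus-fake classifier. This is a generic recipe, not Faculty’s detector, and it inherits exactly the weaknesses just described: a model trained on one forgery method often fails on another, and re-compression can wash out the artefacts it relies on.

```python
# Sketch of frame-level deepfake detection with a pretrained backbone
# (a generic recipe under stated assumptions, not Faculty's detector).
import torch
import torch.nn as nn
from torchvision import models

def build_detector():
    # ResNet-18 backbone with a 2-class head (0 = real, 1 = fake).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

detector = build_detector()

frames = torch.rand(8, 3, 224, 224)      # stand-in for face crops sampled from a video
logits = detector(frames)
fake_prob = logits.softmax(dim=1)[:, 1]  # per-frame probability of "fake"

# A video-level score is often just the mean (or max) over sampled
# frames. Accuracy typically collapses when the test video was made
# with a generation method unseen during training, and heavy
# compression (e.g. social-media transcoding) degrades it further.
video_score = fake_prob.mean()
print(f"estimated probability of fake: {video_score:.2f}")
```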

So with that, I’d like to introduce you to Nina Schick. She is an author and broadcaster who specialises in how technology and artificial intelligence are reshaping society today. In her most recent book, ‘Deepfakes: The Coming Infocalypse’, Nina talks about the crisis of mis- and disinformation, and warns that AI-generated synthetic media is really the next evolving threat.

Over the last decade, Nina has gained expertise in geopolitics and emerging technology. She’s advised many global leaders, including Joe Biden and Anders Fogh Rasmussen, former Secretary General of NATO, on deepfakes. 

So welcome, Nina. I think we should kick off by asking: why do you think deepfakes are interesting?

Nina Schick: Well, as Alex already outlined, my background really is in geopolitics. And over the past decade, you haven’t been able to talk about geopolitics, politics or society without understanding that emerging technologies are having a massive impact on their future.

I have been particularly interested in misinformation and disinformation, and in how the information ecosystem itself has transformed over the past 10 years. This is because I’ve been working, for instance, on the Russian invasion of Eastern Ukraine and the annexation of Crimea, and on Russian interference in the context of the US election and various other campaigns.

It’s really important here, for the basis of our conversation, just to point out how nascent this is – this ability of AI to actually generate synthetic or fake media. It can be anything – an image, a video, an audio track – and the ability of AI not only to manipulate media, but to create something from scratch that didn’t exist before, is going to completely transform the future of all human communication. Not only that, it could potentially become the most harmful tool of visual disinformation known to humanity.

So, when Faculty alerted me to this emerging thing known as a “deepfake” at the end of 2017, I was very much coming at it from this angle of disinformation in geopolitics, and what the next emerging threats would be.

And it was very, very clear, even a few years ago, that this could be a potentially very dangerous weapon, because the AI behind it will democratise the ability of almost anyone to create highly sophisticated visual disinformation – something for which, until now, there was a barrier to entry. It’s going to be ubiquitous. It’s coming. And it’s going to have severe geopolitical and political ramifications. That’s really why I started writing about it in the context of geopolitics.

Misinformation, disinformation, COVID-19 and the US election

Nina: Deepfakes didn’t emerge in a vacuum, right? Even before AI got good enough to generate synthetic media, we were already facing an epidemic of mis- and disinformation. A lot of the misinformation we’re seeing comes in the form of what you can call “shallow fakes” or “cheap fakes” – authentic media that is manipulated not with AI, but by being mis-contextualised, clipped in a certain way, or crudely edited.

We’ve seen over the past 10 years how these pieces of visual disinformation have led to severe political consequences. 

For instance, Myanmar was basically shut off from the internet until the regime decided to loosen up restrictions in 2014. Then suddenly Facebook became the internet, almost overnight. And then the proliferation of cheap fakes and manipulated media actually helped unleash a genocide against the ethnic minority Rohingya. So we know, if you look at the political context of the last 10 years, how powerful visual disinformation is in the form of cheap fakes or shallow fakes. 

So now, when you consider what AI is increasingly able to do in the very short- to medium-term future, it would be silly not to think about the geopolitical consequences of that. So I wrote the book looking not only at deepfakes but also at the entire corrosion of the information ecosystem, and at how deepfakes are the latest evolving threat in that trend.

Alex: I wonder if it’s worth briefly outlining for everyone what you consider the difference between misinformation and disinformation?

Nina: Misinformation and disinformation are essentially both forms of bad information. The difference is the intent. Disinformation is bad information that is spread with the express intention to deceive someone. Misinformation is bad information that is spread naively – the person spreading it may genuinely think it’s a piece of authentic information.

And when you think about the crisis of bad information we’re currently facing in society, disinformation and misinformation are equally potent. COVID-19 is perhaps the perfect case study of that.

With my book, I’m essentially outlining how some of the exponential technological advances of the last 30 years have led to this paradigm change in our information ecosystem. Misinformation and disinformation are age-old phenomena, right? The internet didn’t create them, but it has accelerated them to an unprecedented degree.

In the book, I contextualise how AI is going to deepen this crisis of bad information, this corroding information ecosystem, which I call the “Infocalypse”. And I outline how that is not only playing out between state actors at the geopolitical level, but is also increasingly encroaching upon society, including Western democratic societies.

You only need to think about recent political developments, whether that’s the US election or what’s happened around the pandemic, to see how it has a real-life cost. Sometimes when we think about disinformation or misinformation, it’s easy to see it as a geopolitical thing that happens between nation states. But actually it has a very real cost for private individuals and businesses as well.

The origins of deepfakes and the rise of non-consensual pornography 

Alex: What do you think are the most concerning examples of deepfakes we’ve seen to date? I think there is a sense now that deepfakes are real, and many people believe they’re a real problem. But I think people don’t always know the full extent of the problem.

Nina: So, without a doubt, the most malicious and widespread use of deepfakes to date is non-consensual pornography. At the end of 2017, some of the amazing advancements from the AI research community started leaking out into the wider world. And they emerged, of all places, on Reddit, where an anonymous user calling himself “Deepfakes” began posting them – and the name stuck.

Using some of the open source tools coming out of the AI research community, this user figured out how to make AI-generated fake pornography. Essentially, he took an authentic pornographic film and managed to swap an actress’s face into that film. It just went crazy on Reddit. He revealed how he had done it, and then other redditors started doing it.

Within weeks, this deepfake porn community on Reddit was shut down. But the cat was out of the bag. Since then, an entire deepfake pornographic ecosystem has developed online. Really interestingly, it’s an almost entirely gendered phenomenon: you can find every female celebrity and political figure, from Michelle Obama to Ivanka Trump, but the targets are almost exclusively women.

However, even though porn is still the most malicious and widespread use of deepfake technology, there is no doubt that this is just a harbinger of what’s to come. What AI is so good at is taking somebody’s likeness and almost hijacking their biometrics – their digital image or their voice. When I first started working with Faculty at the end of 2017, it was very hard to use AI to generate voice, but since then things have moved along very quickly.

It’s obvious that this is going to become a political weapon and drive a lot of the headlines around deepfakes. In the past few years, most headlines have been about how it’s going to become this weapon of disinformation that’s going to corrupt and undermine democracy. 


In the next instalment of this blog, Nina and Alex dive deeper into the role deepfakes will play in business, politics and society. We’ll cover:

The ‘liar’s dividend’ and the case of Ali Bongo.

The role of tech companies in protecting the public from deepfakes.

How governments should begin to prepare for deepfake legislation. 

You can find more information on Nina’s book on her website.

