Deepfake videos: How they work - and why they are dangerous

It was a seemingly harmless email that Ariane Sherine wished she had never opened. 

“What I saw was terrifying,” Sherine recalls. “There were all these pictures of my face superimposed onto hardcore porn... I was 21, almost 22. I’d never seen anything like it.”

What was most disturbing, she says, was that someone she knew had spent time creating these images. 

At the time, the technology to create such images wasn't sophisticated. The job was “crudely done”. But now, Sherine says, “I have an eight-year-old daughter”.

“Things like FaceApp can be fun, but what if they were used for very nefarious purposes? It could actually damage people’s livelihood, their reputation.”

For some, that's already happening. Increasingly, technology is allowing ever more realistic “deepfakes” to be created using deep learning, a form of artificial intelligence.

What is a deepfake?

Deepfakes are videos, but not as you know them. The people in them are real, and look and sound real too. But the images have actually been produced artificially using ever-more accessible technology.

The results are digital manipulations of the planet’s most famous people, who can be made to do and say whatever their creators want.

As this technology becomes more sophisticated, so too does the technology to detect it – an area in which, it turns out, British companies are leading the way.

It is a market that could easily be worth billions of pounds within the next few years, as the fight against disinformation heats up and fears swirl over the effect it can have. 

The opportunity has certainly already piqued the interest of investors. According to figures from PitchBook, cash is flooding into the space, with the amount invested in artificial intelligence and machine learning companies this year on track to outstrip last year’s total of £860m. British AI companies have already raised £610m since January.

The UK’s expertise in machine learning makes it a ripe breeding ground for deepfake-related businesses, says Mina Samaan of MMC Ventures. As an investor, he says, it is clear that there are now “more companies operating in the field of video synthesis and ‘generative AI’”.

“The UK is the heartland of European AI talent and a global hub for visual effects, and so, it is well positioned to develop world-leading technology companies in this area,” he says.

World-leading technology is exactly what is needed. After all, identifying which videos are deepfakes and which are real is essentially a game of cat and mouse, one that becomes ever harder as those developing deepfakes adopt more sophisticated techniques.

Examples of deepfakes

The deepfake clips that have gone viral so far have been largely benign – actor Bill Hader morphing into Tom Cruise mid-sentence, former president Barack Obama calling his successor Donald Trump a “dipsh-t” – but, experts warn, it’s only a matter of time before a deepfake has major political repercussions.

Right now, "there's a good window to start developing new technologies around this stuff that can help alleviate some of the problems," says Victor Riparbelli, the chief executive and co-founder of London start-up Synthesia. 

The ramifications of video fakery are already becoming clear. Take the gun-control debate in America: it is so intense that some pro-gun activists dismiss every school shooting as an elaborate hoax, staged by what they call “crisis actors” to discredit gun ownership.

In February 2018, 17 students and teachers were killed at Marjory Stoneman Douglas High School in Parkland, Florida. Afterwards, one survivor, Emma Gonzalez, posted a video in which she tore up a paper shooting-range target. But the video was doctored to make it appear that she had torn up a copy of the US Constitution. The fake went viral on YouTube, and Gonzalez later said she had received death threats.

How to tell if it's real

Spotting the fakes usually involves one of two approaches: either companies use artificial intelligence to spot “glitches” in a video that are common to deepfakes, or they use data collected about the video or audio when it was first recorded to tell later whether it has been edited.

Synthesia’s systems use the first approach; the company has already gained a reputation for creating its own ultra-realistic synthetic videos.

Eventually, its technology will be able to create video content from someone simply speaking into their phone, forming a far more realistic “avatar” of their face that could be placed in any setting.

But on top of this, Riparbelli says his company is also building “the world's best system for detection of deepfakes”.

“Basically, you can have a system that you show a video, and the AI system analyses the video, and it tells you with what probability it is a video that has been edited using these kinds of tools.”

His company is not alone in thinking this, nor in using AI as the solution.

Faculty, which is best known for its work with the Home Office to spot terrorist content before it is posted, is creating its own systems which can scan content and spot the discrepancies.

“We look to detect after the fact,” says Faculty chief data scientist Scott Stevenson. “We feed deepfake videos and real videos through an AI model, and as we do this, we tell the model which ones are real and which ones are fake. Then the model learns to pick up the very slight correlations that are not always perceptible to the human ear and eye.”
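What Stevenson describes is, in outline, standard supervised learning: a classifier trained on labelled examples that outputs a probability rather than a simple yes or no. The sketch below is illustrative only and assumes feature vectors have already been extracted from each clip; Faculty’s actual features and models are not public, and the random data here merely stands in for real labelled videos.

```python
# Minimal sketch of supervised deepfake detection: train on clips
# labelled real/fake, then score a new clip with a probability.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: one feature vector per video (e.g. frame statistics,
# compression artefacts); label 1 = deepfake, 0 = genuine footage.
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# As Riparbelli puts it, the system reports "with what probability"
# a clip has been edited, here for a single held-out video.
p_fake = model.predict_proba(X_test[:1])[0, 1]
print(f"Estimated probability this clip is a deepfake: {p_fake:.2f}")
```

Real detectors are far more elaborate, typically deep neural networks working on raw frames and audio rather than hand-picked features, but the train-on-labels, output-a-probability shape is the same.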

Of course, these systems require constant updating, as deepfakes get ever harder to distinguish from real footage. Early deepfakes, for example, didn’t blink, or had a shimmer to them, but now “that's no longer a good rule to judge them by,” Stevenson says.

How can we combat the threat?

Companies, and especially media groups, will have to start applying these kinds of verification techniques to their videos and audio so they know what they're putting out is “authentic content”, he says.

“Clearly the likes of Google or Facebook have the technical acumen and the resources to build this kind of technology to spot deepfakes themselves, but if you consider maybe a news media organisation in Latin America, they’re not going to have the ability to detect them in house.” 

Already, a number of companies have started to wake up to this fact. Serelay, an Oxford start-up working on deepfake authentication, says it has recently signed deals with a British newspaper and a human rights organisation to provide them with systems to verify images and videos – key to making sure user-generated content is real and, for human rights organisations, to protecting them against accusations that they have falsified evidence.

Serelay’s technology works differently to Faculty’s and Synthesia’s in that it captures photos and videos in a way that is “inherently verifiable”.

“Most solutions in this space, they're called sort of verified capture solutions, they tend to take a chain of custody approach,” says boss Roy Azoulay. “Our approach is different in the sense that we calculate around a hundred mathematical values that relate to the capture event, and have all that metadata, but we can't view your videos or photos.”
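Azoulay’s hundred or so values are proprietary, but the general idea can be sketched: derive a handful of numbers from the pixels at the moment of capture, hash them, and store only that fingerprint, so the service can later confirm a file is untouched without ever seeing it. Everything in the sketch below, from the statistics chosen to the function name, is a hypothetical stand-in rather than Serelay’s method.

```python
# Illustrative capture-time fingerprinting: a few pixel statistics
# are hashed at capture; only the hash leaves the device, not the
# photo itself. A toy stand-in, not Serelay's actual system.
import hashlib
import json

import numpy as np


def fingerprint(pixels: np.ndarray) -> str:
    """Summarise a capture as a few values, then hash them."""
    values = {
        "mean": round(float(pixels.mean()), 3),
        "std": round(float(pixels.std()), 3),
        "row_means": [round(float(m), 3) for m in pixels.mean(axis=1)[:8]],
    }
    payload = json.dumps(values, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


# At capture: compute the fingerprint and lodge it with the service.
image = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(float)
stored = fingerprint(image)

# At verification: recompute and compare. Any edit shifts the
# statistics, so the hashes no longer match.
tampered = image.copy()
tampered[10:20, 10:20] = 255.0  # a region pasted over the original
print(fingerprint(image) == stored)     # True
print(fingerprint(tampered) == stored)  # False
```

A production system would need values that survive benign changes such as recompression or resizing, which a raw hash like this does not; much of the engineering lies in that trade-off.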

Of course, he admits, there is a “scary element” to deepfakes, but being able to verify images unlocks the technology’s huge creative potential.

“What I'm arguing, and many, many other people in our space are arguing, is not that deepfakes should be outlawed, but rather that, given that deepfakes are here to stay, we should create the mechanisms to distinguish between generated or synthetic media, and real media that is a documentation of the physical world.”

Establishing these mechanisms, it seems, is for now down to private companies. Politicians may be concerned about the potential for deepfakes, but as it stands policy around them is “certainly not keeping up,” Carl Ohman from the Oxford Internet Institute says. The Law Commission in England and Wales has said it will undertake a review of laws in this area, but it's not likely to be a quick process. 

Big technological leaps require “some deep thinking”, Ohman says, but the fundamental debates haven’t yet been had – by law scholars, philosophers, computer scientists and politicians.

"That debate is necessary in order to implement any adequate policy response," he says.

People may be waking up to the threat that deepfakes pose in spreading misinformation, but, for now, it appears to be a problem companies have to fix on their own. For Britain, luckily, those companies are world leading. 
