The Christchurch attack changed how counter-terrorism thinks about online propaganda, hashing and the role of AI

On Thursday, Brenton Tarrant, the terrorist who killed 51 Muslim worshippers in two mosques in New Zealand in March last year, was sentenced to life imprisonment without parole, the longest term ever handed down in the country’s legal history.

2020-09-01 | Public Services

Before he was sentenced, Tarrant was asked whether he would like to address the court and the world’s media, who were so numerous that they had to be seated in a separate courtroom. It was a moment many had worried about. The court had even planned reporting restrictions to prevent Tarrant, a self-proclaimed white supremacist, from using the opportunity to turn his sentencing into theatre.

He quietly replied “No, thank you.”

Without the crutch of highly anonymised online platforms and the provocation of an active community of extremists, the man who published a 74-page manifesto attempting to justify his attack was rendered mute.

Tarrant’s weaponisation of the internet

For Tarrant, the Christchurch attack was an act of performative propaganda, trailed in advance to supportive online communities and livestreamed on the world’s biggest social network. His final post before carrying out the attack urged his ideologically sympathetic audience to “please do your part by spreading my message”.

Through the visceral immediacy of live video, the attacker sought to achieve the ultimate aim of all terrorist propaganda: to inspire and inflame a new cycle of violence. The attack achieved its objective, contributing to a global sequence of copycat far-right terrorism and retaliatory Salafi-Jihadist attacks, including the Easter bombings in Sri Lanka that killed 269 people and injured many hundreds more.

The rapid online distribution of the video exposed a crucial weakness in the measures that were then being deployed to find and remove terrorist propaganda online.

Facebook reports that fewer than 200 people saw the original livestream of the Christchurch attack while it was being broadcast. But in the 24 hours that followed, users attempted to upload more than 1.5 million copies of the video to its platform alone.

Facebook was able to remove 1.2 million of those automatically. It’s thought this was principally achieved through hashing, a technique that assigns each piece of content a compact digital fingerprint; any new upload whose fingerprint matches that of known terrorist content can then be blocked before it goes live.
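In its simplest form, hash-based blocking is just a lookup against a set of known fingerprints. The sketch below illustrates the idea with an exact cryptographic hash; the function names and the in-memory blocklist are illustrative assumptions, not any platform’s actual code (in practice the fingerprints would come from a shared industry database, such as the one operated by the Global Internet Forum to Counter Terrorism).

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint: the SHA-256 digest of the raw bytes.
    return hashlib.sha256(data).hexdigest()

# Illustrative stand-in for a shared database of known terrorist content.
known_bad = b"<bytes of a video already identified for removal>"
blocklist = {fingerprint(known_bad)}

def allow_upload(data: bytes) -> bool:
    # Reject the upload if its fingerprint is already on the blocklist.
    return fingerprint(data) not in blocklist

print(allow_upload(known_bad))            # False: byte-identical copy is blocked
print(allow_upload(known_bad + b"\x00"))  # True: one changed byte slips through
```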

However, at least 300,000 copies of the video circumvented automated blocking because they had been manipulated to trick hash-based detection, and remained on the site for hours. Many of these manipulations are imperceptible to the human eye, yet they are enough to defeat naïve hashing algorithms: displacing even a single pixel in an image changes the output of the most basic hash function entirely.
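That fragility is easy to demonstrate. In this hypothetical example, a frame and a copy with one pixel nudged by a single brightness level are visually identical, yet their cryptographic digests share nothing:

```python
import hashlib
from PIL import Image

# A 64x64 black frame, and a copy with one pixel raised by one unit of red.
original = Image.new("RGB", (64, 64), color=(0, 0, 0))
altered = original.copy()
altered.putpixel((0, 0), (1, 0, 0))  # invisible to a human viewer

h1 = hashlib.sha256(original.tobytes()).hexdigest()
h2 = hashlib.sha256(altered.tobytes()).hexdigest()
print(h1[:16], h2[:16], h1 == h2)  # completely different digests; False
```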

Facebook’s proprietary hashing technology is more robust than most, but in the 24 hours following the attack the company reports detecting more than 800 visually distinct versions of the video, many of which circumvented automated detection and were shared thousands of times. It is likely that the majority of these videos were adversarially manipulated to ensure that their upload was successful.
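More robust systems use perceptual rather than cryptographic hashes, so that visually similar content produces similar fingerprints and a match is declared within a small bit-distance rather than on exact equality. Facebook’s own technology is proprietary (it has since open-sourced a perceptual algorithm, PDQ); the simple difference-hash below is a minimal illustration of the general idea, not the method any platform actually runs.

```python
from PIL import Image

def dhash(image: Image.Image, size: int = 8) -> int:
    # Difference hash: one bit per pixel, set when a pixel is brighter
    # than its right-hand neighbour. Small edits flip few bits.
    gray = image.convert("L").resize((size + 1, size), Image.LANCZOS)
    px = list(gray.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# A gradient image stands in for a video frame; one altered pixel
# barely moves the perceptual hash, so the copy still matches.
original = Image.linear_gradient("L")
altered = original.copy()
altered.putpixel((0, 0), 255)
print(hamming(dhash(original), dhash(altered)))  # 0 or close to it

# A match is declared within a tolerance, e.g.:
# is_match = hamming(known_hash, new_hash) <= 10
```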

Those 300,000 uploaded versions of the attack video were enough to ensure that, in the hours after the attack, copies of the video could be found on Facebook simply by searching for ‘New Zealand’. And while Facebook has shouldered most of the blame for hosting this content, the same was true of almost every major social network.

How technology has adapted to counter new terrorist tradecraft

Almost 18 months have passed since the attack. In that time, the global counter-terrorism tech community has focused significant effort on this problem. Hashing approaches can be – and in many cases have been – made more resilient, but gaps remain in the ‘net’ that hashing creates.

AI can play an essential role in catching the content that slips through. Changes to the visual, audio or structural properties of a video can be detected through deep learning, a field of artificial intelligence inspired by human cognition. By training on examples of manipulated video, deep learning algorithms can learn what manipulation looks like and spot adversarially altered content far more effectively in the future.
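One common formulation, sketched below under assumed details (a toy network, random tensors standing in for video frames, and synthetic noise standing in for adversarial edits), is to train an embedding so that a frame and a manipulated copy of it land close together while unrelated content lands far apart. A manipulated re-upload can then be matched to the original even when its hash cannot.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    # A small convolutional network mapping a frame to a unit-length vector.
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def manipulate(frames):
    # Stand-in for adversarial edits: noise and a brightness shift.
    return (frames + 0.1 * torch.randn_like(frames)).clamp(0, 1)

model = Embedder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    frames = torch.rand(32, 3, 64, 64)           # placeholder frames
    anchor = model(frames)
    positive = model(manipulate(frames))         # manipulated copies
    negative = model(torch.rand(32, 3, 64, 64))  # unrelated frames
    # Pull manipulated copies toward their originals, push strangers away.
    loss = F.triplet_margin_loss(anchor, positive, negative)
    opt.zero_grad()
    loss.backward()
    opt.step()
```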

These technological developments have the potential to be incredibly powerful in limiting the reach and efficacy of terrorist propaganda, as well as other harmful and illegal content – from piracy and copyright infringement to child sexual exploitation and abuse material.

But while this technology has immense potential for good, it also has to be handled responsibly. We need to ensure it’s not deployed to suppress free speech or undermine foundational principles like democracy and the rule of law. In practice, this requires a level of independent oversight and regulation that does not yet exist, though Ofcom is poised to assume that role in the UK.

Of course, none of this makes it harder to carry out a terrorist attack. The low-sophistication terrorism that has punctuated the latter half of the past decade will always be difficult to prevent. But it can help ensure that the narrative framing of acts of terrorism is not determined by those seeking to use them to inspire further bloodshed.