The terrorist organisation Daesh generates and shares large numbers of high-production-value propaganda videos online.

Automated content moderation tools can detect and remove these videos at scale – but in 2017, most video hosting platforms didn’t have the data science capability to build them, leaving Daesh free to spread terrorist content online to recruit and radicalise people.

The Home Office asked Faculty to build a model for these video hosting platforms to automatically flag Daesh propaganda videos. Our highly accurate model flags terrorist content for human review, ultimately working towards wiping Daesh propaganda from these platforms.

Customer

We worked with the Home Office’s counter-terrorism unit to support the UK government’s response to Daesh.


Problem

Video hosting platforms were unable to automatically detect and remove terrorist propaganda. Terrorist groups targeted smaller platforms in particular to spread their propaganda across the internet.

Terrorist organisations like Daesh share propaganda videos online to radicalise vulnerable people and inspire ‘lone wolf’ attacks. Following a rise in these murderous attacks in the UK, the Home Office encouraged online platforms to crack down on extremism and terrorism.

Large video hosting platforms with advanced data science resources found it too difficult to build a tool to automatically remove terrorist propaganda with sufficiently high accuracy. They instead relied on video hashing and human moderators to manually process flagged videos and remove content that violated their terms of service. 
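For context, hash matching works roughly as in the sketch below: a file is flagged only if its fingerprint already appears in a database of previously identified material. This is a simplified illustration – production systems use perceptual hashes that survive re-encoding and sharing across the industry – but the core limitation is the same: brand-new or re-edited propaganda is invisible to it.

```python
import hashlib

# Simplified illustration of database-driven hash matching. Real systems use
# perceptual hashes; plain SHA-256 is used here only to show the mechanism.
known_propaganda_hashes: set[str] = set()  # populated from previously removed videos

def file_sha256(path: str) -> str:
    """Hash a video file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_propaganda(path: str) -> bool:
    # Only matches videos seen before: a new or re-edited video produces
    # a hash the database has never stored, so it passes through unflagged.
    return file_sha256(path) in known_propaganda_hashes
```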

However, smaller platforms didn’t have the same moderation manpower, so organisations like Daesh targeted them: the risk of content being removed was far lower. Free to circulate terrorist material largely unchallenged, Daesh “swamped” these smaller platforms.

The Home Office brought in Faculty to build a cross-platform model to automatically detect Daesh propaganda videos, stemming the flow of terrorist content online.


Solution

Faculty built a model that automatically detects both new and existing propaganda videos and stops them from being uploaded. Our highly accurate model removes the vast majority of Daesh propaganda without disrupting legitimate users.

Built in collaboration with the Home Office’s propaganda specialists, the model works across platforms, making it suitable for any platform to adopt – no matter its size or tech stack.

Unlike earlier approaches that rely on databases of previously detected content, our classifier identifies both known and previously unseen propaganda, removing the harmful content at source. The classifier can even sit in a platform’s upload pipeline, preventing Daesh material from ever making it past the upload stage.
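As an illustration of that integration point, an upload-stream hook might look something like the sketch below. Every name here (score_video, queue_for_review, the 0.98 threshold) is hypothetical – the actual interfaces depend on the host platform.

```python
REVIEW_THRESHOLD = 0.98  # hypothetical cut-off, tuned to a false-positive target

def score_video(video_bytes: bytes) -> float:
    """Stub standing in for the trained classifier's inference call."""
    return 0.0  # a real deployment would return the model's propaganda score

def queue_for_review(video_bytes: bytes, score: float) -> None:
    """Stub: hand a held-back video to human moderators."""

def handle_upload(video_bytes: bytes) -> str:
    """Gate a video before publication rather than removing it after the fact."""
    score = score_video(video_bytes)
    if score >= REVIEW_THRESHOLD:
        queue_for_review(video_bytes, score)  # never published unless cleared
        return "held_for_review"
    return "published"
```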

Given the vast number of videos shared online, the model needed to deliver an extremely low false positive rate, so that only a tiny fraction of legitimate videos would be flagged for human review.
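One standard way to meet such a target is to calibrate the decision threshold on held-out validation data; the sketch below picks the threshold so that at most a chosen fraction of legitimate videos would be flagged. This is a generic technique, not a description of Faculty’s actual method.

```python
import numpy as np

def threshold_for_target_fpr(scores, labels, target_fpr=5e-5):
    """Pick a decision threshold so that at most `target_fpr` of legitimate
    validation videos score at or above it.

    scores: model outputs, higher = more propaganda-like
    labels: 1 for propaganda, 0 for legitimate content
    """
    neg = np.sort(np.asarray(scores, dtype=float)[np.asarray(labels) == 0])
    allowed_fp = int(target_fpr * len(neg))  # false flags we can tolerate
    if allowed_fp >= len(neg):
        return -np.inf  # target is vacuous: everything may be flagged
    # Place the threshold just above the highest legitimate score we are
    # NOT allowed to flag, so at most `allowed_fp` negatives reach it.
    return float(np.nextafter(neg[len(neg) - allowed_fp - 1], np.inf))
```

A target_fpr of 5e-5 corresponds to the 99.995% true negative rate quoted under Impact below.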

To make sure legitimate users didn’t have their content flagged or deleted, we designed a model that looks for subtle signals within videos. We trained it on terabytes of video and audio data, teaching the system to distinguish Daesh propaganda from non-harmful content with extremely high accuracy.


Impact

As well as cutting Daesh propaganda off at source, our technology is now used by UK police forces to identify and analyse terrorist content on the web, informing their counter-terrorism strategy.

Amber Rudd, then Home Secretary, said: “We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images. This government has been taking the lead worldwide in making sure that vile terrorist content is stamped out.”

As reported by BBC News, the model can achieve a true negative rate of 99.995% and a true positive rate of 94%. This means that if a platform received five million uploads per day, the model would catch 94% of propaganda videos while incorrectly flagging only around 250 legitimate videos for human review.
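As a quick back-of-envelope check of that arithmetic (a sketch assuming essentially all five million daily uploads are legitimate):

```python
uploads_per_day = 5_000_000   # assume virtually all are legitimate content
true_negative_rate = 0.99995  # share of legitimate videos correctly passed
true_positive_rate = 0.94     # share of propaganda videos correctly caught

false_positives = uploads_per_day * (1 - true_negative_rate)
print(f"{false_positives:.0f} legitimate videos flagged per day")  # -> 250

# Of the propaganda that does arrive, 94% is caught; the absolute number
# depends on how much is uploaded, which the report doesn't state.
```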

Since the launch of Faculty’s propaganda detection tooling in 2018, all major video hosting platforms have implemented their own AI models to automate the detection of terrorist propaganda. Our own model has mainly been deployed by law enforcement to detect and analyse terrorist content online.

We’ve also made significant improvements to the model, extending it to detect al-Qaeda propaganda across video, audio, image and text. The model could even be used to eliminate other forms of illegal and harmful content, such as child sexual abuse material and hate speech, from online spaces.

Matthew Collins, Deputy National Security Adviser, said: “Faculty’s work set a new standard for the automated detection of harmful and abhorrent terrorist video propaganda at a time when technology leaders across the world had said this wasn’t possible.”

“Today, all the major communications service providers have developed and deployed their own AI models to automate the detection of terrorist propaganda and I’m extremely proud of our work with Faculty in pioneering this movement.”

99.995% true negative rate

94% true positive rate