
Client

The UK Home Office’s counter-terrorism unit.

Situation

Daesh creates large amounts of high-production-value propaganda material. This propaganda is disseminated around the web, and is considered to have helped inspire some of the individuals who committed murderous ‘lone wolf’ attacks over the last couple of years. The UK Home Office has successfully encouraged the large online content platforms to invest in automated detection technology that can spot and remove these videos.

However, the videos remain available on a large number of smaller video hosting platforms, which do not have the artificial intelligence (AI) expertise or the resources necessary to develop their own detection capabilities. Faculty was asked to build a model that can automatically detect Daesh propaganda videos and can be used by smaller platforms, to give them access to the same kinds of AI technology used by the tech giants.

Action

We worked closely with experts in the Home Office to devise a platform-agnostic approach that is difficult for Daesh to ‘game’ its way around. Unlike previous algorithms that relied on a database of detected content, our classifier detects both old and previously unseen propaganda.

Given the vast number of videos uploaded across the web, the model also needed to deliver an extremely low ‘false positive’ rate, so that as few legitimate videos as possible are flagged to human reviewers.

The classifier can sit in the upload stream of a video platform, preventing Daesh material from ever reaching the platform with minimal disruption to legitimate users.

Based on these design needs, we processed terabytes of video and audio data, and designed an ensemble model that looks for a range of subtle signals contained within videos that can distinguish between terrorist propaganda and all the other videos on the web with extremely high accuracy.
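To illustrate the ensemble idea, here is a minimal sketch: several independent detectors each score a video on one kind of signal, and the scores are combined into a single decision with a threshold set very high to keep false positives rare. The signal names, weights, and threshold below are illustrative assumptions, not Faculty's actual features or parameters.

```python
# Hypothetical ensemble: combine per-signal scores (each in [0, 1]) into
# one score via a weighted average, then apply a strict decision threshold.

def ensemble_score(signal_scores, weights):
    """Weighted average of per-signal scores."""
    total = sum(weights.values())
    return sum(weights[name] * signal_scores[name] for name in weights) / total

# Illustrative signals a video classifier might use (assumed, not real).
weights = {"visual": 0.5, "audio": 0.3, "metadata": 0.2}
scores = {"visual": 0.92, "audio": 0.85, "metadata": 0.60}

combined = ensemble_score(scores, weights)  # 0.835 for these inputs

# The threshold is set high so legitimate uploads are almost never flagged,
# prioritising a very low false-positive rate over catching every video.
THRESHOLD = 0.9
flagged = combined >= THRESHOLD  # False here: below the strict threshold
```

Combining several weak signals this way makes the system harder to evade: defeating any one signal (say, re-encoding the video) leaves the others intact.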

Impact

As reported by BBC News, the model can achieve a true negative rate of 99.995% and a true positive rate of 94%. This means that even for a platform that receives five million uploads per day, the model would catch almost all propaganda videos, while flagging only around 250 for review. This is an operationally realistic number that can be dealt with by a single human reviewer.
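The arithmetic behind that claim can be checked directly: a true negative rate of 99.995% means a false positive rate of 0.005%, so out of five million daily uploads roughly 250 legitimate videos would be flagged in error.

```python
# Back-of-envelope check of the reported figures from the case study.
uploads_per_day = 5_000_000
true_negative_rate = 0.99995
false_positive_rate = 1 - true_negative_rate  # 0.005%

# Expected number of legitimate videos incorrectly flagged per day.
expected_false_flags = uploads_per_day * false_positive_rate
print(round(expected_false_flags))  # → 250
```

Meanwhile the 94% true positive rate means the model catches the large majority of actual propaganda videos, so the review queue stays small enough for a single human reviewer.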

We are in discussions with several media platforms to help them make use of our tool, removing the vast majority of Daesh content online, with minimal impact on their business.

To find out more about what Faculty can do for you and your organisation, get in touch.