Arming small platforms with AI in the fight against terrorist propaganda
A fortnight ago, in the first such move by a Western country, the French government passed a law that will mean social networks and other websites must remove terrorist propaganda and other illegal content within an hour of request, or face large fines.
This one-hour deadline may seem arbitrary, but it reflects the speed with which companies must act to effectively stem the reach of terrorist propaganda.
Before joining Faculty, I led the UK Government team responsible for producing the analysis on which proposed UK and EU legislation on online terrorist propaganda is based. The research my team conducted during that time demonstrated the need to react rapidly to the appearance of terrorist propaganda online: we found that a third of the links to Daesh propaganda that would ever exist online were disseminated within an hour of that content first being published.
Why is rapid response so important?
Just like any other brand with a new product, Daesh puts a premium on the immediate promotion of its new content. As the organisation suffers territorial losses and military defeat, the production of propaganda is an attempt to project strength in the face of adversity; it embodies the group’s slogan of ‘Baqiyah wa-Tatamaddad’ (commonly translated as ‘remaining and expanding’).
In this context, new propaganda is released quickly and is extensively promoted by the group’s hyper-partisan supporters. These supporters gravitate to social networks because of the huge global reach those platforms afford them. However, because the major platforms have developed sophisticated measures to expedite the detection and removal of terrorist accounts, supporters know they have a limited window of opportunity to promote new propaganda before their accounts are suspended. As a result, Daesh supporters often reserve social media accounts for new releases, never posting content in support of the group beforehand. When the propaganda drops, they immediately share as many links as possible before suspension, boosting the reach of pro-Daesh content and hashtags on those platforms.
The challenge for small platforms
In truth, major companies like Facebook, Google and Twitter are unlikely to be significantly affected by the new French legislation. Each of them has developed automated tools to detect and remove terrorist content, and each employs large teams of moderators to remove harmful and illegal material from its platform; they should have little difficulty removing terrorist content within the time limit the government has allotted them.
But, Daesh content is by no means confined to the major social networks. To ensure the longevity of its propaganda, Daesh uploads releases to multiple platforms so that, if one is removed, another link will always remain live for its supporters.
At the time of writing, one of al-Qa’ida’s media arms has just released a new propaganda video which, in its first appearance online, the group has uploaded across more than 100 unique URLs. While some of the platforms on this list are recognisable, most are ones that very few people will have heard of: obscure video streaming sites, or text-pasting sites used to host yet more links to the video.
Many of these platforms are not large enough to have a moderation team; some are one-person start-ups. For these companies, the chances of complying with a law that insists on removal within an hour of request are slim. Such platforms will require much more support, or face potential collapse.
A vital support system for small platforms
Technology can hugely assist such companies. A first step for any content host should be the application of a hash filter for terrorist propaganda. Hashing is a technique that creates a unique digital fingerprint for a piece of content; that fingerprint can then be used to automatically block the upload of known harmful and illegal material, like terrorist propaganda. The Global Internet Forum to Counter Terrorism (GIFCT) operates a hash-sharing consortium for small platforms seeking to block the upload of terrorist content.
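The idea can be sketched in a few lines. This is a minimal illustration, not the GIFCT system: the blocklist below is seeded with a placeholder byte string rather than real shared hashes, and `should_block` stands in for whatever check a platform runs at upload time.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known propaganda files.
# In practice a platform would load digests from a shared database;
# here the digest is computed from a placeholder byte string.
known_bad = {hashlib.sha256(b"example propaganda video bytes").hexdigest()}

def should_block(upload_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known-bad fingerprint."""
    return hashlib.sha256(upload_bytes).hexdigest() in known_bad

print(should_block(b"example propaganda video bytes"))  # True: exact match
print(should_block(b"harmless holiday video bytes"))    # False: unknown file
```

Because the check is a simple set lookup, it is cheap enough for even a one-person platform to run on every upload.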
But hashing has its limitations. Bad actors can manipulate digital media to circumvent hash-based detection (I will discuss this in the context of the Christchurch attack in March 2019 in a forthcoming blog). And you can only create a hash for content you already know exists – current hashing processes would not enable the removal of new propaganda within an hour. This matters, because the danger posed by new propaganda is precisely what one-hour removal seeks to address.
This is where machine learning can play an important role. Through developing classifiers that seek to identify consistent signals within terrorist propaganda, it is possible to detect and prevent the upload of even new, unseen content.
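To make the idea concrete, here is a toy linear classifier over bag-of-words features, written in pure Python. Everything in it is an illustrative assumption: real propaganda classifiers operate on far richer signals (video frames, audio, logos, metadata), and the training phrases below are invented placeholders, not real data. The point is only that a model trained on consistent signals can flag an input it has never seen.

```python
def featurise(text):
    """Crude bag-of-words features: the set of lowercased tokens."""
    return set(text.lower().split())

def train(examples, epochs=10):
    """Simple perceptron; examples is a list of (text, label) with label +1/-1."""
    weights = {}
    for _ in range(epochs):
        for text, label in examples:
            score = sum(weights.get(w, 0.0) for w in featurise(text))
            if score * label <= 0:  # misclassified: nudge weights towards label
                for w in featurise(text):
                    weights[w] = weights.get(w, 0.0) + label
    return weights

def predict(weights, text):
    """Return True if the text scores as positive (flagged)."""
    return sum(weights.get(w, 0.0) for w in featurise(text)) > 0

# Invented placeholder training data, not real content.
examples = [
    ("placeholder propaganda slogan alpha", +1),
    ("placeholder propaganda slogan beta", +1),
    ("cooking recipe for weeknight dinner", -1),
    ("holiday photos from the beach", -1),
]
w = train(examples)

# "gamma" never appeared in training, but the shared tokens carry the signal.
print(predict(w, "placeholder propaganda slogan gamma"))  # True
print(predict(w, "cooking recipe for weeknight dinner"))  # False
```

The model flags the unseen "gamma" phrase because it shares learned features with the training examples – the same principle, at toy scale, that lets a production classifier catch a propaganda release on the day it first appears.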
Faculty has worked extensively in this area. Having previously developed a model that can detect Daesh video propaganda with 99.995% precision, we’ve expanded our work to cover the detection of other forms of media produced by Daesh, as well as models for the detection of al-Qa’ida propaganda.
Our mission when we started developing this technology in 2017 was to work with small communications service providers, offering them these models at no cost to support the detection and removal of terrorist propaganda from the internet. That’s something we’re incredibly proud to continue doing, supporting industry’s efforts to keep pace with the international trend towards formal regulation of the internet.