Back in March, a week before the UK entered lockdown, an online newsletter published a one-page infographic on COVID-19, warning its readers to limit travel and take precautions, such as washing their hands, to prevent the spread of the virus.

The messaging itself wasn’t peculiar, but its origin was: al-Naba, a weekly propaganda newsletter produced by Daesh.

Daesh’s messaging on coronavirus has evolved as the pandemic has grown in scale. Initially, Daesh described the virus as an act of God, sent to punish China for its treatment of the Uighur Muslims in the country’s Xinjiang province. As it spread to Europe, COVID-19 was characterised as divine retribution for the foreign policy of what the group terms the ‘crusader nations’.

More recently, however, the production of propaganda relating to the virus has shifted from officially produced, centrally scripted messaging, like that within al-Naba, to content created by the group’s supporters. This propaganda tends to have a more localised appeal, and more frequently suggests specific targets across Europe, Southeast Asia and North America for lone-actor terrorism during the crisis.

The changing role of Daesh supporters

Propaganda remains a strategic imperative for Daesh. Last month, the US State Department offered a $3m reward for information about Mohammed Khadir Musa Ramadan, the group’s propaganda chief – a huge sum, reflecting his value to the organisation. By comparison, last year the US offered a $1m reward for information on the whereabouts of Hamza bin Laden, son of Osama bin Laden and a leadership figure in al-Qaeda.

In its statement announcing the bounty on Ramadan, the State Department also offered an interesting vignette about his role in the organisation, which it said included “the management of content from ISIS’s dispersed global network of supporters”.

That such a senior figure within the Daesh hierarchy is actively managing supporter-produced content is a sign of the changing times: five years ago, at the height of the so-called ‘Caliphate’ and of the group’s propaganda output, Daesh would not so easily have countenanced ceding editorial control to its supporters.

Of course, the move to deploy Daesh’s supporters as remote media houses is a product of the group’s loss of territory and of its capacity to produce propaganda. In 2015, it was able to mobilise media teams across its territories in Iraq and Syria to produce high-quality media in response to world events within hours; in one campaign, the group released more than a dozen videos in a 48-hour period responding to the European migrant crisis. At the time, there were reports of Daesh fighters being censured by the group for posting memes and selfies online presenting an idyllic vision of life in Syria – what became known as ‘five-star jihad’.

It’s possible that the decision to relax restrictions on the content produced by the group’s supporters was a conscious response to the changing nature of online communications: its growing disposability and its focus on instant gratification. The group’s approach to communications has always been characterised by innovation and responsiveness to the demands of its audience – it was, after all, among the first terrorist groups to pioneer the effective use of social media.

Whatever the cause, it’s clear that more informal, ‘crowd-sourced’ social media propaganda plays an increasingly pivotal role in Daesh’s strategy. The question is, how do we fight such a dispersed wave of content?

How to disrupt supporter propaganda

In the first blog in this series, I wrote about some of the measures social networks and content hosts could put in place to automate the detection of officially produced terrorist propaganda on their platforms.

But supporter propaganda presents a range of additional challenges for moderators and regulators. Traditional low-tech approaches to removing terrorist content from the internet, such as hashing or URL sharing, are not effective mechanisms for the speedy detection and removal of content that is, by design, ephemeral. Daesh’s centrally produced blockbuster video releases and glossy magazines are designed to be reshared and to live on; supporter propaganda, by contrast, is easier to produce and designed to be tactical and disposable.
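To make that limitation concrete, here is a minimal sketch of exact hash-matching, the kind of low-tech approach described above. Everything in it is an illustrative assumption: the sample bytes stand in for a media file, and the set of known digests stands in for a shared industry database.

```python
# A minimal sketch of hash-based matching: platforms share digests of
# previously identified content and check new uploads against that set.
# The sample bytes below are purely illustrative placeholders.
import hashlib

known_hashes = {
    # In practice this set would be populated from a shared database
    # of digests of previously identified propaganda files.
    hashlib.sha256(b"previously identified propaganda file bytes").hexdigest(),
}

def is_known_content(file_bytes: bytes) -> bool:
    """Return True only if the exact bytes match a previously hashed item."""
    return hashlib.sha256(file_bytes).hexdigest() in known_hashes

# Exact matching works for verbatim reshares...
print(is_known_content(b"previously identified propaganda file bytes"))   # True

# ...but a single changed byte (a re-encode, a crop, a new watermark)
# produces a completely different digest, which is why disposable,
# remixed supporter content rarely matches.
print(is_known_content(b"previously identified propaganda file bytes."))  # False
```

The appeal of the approach is that digests are cheap to share between platforms; its weakness is that any trivial transformation defeats it, which is exactly the property ephemeral supporter content exploits.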

But that does not mean disruption of this content is impossible. Just as we have previously developed AI to detect signals within officially produced propaganda, we’ve found that it’s also possible to spot consistent signals within supporter media and use these to build detection models. Such work requires a nuanced understanding of the rhetorical and visual allusions and tropes within supporter propaganda; an ideal response would combine data science with input from leading terrorism academics.
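As a rough illustration of what such a detection model might look like, the sketch below trains a simple text classifier on labelled snippets. It is an assumption throughout: the training texts and labels are synthetic placeholders, and a production system would be trained on expert-labelled media and draw on visual as well as textual signals.

```python
# A toy signal-based detection model: TF-IDF features over short post
# texts, fed to a logistic regression. All data below is synthetic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training snippets with expert-assigned labels
# (1 = supporter propaganda, 0 = benign).
texts = [
    "synthetic example of supporter phrasing one",
    "synthetic example of supporter phrasing two",
    "ordinary commentary about the news",
    "everyday conversation about travel plans",
]
labels = [1, 1, 0, 0]

# Word and bigram TF-IDF captures recurring rhetorical tropes; a real
# system would add visual features and far richer linguistic signals.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Score a new post. In production this probability would feed a review
# queue rather than trigger automatic removal on its own.
score = model.predict_proba(["a new post to be screened"])[0][1]
print(f"propaganda probability: {score:.2f}")
```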

Classification is, of course, only one part of the challenge. Deciding when and where to draw the line between illegal terrorist content and harmful but legal content produced by supporters of terrorist organisations is already a perennial problem for regulators, and that difficulty is only likely to increase as propaganda production fragments further. Any application of AI to help solve this problem has to be ethical and explainable; we must be able to attribute any classification made to a set of predetermined rules. But it must also involve higher levels of human scrutiny if we want to stem the influence of terrorist groups, while protecting legitimate free speech.
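One way to make that attribution and human oversight concrete is a rule-attributed triage layer sitting on top of a classifier’s score. The sketch below is an illustration under stated assumptions, not a real policy: the thresholds and rule names are invented, and in any real system they would be set by platform policy and regulators.

```python
# A minimal sketch of explainable, rule-attributed triage. Every
# automated decision records which predetermined rule it is attributed
# to, and the uncertain middle band is routed to a human reviewer.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str    # "remove", "human_review", or "allow"
    rule: str      # the predetermined rule the decision is attributed to
    score: float   # the classifier's probability, retained for audit

REMOVE_THRESHOLD = 0.95  # assumed: act automatically only when near-certain
REVIEW_THRESHOLD = 0.60  # assumed: escalate the grey zone to people

def triage(score: float) -> Decision:
    """Map a model score to an auditable, rule-attributed decision."""
    if score >= REMOVE_THRESHOLD:
        return Decision("remove", "rule 1: high-confidence match to known signals", score)
    if score >= REVIEW_THRESHOLD:
        return Decision("human_review", "rule 2: uncertain, requires human judgement", score)
    return Decision("allow", "rule 3: below review threshold", score)

print(triage(0.97))  # removed, with the rule the decision is attributed to
print(triage(0.72))  # escalated to a human moderator
```

The design point is that no classification is a black box: every automated action carries the rule that produced it, so decisions can be audited, and the cases the model is least sure about are exactly the ones that reach human judgement.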

