Microsoft warns EU anti-terror law is 'unworkable'

The EU wants member states to be able to fine tech firms if they fail to remove terrorist material from their websites

Tech giants have told the EU it would be impossible to comply with officials' demands to take down terrorist material within one hour without damaging free speech.

Microsoft warned the European Commission that its new anti-terror law would be "unworkable in practice" and would lead to "erroneous decisions", while Google, which owns YouTube, also said it could be "unworkable" and predicted "very negative consequences".

The claims, made in 2018 in response to a confidential EU consultation, shed light on how Silicon Valley has resisted stricter demands on its services even as it pledged to clean them up.

Snapchat, however, told the Telegraph that it had changed its view and now supports the EU's one-hour deadline. Google and Microsoft indicated that they stood by their concerns.

The EU is intent on passing its new Terrorist Content Regulation, which would empower member states to fine tech firms up to 4pc of their revenue if they persistently fail to remove terrorist material from their services.

The controversial law has its origins in the global wave of terror attacks that followed the rise of Isis in 2014, during which both Islamic extremists and the far Right were able to exploit social networks and messaging platforms to radicalise, recruit and coordinate. 

Since then, the biggest tech companies have cut their response times, implemented some of the EU's demands and set up a global counter-terror council to share information between themselves. But Brussels believes more can be done, and wants to make sure smaller companies comply.

European civil rights activists and press freedom groups fear that the law could create an impossible burden for small firms while causing large ones to impose new levels of censorship and surveillance on their users.

A spokeswoman for the Commission said: "Research shows that one third of all links to Daesh propaganda disseminate within one hour of release... member states most affected by terrorist propaganda have repeatedly called for a one-hour rule."

The consultation responses were published last week by Patrick Breyer, a German MEP who opposes the new law. Though the responses echo what some companies have said in public, they go much further in their criticism.

Google said: "One hour is often not enough time to make a responsible assessment... it creates the wrong incentive by encouraging deletion and speedy decisions over in-depth analysis."

Microsoft said the limit would be "likely to result in erroneous decisions and the over-removal of content". Dropbox, Snapchat and the non-profit Internet Archive also argued against the proposal. Facebook did not submit a response.

A spokesman for Microsoft told the Telegraph: "In many cases we work diligently to take down terrorist content within an hour or less, and are often able to do so. But in other cases there are trade-offs – for example, when a significant amount of human judgement is required."

But the Commission's spokeswoman dismissed that argument, saying there is no need for tech companies to inspect such material themselves because it has already been thoroughly investigated by the government ordering its removal.

Giants such as Facebook and Google now routinely use artificial intelligence (AI) to remove terrorist material before any human user can see it, and have formed the Global Internet Forum to Counter Terrorism (GIFCT), an industry body that has been hailed by governments but also criticised as a puppet of Big Tech.

Tom Drew, who helped build the Home Office's own AI scanners and is now head of counter-terrorism at the British AI firm Faculty, said: "The structures and technologies in place to detect and remove terrorist content have improved immeasurably... [the industry] can provide expertise and capabilities that in 2017 even the major tech companies were telling the Home Secretary were not possible."

He added that some "incredibly thorough" European agencies, including the Metropolitan Police, are already trusted enough that tech firms sometimes accept their requests automatically.

However, Daphne Keller, an internet law professor and former assistant general counsel of Google, warned that removal orders might not be trustworthy because the list of agencies empowered by the new law is far too broad.

She said: "Lawmakers should be clear that what this is is a requirement to do the state's bidding, and the state in this case might be local police in Budapest. These are not orders coming from a court that has examined the content and reached a reasoned determination."

Last year the Internet Archive said it had received false removal notices from the French government for 555 web addresses in a single week, including academic articles, US government reports and a podcast about veganism. 

Prof Keller also said the law could allow governments to impose their own definitions of terrorism across the whole EU, citing Spain and Hungary, which have both been accused of misusing over-broad anti-terror laws.

Mr Breyer described other requirements of the law as a Trojan horse for the imposition of "upload filters", which would examine everything that users post and block anything they detect as dangerous.
