February 2018 - Digital Report

On 25th January, Theresa May spoke at the World Economic Forum in Davos and told world leaders that big tech firms need to take more responsibility for the content they host, saying that harmful material, such as content depicting child abuse or promoting terror and extremism, should be “removed automatically”.

"Earlier this month, a group of shareholders demanded that Facebook and Twitter disclose more information about sexual harassment, fake news, hate speech and other forms of abuse that take place on the companies' platforms," she said.

"Investors can make a big difference here by ensuring trust and safety issues are being properly considered - and I urge them to do so."

Perhaps unsurprisingly then, just a few weeks later, the Home Secretary, Amber Rudd, announced the development (at a cost of £600k to the taxpayer) of a tool that supposedly detects and blocks jihadist content online. There were hints that tech companies might end up being legally required to use it.

The Home Office claims that the tool, developed by London-based firm ASI Data Science, can detect 94% of Daesh propaganda with 99.995% accuracy. On the Home Office's own framing, that second figure is a false-positive rate: roughly 5 in every 100,000 legitimate videos would be wrongly flagged for human review.
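
What those rates mean in practice depends on how many videos a platform handles and on how rare propaganda actually is among uploads. Here is a rough back-of-the-envelope sketch in Python; the upload volume and propaganda share below are illustrative assumptions, not Home Office data.

```python
# Back-of-the-envelope estimate of what the claimed rates imply at scale.
# The volume and propaganda-share inputs are illustrative assumptions.

detection_rate = 0.94          # claimed share of Daesh propaganda caught
false_positive_rate = 0.00005  # reading "99.995% accuracy" as a 0.005% FP rate

daily_uploads = 5_000_000      # assumed uploads/day on a large platform
propaganda_share = 0.0001      # assumed 1 in 10,000 uploads is propaganda

propaganda = daily_uploads * propaganda_share
legitimate = daily_uploads - propaganda

caught = propaganda * detection_rate                 # true positives
missed = propaganda - caught                         # false negatives
wrongly_flagged = legitimate * false_positive_rate   # false positives

print(f"Propaganda caught per day:  {caught:,.0f}")
print(f"Propaganda missed per day:  {missed:,.0f}")
print(f"Legitimate videos flagged:  {wrongly_flagged:,.0f}")

precision = caught / (caught + wrongly_flagged)
print(f"Precision of flags:         {precision:.1%}")
```

Even under these favourable assumptions the tool flags roughly 250 legitimate videos a day, and about one in three of everything it flags is an innocent upload, simply because propaganda is so rare relative to ordinary content. This base-rate effect is why headline accuracy figures can mislead.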

Industry experts, however, were skeptical of the tool for two reasons.

Firstly, distinguishing genuine extremist content from legitimate material, such as news reports that include footage of terror attacks, is notoriously difficult, and it often means that extremists can defeat a content filter with only minor tweaks to their material, as the sketch after these two points illustrates.

Secondly, legally requiring firms to deploy a tool that pre-filters content according to an algorithm specified by the government is a potentially slippery slope towards wider censorship.
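
On the first point, the brittleness problem is easiest to see with the simplest kind of filter: an exact-hash blocklist. The ASI tool is reported to be a machine-learning classifier rather than a hash matcher, so the sketch below illustrates the general cat-and-mouse dynamic only, not how the tool itself works; the file contents are placeholders.

```python
# Minimal illustration of filter brittleness: an exact-hash blocklist
# stops matching after a one-byte change to the file. This is NOT how
# the ASI tool works; it only shows why content filters invite evasion.
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact SHA-256 fingerprint of a piece of content."""
    return hashlib.sha256(data).hexdigest()

# A hypothetical blocklist built from known propaganda files.
known_bad = b"...bytes of a known propaganda video..."
blocklist = {fingerprint(known_bad)}

# The same content with a trivial tweak, e.g. one byte changed by
# re-encoding or re-packaging the video.
tweaked = known_bad + b"\x00"

print(fingerprint(known_bad) in blocklist)  # True  - original is caught
print(fingerprint(tweaked) in blocklist)    # False - tweak evades the filter
```

Machine-learning classifiers are less brittle than exact hashes, but the same dynamic applies: small, deliberate changes that preserve a video's meaning for a human viewer can still move it across a model's decision boundary.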