UK develops tool to automatically block extremist content

A long-running theme of successive British governments has been that tech giants must do more to tackle extremism on their platforms. For the most part, those giants shrugged, safe in the knowledge that while they could effectively filter content in countries where strict laws were already in place, their general fondness for free internet principles meant they could comfortably stare down the government should it threaten bans or fines.

So it seems that rather than going through the same old song and dance of empty threats followed by inaction (Cameron was talking tough on WhatsApp over three years ago), the government has taken the initiative and come up with its own solution: an extremism-blocking tool that can reportedly detect 94% of Islamic State (ISIS) online activity with 99.995% accuracy.

The tool, developed by London’s ASI Data Science, was trained using thousands of hours of data posted by ISIS, with help from £600,000 of government funding. If the tool has doubts about something, the post is flagged for human moderators to check.
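ASI hasn't published how the tool works internally, but the behaviour described – score each post, act automatically on confident calls, and pass doubtful ones to moderators – is a familiar confidence-threshold pattern. Here's a minimal sketch of that pattern; the scoring function, thresholds and labels are all illustrative assumptions, not details of ASI's system:

```python
# Minimal sketch of a confidence-thresholded moderation pipeline.
# Everything here (scorer, thresholds, labels) is an illustrative
# assumption, not a detail of ASI Data Science's actual tool.

BLOCK_THRESHOLD = 0.995   # confident enough to act without a human
REVIEW_THRESHOLD = 0.60   # the "doubts" zone: flag for moderators

def score_post(text: str) -> float:
    """Toy stand-in for a trained model; returns P(extremist content)."""
    flagged_terms = {"example_propaganda_phrase"}  # hypothetical term list
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.7)

def moderate(text: str) -> str:
    p = score_post(text)
    if p >= BLOCK_THRESHOLD:
        return "block"         # high confidence: remove automatically
    if p >= REVIEW_THRESHOLD:
        return "human_review"  # uncertain: queue for a moderator
    return "allow"

print(moderate("a perfectly ordinary post"))           # -> allow
print(moderate("contains example_propaganda_phrase"))  # -> human_review
```

The interesting design decision in any such system is where the thresholds sit: lower the block threshold and more content is removed automatically, but more mistakes are also made without a human in the loop.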

While the tool is predominantly aimed at smaller companies without their own solutions to the problem, the government hasn't ruled out legislating to force businesses to use the software. “We’re not going to rule out taking legislative action if we need to do it,” home secretary Amber Rudd told the BBC. “But I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we’ve got.”

That’s the Global Internet Forum to Counter Terrorism, which brings together several governments alongside the likes of Facebook, Google and Twitter.

Possible issues

An accuracy rate of 99.995% could be game-changing in the fight against terrorism, but the key word there is “could.” This solution – assuming it works as well in the wild as it does in the lab, and that terrorists don’t simply find ways around it – is aimed at tackling online radicalisation and propaganda, but does nothing to address the considerably thornier issue of encryption.

In other words, if people are radicalised offline, there are still plenty of private online channels for would-be terrorists to plot, away from AI checks – and there’s no simple solution to that, no matter how much hazily briefed ministers might wish there were. And that’s before considering the potential for unchecked black-box algorithms to flag false positives without any human intervention.
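To put that false-positive worry in numbers: 99.995% accuracy still means being wrong about one post in every 20,000. A back-of-the-envelope sketch, assuming the figure translates to a flat per-post error rate (the evaluation setup hasn't been published):

```python
# Back-of-the-envelope arithmetic on the claimed 99.995% accuracy.
# Assumes a flat 0.005% per-post error rate - an assumption, since
# ASI's evaluation methodology wasn't published.

error_rate = 1 - 0.99995  # one mistake per 20,000 posts

for posts_per_day in (1_000_000, 100_000_000):
    wrong_calls = posts_per_day * error_rate
    print(f"{posts_per_day:>11,} posts/day -> ~{wrong_calls:,.0f} wrong calls/day")

# Output:
#   1,000,000 posts/day -> ~50 wrong calls/day
# 100,000,000 posts/day -> ~5,000 wrong calls/day
```

Fifty misjudged posts per million sounds trivial, but at the scale of a Facebook or YouTube it becomes thousands of wrong calls a day – which is exactly why the human-review step, and who staffs it, matters.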

Still, proactive action on terrorism is undoubtedly a positive step – and it’s encouraging to see governments taking the initiative where Silicon Valley has been somewhat cautious in the past. YouTube, for example, has been highly effective at weeding out porn uploaded to its platform, yet extremist content has been allowed to slip through the net for years – and you can’t help feeling that the heavy penalties for the former may have helped shape the company’s thinking in the early days.

It may feel a little heavy-handed, but at this point, anything that makes technology giants push their social responsibilities higher up the agenda should be cautiously welcomed.

Image: Peter, used under Creative Commons
