Sorry, Stan Collymore, you can't beat Twitter racists with an algorithm

Barry Collins
22 Jan 2014

Twitter is, once again, taking a mauling in the mainstream media for failing to tackle abuse. I’ve just watched ex-footballer Stan Collymore on the BBC Breakfast sofa, describing how he received racist abuse and death threats for daring to suggest a Liverpool player dived for a penalty. Earlier this week, Olympic medallist Beth Tweddle took some appalling, misogynistic abuse in a live Twitter Q&A about women in sport. Twitter’s ability to amplify the opinions of the dregs of our society remains undiminished.

Twitter, Collymore and others argue, is not doing enough to tackle the abusers. I wholeheartedly agree. What I don’t agree with is Collymore’s assertion that tackling racist comments is a simple matter of tapping out a few lines of code.

Collymore’s a footballer, not a computer scientist, so his faith in the ability of algorithms to filter out abuse is understandable, if misplaced. No "simple" script or algorithm can accurately discern the context of a tweet; making computers interpret human language is a brutally complex task. The continued existence of the Loebner Prize – an annual competition that will be scrapped the moment judges can no longer distinguish a computer from a real human in a text-based Turing test – is proof of that.

Several of the abusive tweets fired at Collymore contained the word "nigger", for example. You might think it reasonable for Twitter simply to block any tweet that contains such an offensive word, but that policy would also ensnare entirely benign tweets: I found several non-abusive examples within seconds of tapping the word into Twitter’s search engine.

The word "yid" is used as a term of abuse against Jewish people, but as the Jewish writer and comedian David Baddiel explained on last night’s Newsnight, in reference to the Nicolas Anelka case, the word has also been "reclaimed" by Tottenham Hotspur supporters, in much the same way “nigger” has been by black comedians such as Chris Rock and Reginald D Hunter.

"Spurs fans are completely correct to say that they think they do it [use the word 'yid'] in a different way to the way that Chelsea fans do it," Baddiel explained. "That is absolutely right. All that has to go into the mix when you’re trying to get to a place, at the end of the day, where anti-Semitism isn’t on the terraces any more. It is complicated, it is nuanced."

It’s even more complicated when you’re dealing with hundreds of millions of tweets every day. There is simply no conceivable way an algorithm could, in real-time or otherwise, accurately discern the racist intent of a tweet.

Human intervention

That’s not to say Twitter is powerless to prevent such abuse. Collymore claims that, six weeks after he reported a previous incident of racist abuse, Twitter has yet to provide the account holder’s details to the police. And many of the tweets Collymore says he reported to the site remain online, days and weeks after they were published.

The journalist Caroline Criado-Perez, who was subjected to vile threats for doing nothing more provocative than campaigning for Jane Austen to appear on a banknote, has also claimed Twitter is too slow to respond to reports of abuse. It took a fellow journalist – not the police or Twitter – to identify the two offenders, who were eventually prosecuted.

I shudder to think how many reports of abuse Twitter receives each day. I suspect it’s tens of thousands. But Twitter’s not a poor company, nor a fledgling start-up any more. It patently needs to hire more staff to deal with abuse, because – unfortunately – this isn’t a job it can outsource to machines.
