Social media companies could soon foot the bill for policing internet hate in the UK
Amid warnings from the Crown Prosecution Service that online abuse should be considered akin to hate crimes committed face to face, the Department for Digital, Culture, Media and Sport (DCMS) has launched a drive to find better and more innovative ways of tackling internet abuse.
While it’s long been established that policing the internet is neither an easy feat nor a liberal one, the DCMS has come up with a number of ways to deal with online offences, including the creation of an internet ombudsman to monitor complaints. In a scheme that’s also being mulled over in France and Australia, the ombudsman’s role would be to deal with material such as abuse and violent threats posted on social media, acting as a mediator between those companies and victimised members of the public.
Social media companies might be held directly – and, what’s more, financially – to account, with a proposal circulating that would include a levy on the companies in order to raise the requisite funds to police the sites. “Later this year we will publish the government’s internet safety strategy, and a levy on social media companies is one of a series of measures that we are considering as part of our work,” a DCMS spokesperson told The Guardian. “We are determined to make Britain the safest place in the world to be online, and to help people protect children from the risks they might face.”
The proposal is not without precedent; football teams in the UK are obliged to pay for policing in their stadiums (and surrounding areas) under Section 25 of the Police Act 1996. Many believe that a viable solution to the proliferation of internet hate would be to bill cash-rich social media companies for costs that would otherwise be borne by understaffed and underfunded police forces.
The complications thrown up by this endeavour are many. The proposal will no doubt be met with derision by social media giants, many of which already employ moderators to monitor their sites and believe they’re doing an adequate job. But the real issue comes with the internet’s nebulous boundaries – who would foot the bill for cybercrime that transcends state boundaries? What happens when either the victim or the perpetrator lives overseas? What’s more, internet giants such as Facebook wield an enormous amount of popular influence. With two billion users worldwide, Facebook is hardly struggling for business. What’s to say that it wouldn’t simply up and leave the UK, on principle if nothing else, leaving millions of dismayed users high and dry, not to mention resentful of the UK government’s interventionary measures? These are all hypotheticals that need addressing.
The fresh drive to tackle internet abuse comes in the wake of the far-right Charlottesville rally, an event organised via social media that ended in tragedy with the death of 32-year-old counter-protester Heather Heyer. While the response – and responsibility – emanating from the tech world was largely commendable (Mark Zuckerberg pledged to eliminate hate speech on Facebook; Tim Cook penned a letter to Apple employees denouncing Trump’s response to the tragedy), the world needs more than PR-sanctioned words from billionaires. In order to solve the problem of online abuse, with its myriad real-world ramifications, concrete measures requiring concrete funds need to be implemented. Time for social media companies to cough up.