Trolling: The evolution and future of online abuse

Trolling used to mean something comparatively good-natured. How did we graduate to death threats and intimidation, and what comes next?

Back in January, I took part in a panel discussion on creativity and trolling for Wieden+Kennedy and Crowd Talks, alongside designer Simon Whybray and Gadgette editor Holly Brockwell. It’s a fascinating topic that I’ve written about a fair bit in the past, but not so much here. Time to change that...

So, internet historians and pedants of the world will object to the way the definition of trolling has evolved over the years. What used to mean reasonably good-natured winding up of the self-important (think Rickrolling, or baiting fanboys) has transformed to encompass a whole menagerie of depressing activities, from rape threats and mocking a target’s grief to sending round SWAT teams (“swatting”) and the ever-popular common or garden death threat.

Pedants who, like me, grew up in the 1990s would probably describe this as closer to “flaming” than trolling, but even then, this is some pretty extreme flaming. Depressingly, death threats, and worse, have become part of the grammar of internet activity, which doesn’t say great things for us as a virtual society.

"Depressingly, death threats have become part of the grammar of internet activity, which doesn’t say great things for us as a virtual society."

As I wrote in Wired back in 2013, a landmark 2004 paper by psychologist John Suler set out to explain why people act out online in ways that they never would in the real world, and identified six factors: “Dissociative anonymity (‘my actions can't be attributed to my person’); invisibility (‘nobody can tell what I look like, or judge my tone’); asynchronicity (‘my actions do not occur in real-time’); solipsistic introjection (‘I can't see these people, I have to guess at who they are and their intent’); dissociative imagination (‘this is not the real world, these are not real people’); and minimising authority (‘there are no authority figures here, I can act freely’)”. All six come down to one key difference: communicating using computers minimises our empathy because it feels so foreign.

Personally, to date, I have managed to stay reasonably clear of abuse, short of your standard range of insulting comments and the odd shout-out on Twitter (including from a Labour activist outraged that I wasn’t writing propaganda on the party’s behalf in a publication he deemed to be sympathetic). This might be because I’m not hugely prolific on social networks, but it probably helps that I’m a man with opinions, rather than a woman, if I’m brutally honest.

Trolling and creativity

The classical definition of trolling is still present in some respects, which isn’t entirely surprising. People still enjoy a wind up, after all. Look beyond the malice and threats of the modern definition, and there’s actually an interesting subset of trolling: creativity. Using trolling to make a statement is a curious beast, because the audience tends to be either too jaded to bite or not jaded enough to get the joke. Here are a couple of case studies.

The first is nice and simple. A writer at The Telegraph gets an easy article out of pretending to be the most outrageous left-wing stereotype he can think of in the Guardian’s comments section. The result? A (probably statistically insignificant) number of upvotes from readers agreeing with his comments, unaware that their ideology was actually being mocked in a snide column aimed at a different echo chamber altogether (you could very easily reverse the experiment, should the Guardian feel so inclined).

The second is a bit more subtle. On the night of the Paris attacks last November, a Twitter parody account claimed that the Eiffel Tower’s lights had been switched off for the first time since 1889.

Why is this trolling? Because it’s not true: the lights go out every night at 1am. It was a sting to catch out the kind of person who believes anything they read online and passes on a meme without critical thought. It worked terrifyingly well, with a number of news sites and journalists retweeting it without pausing to question exactly how it would be possible for the lights to have been on continuously for 126 years.

“In general, I am fascinated by the way history and fake history spreads on Twitter, such as the many ‘History in Pics’ type accounts, and the very low bar for spreading a viral meme through a credulous public,” the account’s creator said in an interview. With nearly 29,000 retweets at the time of writing, it’s possible the joke was too clever by half.

For the most part, though, what was once a catch-all term for harmless pranking now encompasses abuse, threats and intimidation. I can’t help but wonder if the reason people (including, it would seem, the police) struggle to take threats on the internet seriously is partially down to this accident of linguistic evolution. The word itself still sounds just as benign as its origins: trolls aren’t real, and they can’t hurt you – so grow a thicker skin.

"If you dismiss honest disagreement as vindictive, you just retreat further and further into your own echo chamber, especially if your friends offer unqualified support in return."

Of course, this naive dismissal of trolling cuts another way, too: knowing full well that there are people out there trying to upset and silence you is actually a brilliant get-out clause for legitimate criticism, and sadly one that is definitely used. For every man or woman who has self-censored for fear of abuse, there’s another who will use “I was getting trolled” as dismissive shorthand for “people were calling me up on some factual inaccuracies”. A mindset that files valid criticism away alongside malicious abuse isn’t a great place to be. If you dismiss honest disagreement as vindictive, you just retreat further and further into your own echo chamber, especially if your friends offer unqualified support in return.

So, thanks to trolling, we’re in a situation where people either ignore their critics entirely, or pander to the extreme elements and self-censor, depending on their confidence. Neither is ideal.

How have we not fixed the problem?

Whether you consider this a problem worth more than a passing thought will depend on how seriously you take the internet. I personally see a marked divide between those who use the internet as the centre of their social world and those who use it pretty passively to augment their everyday life. For some, the internet is like a toaster or kettle – it makes things a bit easier, but it wouldn’t be the end of the world if it stopped working. It’s easy to ignore something that isn’t central to your daily life – and even for those who see it as more than a simple utility, the majority will go their whole lives without being targeted, unless something goes wrong and they’re catapulted into the public eye against their will.

That, of course, will be of little comfort to those who put up with constant abuse on a daily basis. It’s quite easy to put forth platitudes like “grow a thicker skin” or “sticks and stones”, but the relentless – and sometimes quite threatening – abuse that others receive is in a whole different league, and there seems to be very little you can do. As Brockwell pointed out on the panel discussion, ignoring doesn’t work, confronting doesn’t work, and the sites themselves are spectacularly unhelpful at responding to abuse, seemingly treating the huge scale they work at as the ultimate get-out clause that nothing can be done, rather than questioning business models in which such crucial functionality has become an afterthought.

"If humans can’t be trusted to self-police, there are other possibilities."

If humans can’t be trusted to self-police, there are other possibilities. The Guardian recently outlined a whole variety of technical options that involve slowing people down and making them consider their actions. Asking people to rate others’ comments and then rate their own produces a small but significant “edit rate”, while even something as simple as software detecting unpleasantness and asking the user if they’re 100% certain they want to proceed can help. Anyone who has used Facebook’s moderation tools will also be aware of the delicious way things work there – comments are hidden from everyone but the person who posted them, leaving the troublemaker shouting into the void and wondering why nobody is biting.
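To make that last idea concrete, here’s a minimal sketch of how that kind of shadow-hiding could work – the Comment class and visible_to check are purely illustrative assumptions, not any real platform’s moderation API:

```python
# A minimal sketch of "shadow-hiding", using a hypothetical Comment type
# rather than any real platform's API. A hidden comment stays visible to
# its own author, so the troublemaker keeps shouting into the void, while
# everyone else simply never sees it.
from dataclasses import dataclass

@dataclass
class Comment:
    author_id: int
    text: str
    shadow_hidden: bool = False  # set by a moderator or an automated filter

    def visible_to(self, viewer_id: int) -> bool:
        # The author always sees their own comment; everyone else only
        # sees it if it hasn't been shadow-hidden.
        return (not self.shadow_hidden) or viewer_id == self.author_id

# The hidden comment still appears to its author (id 1) but not to anyone else.
comment = Comment(author_id=1, text="inflammatory remark", shadow_hidden=True)
assert comment.visible_to(1)
assert not comment.visible_to(2)
```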

A simple, active moderator presence can have a marked impact, though. In the real world, there’s a well-known sociological concept called “Broken Windows Theory”, which argues that well-presented neighbourhoods tend to attract less crime because it would go against societal expectations. By contrast, run-down, vandalised areas end up attracting more crime because, hey, everyone else is doing it. It’s very easy to see how similar forces could be at play online: even in a virtual environment, nobody relishes being an outcast. Sometimes, people even end up repenting and apologising under their own steam when confronted with the genuine harm they’ve caused.

That brings me back to the reason why people can be jerks online in a way they wouldn’t dream of in the real world: empathy. The internet has moved on from text on IRC channels and bulletin boards to a rich, real-time multimedia experience where (nearly) everyone has evidence of their humanity in photos and video across social sites. I recall naively championing Facebook comments in a past job, on the logic that nobody would want to seem like a tool with their real name and photo attached. I was wrong. In fact, abuse only seems to be getting worse, despite the real-life window dressing the web has incorporated over the years.

"Virtual reality brings us to an interesting crossroads. Will this lead to worse and worse abuse – death threats wouldn’t necessarily have to come in the form of 26 letters anymore – or a gentler, kinder online society?"

In that respect, if virtual reality becomes more prominent in our social networking (and with Facebook now owning Oculus, it’s very possibly a “when” rather than an “if”), we’ll be left at an interesting crossroads. Will this lead to worse and worse abuse – death threats wouldn’t necessarily have to come in the form of 26 letters anymore – or a gentler, kinder online society? It’s far too early to say confidently one way or the other, but one sociological study should give us a sliver of hope: researchers at the University of Haifa found that people were far less likely to be hostile to each other online if they had to look their remote peer in the eye.

Could technological advancements return some much-needed real-world humanity to an unfeeling virtual one? I hope so. Otherwise, the future of online abuse could be something wholly more unpleasant than just words and pictures.
