Study finds Reddit’s ban of toxic subreddits worked in curbing hate speech
In 2015, Reddit moved away from its laissez-faire approach to moderation by introducing a new set of anti-harassment rules, resulting in the closure of a handful of deplorable subreddits. These included the body-shaming hub r/fatpeoplehate and the white nationalist forum r/CoonTown.
At the time, the move was lambasted by corners of the Reddit community as a curtailing of free speech; a predictable reaction, given that Reddit’s previous approach had cultivated a number of fringe right-wing groups. Other Reddit users questioned the ban’s effectiveness, suggesting that closing a handful of subreddits would simply push those users into other communities.
Now, a study co-authored by researchers from the Georgia Institute of Technology, Emory University and the University of Michigan has determined that the ban worked, and that the closure of toxic subreddits had a clear effect on hate speech across the site.
The researchers analysed over 100 million Reddit posts, from before and after the administrators banned r/fatpeoplehate and r/CoonTown, and devised a way to quantify the usage of hate speech:
“Given that Reddit has banned the r/fatpeoplehate and r/CoonTown forums, we focus on textual content that is distinctively characteristic of these forums,” the paper explains. “Using an automated keyword identification technique, we build lexicons of keywords for r/fatpeoplehate and r/CoonTown, which makes it possible to track whether the words in these lexicons become more common in other forums after the ban.
“Next, we manually inspect the automatically generated lexicons, and identify a subset of terms that are especially oriented towards hate speech. These manually refined lexicons are sparser, but offer higher precision.”
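The lexicon-matching step described above can be sketched in a few lines of Python. The lexicon entries and posts below are invented placeholders (the study built its lexicons automatically and then refined them by hand); the point is simply how a rate of lexicon-word usage per token can be tracked across forums:

```python
# Minimal sketch of lexicon-based measurement, assuming a hand-refined
# lexicon of hate keywords. Lexicon entries and posts are placeholders,
# not the study's actual data.
import re

lexicon = {"slura", "slurb", "slurc"}  # stand-ins for manually refined hate keywords

def hate_word_rate(posts, lexicon):
    """Fraction of all tokens across the given posts that appear in the lexicon."""
    total_tokens = 0
    hits = 0
    for post in posts:
        tokens = re.findall(r"[a-z']+", post.lower())
        total_tokens += len(tokens)
        hits += sum(1 for t in tokens if t in lexicon)
    return hits / total_tokens if total_tokens else 0.0

pre_ban = ["slura is everywhere here", "just a normal comment"]
post_ban = ["just a normal comment", "another ordinary post"]

print(hate_word_rate(pre_ban, lexicon))   # 0.125 — 1 hit in 8 tokens
print(hate_word_rate(post_ban, lexicon))  # 0.0
```

Comparing this rate for the same users before and after the ban, against a control group, is what lets a drop in usage be attributed to the intervention rather than a site-wide trend.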
Across both subreddits, the researchers found that a significant number of users abandoned their accounts after the ban came into effect. Those who remained did not become more virulent in their opinions, but instead decreased their level of hate speech “by at least 80%” in subsequent posts.
“Following the ban, Reddit saw a 90.63% decrease in the usage of manually filtered hate words by r/fatpeoplehate users, and an 81.08% decrease in the usage of manually filtered hate words by r/CoonTown users (relative to their respective control groups). The observed changes in hate speech usage were verified to be caused by the ban and not random chance, via permutation tests.”
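A permutation test of the kind the researchers mention can be sketched as follows. The per-user rates here are invented numbers, but the logic — repeatedly shuffling the group labels to see how often chance alone produces a gap as large as the observed one — matches the stated method:

```python
# Sketch of a one-sided permutation test, assuming invented per-user
# hate-word rates; not the study's data or exact procedure.
import random

def permutation_test(treated, control, n_permutations=10000, seed=0):
    """Estimate how often a random relabelling of users produces a
    treated-vs-control gap at least as large as the one observed."""
    rng = random.Random(seed)
    observed = sum(control) / len(control) - sum(treated) / len(treated)
    pooled = treated + control
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        perm_treated = pooled[:len(treated)]
        perm_control = pooled[len(treated):]
        diff = sum(perm_control) / len(control) - sum(perm_treated) / len(treated)
        if diff >= observed:
            count += 1
    return count / n_permutations  # p-value

# Invented post-ban hate-word rates per user:
banned_users = [0.01, 0.02, 0.00, 0.01, 0.03, 0.02]   # ex-subreddit members
control_users = [0.10, 0.12, 0.09, 0.11, 0.13, 0.10]  # matched control group

p = permutation_test(banned_users, control_users)
print(p)  # a small p-value: the gap is unlikely to be chance
```

A small p-value here means almost no random relabelling reproduces so large a gap, which is what licenses the paper’s claim that the drop was “caused by the ban and not random chance”.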
One criticism of the ban in 2015 was that it would “spread the infection”, with former users of toxic subreddits bringing their message into innocuous communities. The researchers instead found that r/CoonTown users moved to subreddits “where racist behavior has either been noted or is prevalent”, including r/The_Donald and r/BlackCrimesMatter. The users of r/fatpeoplehate moved to subreddits “dedicated to roasting users who voluntarily post pictures of themselves or others”, including r/RoastMe, as well as gaming and TV hubs like r/fo4 and r/MrRobot.
In both cases, the study found no change in hate speech usage of “invaded” subreddits: “The migrants did not bring hate speech with them to their new communities, nor did the longtime residents pick it up from them.”
The bottom line for the researchers is that the ban worked, succeeding at both a user and community level in reducing the prevalence of hate speech. The researchers do, however, note that some toxic users seemed to flee to other, more obscure sites, such as the Reddit clone Voat, and Gab. The ban did not, therefore, stamp out bigotry altogether, but as far as Reddit’s goal of maintaining the health of its own community goes, it was a clear success.
The continued existence of subreddits such as r/The_Donald suggests that Reddit’s approach is far from comprehensive, but the study is a strong indication that throwing barriers in front of hateful communities can prevent toxic ideologies taking root on particular platforms. As to the larger question about how hatred can be fought across the internet in general, that’s an issue preoccupying not only the likes of Facebook and Google, but governments across the globe. Earlier this year, for example, a number of UK MPs accused social networks of “peddling hate”, and called on them to do more to prevent online abuse.