Twitter expands its definition of “dehumanising language”
In a blog post from 25 September, Twitter announced plans to crack down harder on dehumanising language. In Twitter parlance, this refers to tweets that reduce people to simplistic traits denying their human nature (such as referring to social groups by stereotypes) or that compare them to animals, diseases or objects.
Previously, the ‘hateful content policy’ covered content that promoted violence based on race, gender, ethnicity, or a number of other identities, and some hateful users were banned for violating it. However, Twitter’s policy only covered tweets that had a particular target, and many users circumvented the rules by using ambiguous language aimed at no one in particular.
Now the social media giant is expanding its definition of “dehumanising language” to include this material. This means tweets don’t have to “@” a target to violate the rules — instead, dehumanising language aimed at any kind of social or political group, of any size, falls foul of the policy.
This reduces the leeway users have to skirt around Twitter’s rules on hate speech. The hope is that it will make Twitter a somewhat less hateful forum.
In an unprecedented move, Twitter is asking for feedback on the proposed changes. Previously, changes followed an internal “policy development process”, but this time the platform is inviting users to offer feedback via a form, in order to “increase health of public conversation”. The form is open until 9 October, and the policy change will follow shortly after.
It remains to be seen whether this will noticeably curtail the amount of hate speech on Twitter, but it’s encouraging that the social media platform, famously slow to remove hateful content, is making the change at all.