Facebook’s algorithms “let advertisers directly target racists”
Facebook, as I’ve written before, is a company that is experiencing a huge number of growing pains. That’s to be expected for any club that counts nearly a third of the planet amongst its members – you can’t expect all of them to represent your values.
All the same, it’s not a good look when your revenue model makes it stupidly easy to spread hate speech to an audience you know will be receptive. This is the problem Facebook has to face up to, after a ProPublica investigation uncovered that advertisers could serve sponsored Facebook ads directly in the timelines of people with the term “Jew hater” in their bio.
The publication paid $30 to target individuals who were interested in the topics of “Jew hater” (2,274 people), “how to burn jews” (two people), “History of why jews ruin the world” (one person) and “Hitler did nothing wrong” (15 people). When this coalition of bigotry proved too small to make a viable advertising segment, the social network’s algorithm suggested that ProPublica also target people with an interest in the Second Amendment, which would boost the audience by a healthy 119,000 people. The Second Amendment, for the unaware, relates to the right to bear arms – and the fact that the company’s algorithm suggested the group implies there is plenty of overlap in the Venn diagram between anti-Semites and those who wear their enthusiasm for gun ownership on their sleeves. In the end, ProPublica settled on adding fans of the German Schutzstaffel and the Nazi Party to its advertising segment, boosting it by a further 5,643 people.
While ProPublica put a deliberately bland advertisement out to these people (it was approved by Facebook within 15 minutes), the risk is that others could use the social network’s unprecedented ability to target people by their interests to spread propaganda or organise far-right rallies. As it was, ProPublica managed to get 101 people to click through on a broadly untargeted message.
Facebook, to its credit, was quick to take down the offensive tags as soon as ProPublica reported its findings, and the company was keen to point out that the categories were algorithmically generated. That said, Slate followed up with an investigation of its own, finding equally troubling categories for advertisers to exploit.
Facebook has now removed the ability of advertisers to target people based on “self-reported targeting fields”, which should lead to more sanitised options – although it’s presumably still possible to target using dog-whistle terms if you know what legitimate phrases to use.
Facebook issued a full statement on the matter, which claims that “hate speech and discriminatory advertising have no place on our platform” and that community safety is “critical to our mission”. As such, “we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue”.
That’s all well and good, but it treats the problem as one of advertising and algorithms. It says nothing about the people who proudly broadcast their anti-Semitism on Facebook’s pages. Being able to advertise directly to racists is clearly a sociological problem if it’s exploited, but it would be a non-issue if Facebook didn’t allow such people to self-identify without repercussions. To suggest this is merely an algorithmic snafu is burying the lede somewhat – add that to Facebook’s problem pile.