Facebook and Google launch first strikes against fake news
A week ago, polls were opening in the United States, and it looked extremely likely that Hillary Clinton would beat Donald Trump to the presidency. We all know what happened next, and the defeated party turned to recriminations, blaming everything from the creaking electoral college system to Hillary Clinton’s ongoing email problem.
But there was another factor at play here: in an election in which fact-checkers should all have received some kind of medal for their sterling work in calling out the lies, they were persistently dismissed as partisan troublemakers trying to rig the system. And it didn’t take long for people to take a critical look at Facebook and the way in which fake news could ricochet around the partisan echo chambers the site creates. In a similar pre-election vein, I wrote last week that (British, admittedly) adults have an over-reliance on search engines for topics that typically aren’t black-and-white enough for a quick crib-notes search-engine overview.
It looks like Google and Facebook are both taking action against their problems, which is too little too late for Hillary Clinton, but something that should be cheered in the long run. Especially when Facebook CEO Mark Zuckerberg’s initial reaction was to call the idea that his site’s nonsense conductivity tilted the election “a pretty crazy idea”.
Whether or not he believes that in his heart of hearts, Facebook has taken action. It’s a small thing, but possibly a sign of a bigger reaction to come: the Facebook Audience Network Policy has been updated to disallow fake news sites from using the ad platform. It already blocked “misleading or illegal content”.
“We have updated the policy to explicitly clarify that this applies to fake news,” a Facebook spokesperson said in a statement. “Our team will continue to closely vet all prospective publishers and monitor existing ones to ensure compliance.”
Hours before this, Google had also stepped up to the challenge. While less in the firing line than Facebook, the search engine had drawn criticism after Mediaite spotted a WordPress site right at the top of a search for “Final Vote Count 2016” in which – you guessed it – a partisan site was giving inaccurate information about the popular vote.
In fact, Clinton is still on target to win the popular vote by a decent margin (just under a million as things stand), and the site Google surfaced was plain wrong.
“The goal of search is to provide the most relevant and useful results for our users,” said Andrea Faville from Google in a statement. “In this case, we clearly didn’t get it right, but we are continually working to improve our algorithms.”
Google’s first answer to the problem wouldn’t stop a dubiously sourced WordPress site from appearing in the news (although you can bet the company is tweaking its algorithms behind the scenes after the uncomfortable scrutiny), but it shows a willingness to crack down on fake news sites in a different way: through their income. Google has now banned fake news sites from using its advertising platform. That’s a big deal, potentially: a BuzzFeed investigation previously found 100 pro-Trump fake news websites, all operating from a single town in Macedonia, getting rich on a combination of Google AdSense money and Facebook shares.
“Moving forward, we will restrict ad serving on pages that misrepresent, misstate or conceal information about the publisher, the publisher’s content or the primary purpose of the web property,” Faville explained.
Elsewhere, an – ironically Facebook-sponsored – hackathon has come up with a Chrome plugin that would highlight verified and unverified news by algorithmically fact-checking on the fly. The FiB plugin works like this:
“It classifies every post, be it pictures (Twitter snapshots), adult content pictures, fake links, malware links, fake news links as verified or non-verified using artificial intelligence.
“For links, we take into account the website’s reputation, also query it against malware and phishing websites database and also take the content, search it on Google/Bing, retrieve searches with high confidence and summarise that link and show to the user. For pictures like Twitter snapshots, we convert the image to text, use the usernames mentioned in the tweet, to get all tweets of the user and check if current tweet was ever posted by the user.” Then it adds a verified or not verified sticker to posts.
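To make the link-checking step concrete, here is a minimal sketch in Python of the kind of reputation check the FiB team describes. This is an illustration only, not FiB’s actual code: the domain lists are hypothetical placeholders, and the real plugin queries live malware/phishing databases and cross-checks page content against Google/Bing results rather than static sets.

```python
from urllib.parse import urlparse

# Hypothetical placeholder data. FiB's real pipeline consults live
# malware/phishing databases and search-engine results instead.
MALWARE_BLACKLIST = {"malware.example.com", "phish.example.net"}
REPUTABLE_DOMAINS = {"bbc.co.uk", "reuters.com", "apnews.com"}

def classify_link(url: str) -> str:
    """Label a shared link 'verified' or 'non-verified'.

    A sketch of the reputation step only; the content-matching step
    (summarising the page and searching it on Google/Bing) is omitted.
    """
    domain = urlparse(url).netloc.lower()
    if domain in MALWARE_BLACKLIST:
        # Known-bad domains are flagged immediately.
        return "non-verified"
    if domain in REPUTABLE_DOMAINS:
        return "verified"
    # Unknown domains stay non-verified pending deeper checks.
    return "non-verified"
```

Defaulting unknown domains to “non-verified” is the conservative choice for a fact-checking overlay: a fake-news site should have to earn a verified sticker, not receive one by omission.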
All very clever, but my problem with all these solutions is this: if you’re the kind of person who genuinely believes Hillary Clinton has had people killed and that the rich and powerful are rigging elections against you, then why would you trust the intentions of mega-rich companies such as Facebook and Google?
There are some problems that algorithms can’t solve, no matter how smart.