Zuckerberg’s reaction to the Facebook election crisis
Facebook has had a tough couple of years since Donald Trump came to power. There have been questions of election interference, its role in spreading fake news, and the almighty Cambridge Analytica scandal. With more elections on the horizon and fresh reports suggesting more users are abandoning Facebook than ever before, what happens next?
To address these concerns, and voice his own thoughts, Mark Zuckerberg dropped the first of what he calls a “series of notes” concerning Facebook’s security issues. Considering how long the topics of fake news and data breaches have been circulating, this is certainly a long time coming.

His post covers Facebook’s recent work to combat the spread of fake accounts and misinformation concerning the US elections, as well as the company’s plans for the future.
“My focus in 2018 has been addressing the most important issues facing Facebook,” Zuckerberg writes, “including defending against election interference, better protecting our community from abuse, and making sure people have more control of their information.”
Zuckerberg’s post is a worthwhile read but, as it’s fairly long, I’ve taken the time to break it down into what you really need to know about Facebook’s plans for the future.
Facebook and Fake Accounts
At the moment, finding and removing fake accounts is Facebook’s main priority. This makes sense when you realize that the majority of server abuse originates from these fake profiles. In the past six months alone, Facebook has removed over one billion fake accounts – most within minutes of their creation.
According to the Community Standards Enforcement Preliminary Report, Facebook disabled 583 million fake accounts in the first quarter of 2018 alone. This is a decrease from the 694 million fake accounts disabled at the end of 2017, though this doesn’t necessarily mean Facebook’s enforcement is becoming less effective.
Fake accounts are usually created in bulk, explaining the ridiculously high numbers of spam accounts in circulation. But, while these accounts are proving fairly easy to detect and disable, Facebook is clearly struggling with what Zuckerberg calls “sophisticated actors”. These “actors” are those who manually create fake accounts, one at a time, and network them to maximize the spread of misinformation.
“By working together, these networks of accounts boost each other’s posts, creating the impression they have more widespread support than they actually do.”
To combat this, Facebook has followed through on a promise made in October of last year and more than doubled its safety and security team, which now consists of over twenty thousand people.
“One advantage Facebook has is that we have a principle that you must use your real identity,” he explained. “This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity.
“So if the content shared doesn’t violate any policy, which is often the case, and you have no clear notion of what constitutes a fake account, that makes enforcement significantly harder.”
Apparently, this increase has paid off. Within the past year alone, Facebook has discovered and taken down more than 270 Russian accounts linked to the Internet Research Agency, as well as an Iranian propaganda network with hundreds of pages, groups, and accounts. It has also removed a network of accounts associated with a Brazilian presidential misinformation campaign.
“Our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.”
Through all of this, Zuckerberg continues to call for increased investment in security, stating that “these systems will never be perfect, but by investing in artificial intelligence and more people, we will continue to improve.” Nowhere in his post did Zuckerberg address the expenses associated with these new security measures, though he did imply that there was some disagreement amongst his investors surrounding his decision to continue investing so much into security.
How Facebook is tackling misinformation
Fake accounts are not the only culprits in the spread of misinformation. Clickbait services, spammers, and misinformed personal accounts are also responsible for distributing fake news, something Facebook has found much more difficult to deal with.
One of Zuckerberg’s main goals is to stop the spread of news that incites and promotes violence, stating that “In places where viral misinformation may contribute to violence we now take it down. In other cases, we focus on reducing the distribution of viral misinformation rather than removing it outright.” It’s a noble statement, though one that doesn’t quite align with his stance on Holocaust deniers and anti-semitism, which was that Facebook should be a “place where people can discuss all kinds of ideas, including controversial ones.”
Posts that have had their distribution “reduced” are flagged as false by the independent, non-partisan International Fact-Checking Network, and are then demoted, resulting in an average loss of 80% of future views. For those of you wondering, the IFCN is a unit of the highly respected Poynter Institute, which commits itself to transparent, non-partisan, and unbiased journalism.
Facebook is also blocking repeat spammers from its ad services, cutting them off from their profit sources and removing their incentives for spreading misinformation. Since many fake news sites rely almost entirely on Facebook and similar social media sites for their ad revenue, this could have quite a significant impact.
Facebook’s commitment to transparent advertising
This year has also brought new changes to Facebook’s advertising policy, the most important being a new system of hyper-transparency.
“You can see all the ads an advertiser is running, even if they weren’t shown to you,” Zuckerberg explains. “In addition, all political and issue ads in the US must make clear who paid for them. And all these ads are put into a public archive which anyone can search to see how much was spent on an individual ad and the audience it reached.”
So what exactly does this mean? Simply put, it means you can now see which organizations and parties are behind what ads. You can then see which groups or demographics these ads were targeted at, even if you aren’t the target audience.
All of Facebook’s ads are stored in its Ad Archive – a database available to anyone with a Facebook account that allows users to search for specific companies, organizations, and political figures.
Zuckerberg also revealed that Facebook has decided to not enforce a ban on political ads. Facebook had previously discussed this as an option, one which certainly would have been a simple solution to this dilemma. However, Zuckerberg and co decided against it, citing free speech as the main motive.
Zuckerberg claims this was not a financial decision. The verification process for political ads is, apparently, costly enough that Facebook barely turns a profit on them. Instead, he says that the change of heart was because Facebook “didn’t want to take away an important tool many groups use to engage in the political process.”
Independent Election Research Commission
In an effort to impartially analyze the effect of social media on elections, Facebook has organized what it’s calling an Independent Election Research Commission. Set up in April of this year, this commission pairs independent scholars with research topics chosen by the commission, and provides the researchers with funding and access to Facebook data for analysis.
Those associated with the commission, including Elliot Schrage, the Vice President of Communications and Public Policy, have clarified that “Facebook will not have any right to review or approve their research findings prior to publication.”
While the research obtained from this commission will most likely be beneficial to our understanding of how Facebook as a platform can affect the political process, there is justifiably some concern over the role of private data in this research initiative. The biggest controversy surrounding this development is Facebook’s plan to give independent researchers access to personal and private data. Zuckerberg states in his post that Facebook is “dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards.” However, there is no further mention of how he plans to achieve this.
Working together against misinformation
This last section serves as Zuckerberg’s “call to arms,” so to speak.
He brings up an excellent point here: misinformation is not a Facebook-specific issue. While Facebook is certainly a hub for these types of activities, the fake news network is not confined there.
Zuckerberg insists that internet services and government organizations need to work together to tackle this problem. After all, most other social networking sites have no authenticity requirement, meaning users don’t have to use their real identities when signing up. This makes it that much harder for these sites to track down spam accounts.
Overall, it seems that the biggest change coming to Facebook is transparency. The fight against misinformation has been going on for years, and not much has changed other than the size of its security staff. However, it appears that we can expect more communication from Zuckerberg concerning security, since he did say that this note was going to be the first in a series. In the meantime, all we can do is wait and see how effective these measures are.