Facebook has announced in a blog post that its AI is getting better at detecting terrorist-related content and, subsequently, removing it. Personally, I’ve never had the experience of scrolling through a Facebook feed and stumbling upon posts promoting terrorist groups like ISIS and Al Qaeda, but I can imagine just how disturbing it would be to come across them.

Over the years, Facebook has fielded questions over just how responsible it is for curbing what is posted on its platform, but now it appears to have finally done something of note to address the problem.
Historically, Facebook has relied on its users to report terrorism-promoting content. However, it is now increasingly using AI to identify posts and remove them within the hour – something that will please critics who believe that the social media giant isn’t doing enough to fight the spread of terrorist-related material online.
One such critic, Hans-Georg Maassen, head of Germany’s domestic intelligence agency, blamed Facebook for the spread of hateful posts and fake news, accusing the company of being a “fifth estate that makes claims, but up until now” has not wanted “to take any social responsibility.”
Facebook’s blog post appears to want to shift the narrative and to show sceptical countries like Germany that Facebook is, in fact, doing something.
“Today 99.9% of the ISIS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us and, in some cases, before it goes live on the site,” Monika Bickert, Head of Global Policy Management, and Brian Fishman, Head of Counterterrorism Policy, wrote in the blog post. “We do this primarily through the use of automated systems like photo and video matching and text-based machine learning. Once we are aware of a piece of terror content, we remove 83% of subsequently uploaded copies within one hour of upload.”
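To get a feel for what “photo and video matching” means in practice, here is a minimal, purely illustrative sketch of hash-based matching: once a piece of content has been identified and removed, its fingerprint can be stored, and any subsequently uploaded copy with the same fingerprint is caught instantly. This is not Facebook’s actual implementation; production systems use perceptual hashes that survive re-encoding and cropping, whereas this toy uses an exact cryptographic hash.

```python
import hashlib

# Illustrative only: real matching systems use perceptual hashing
# (robust to re-encoding, resizing and cropping), not exact hashes.
def media_fingerprint(data: bytes) -> str:
    """Return a hex digest serving as a fingerprint for uploaded media."""
    return hashlib.sha256(data).hexdigest()

def is_known_terror_content(data: bytes, blocklist: set) -> bool:
    """Check an upload's fingerprint against a set of known-bad hashes."""
    return media_fingerprint(data) in blocklist

# Hypothetical blocklist of fingerprints, e.g. of the kind exchanged
# through the industry's shared hash database.
blocklist = {media_fingerprint(b"previously-removed propaganda video")}

print(is_known_terror_content(b"previously-removed propaganda video", blocklist))  # True
print(is_known_terror_content(b"holiday photos", blocklist))  # False
```

The appeal of this approach is speed: a hash lookup takes microseconds, which is how re-uploads of already-known material can be removed within the hour, or blocked before they ever go live.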
Currently, the algorithm only targets content from Al Qaeda and ISIS, as the AI has been trained to learn sentence structures and linguistic patterns purely from these two organisations. While that’s still progress, it does raise questions about how content from other regional extremist organisations, such as the EDL and Britain First in the UK, will be handled.
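The limitation makes sense if you consider how text classifiers are built: they can only recognise the kind of language they were trained on. The toy naive Bayes classifier below, with entirely invented placeholder training phrases, shows the principle; a model trained only on ISIS and Al Qaeda material has simply never seen the vocabulary of other groups.

```python
from collections import Counter
import math

# Toy sketch of text-based detection: a naive Bayes classifier over
# word counts. Training phrases and labels are invented placeholders.
def train(samples):
    """samples: list of (text, label) pairs -> per-label word counts."""
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(text, counts):
    """Pick the label whose word distribution best explains the text
    (Laplace-smoothed log-likelihood)."""
    best_label, best_score = None, float("-inf")
    for label, words in counts.items():
        total, vocab = sum(words.values()), len(words)
        score = sum(
            math.log((words[w] + 1) / (total + vocab))
            for w in text.lower().split()
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train([
    ("join the caliphate fight", "extremist"),
    ("support the cause martyrdom", "extremist"),
    ("lovely birthday party photos", "benign"),
    ("great holiday with the family", "benign"),
])
print(classify("join the fight", model))        # extremist
print(classify("family holiday photos", model)) # benign
```

A phrase using words the “extremist” class has never seen scores poorly against it, which is why extending coverage to new organisations requires new, human-flagged training examples rather than just flipping a switch.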
Facebook claims its AI can’t simply be pointed at these regional organisations. Before the system can recognise their posts, human experts from Facebook’s collaborative forum will need to identify extremist material from those groups and flag it.
Back in June, Facebook, Microsoft, Twitter and YouTube formed the Global Internet Forum to Counter Terrorism, a collaboration between the social network giants aimed at halting the spread of terrorism and extremism on their platforms. Through this group, Facebook and its collaborators are looking to spot changes in how terrorist organisations use social media to spread propaganda, and then act upon those findings accordingly.
Only time will tell if any of this will actually curb the spread of terror-related material on these platforms. But let’s hope Facebook and its partners continue to accept their duty to foster an online environment free of hateful posts and extremist content.