Facebook has said it now removes 99 percent of content related to the militant groups Islamic State and al Qaeda before users report it, as the company prepares for a meeting with European authorities on tackling extremist content online.
In a blog post, Monika Bickert, head of global policy management, and Brian Fishman, head of counterterrorism policy at the social media firm, noted that 83 percent of “terror content” is removed within one hour of being uploaded.
According to Facebook, this is done primarily through automated systems such as photo and video matching and text-based machine learning.
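Photo and video matching of this kind generally works by reducing known extremist media to compact perceptual hashes and comparing new uploads against that set, so that even slightly altered re-uploads are caught. The sketch below illustrates the general idea with a simplified difference hash over small grayscale pixel grids; the function names, threshold and data are hypothetical and are not Facebook's actual system.

```python
# Illustrative sketch of hash-based media matching (not Facebook's system):
# known images are stored as perceptual hashes, and uploads are flagged
# when their hash lies within a small Hamming distance of a known hash.

def dhash(pixels):
    """Difference hash: one bit per adjacent-pixel comparison, row by row.
    `pixels` is a small grayscale grid (list of equal-length rows)."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(upload_hash, known_hashes, max_distance=2):
    """Flag an upload if its hash is near any known banned-content hash."""
    return any(hamming(upload_hash, h) <= max_distance for h in known_hashes)

# A banned image's hash is computed and stored once...
banned = dhash([[10, 40, 30], [90, 20, 60]])
# ...and a slightly altered re-upload (e.g. a brightness shift) still
# matches, because the pattern of adjacent-pixel comparisons is unchanged.
reupload = dhash([[12, 42, 33], [91, 22, 61]])
print(matches_known(reupload, {banned}))  # True
```

Because matching operates on stored hashes rather than the media itself, a single takedown decision can block re-uploads at scale without re-reviewing each copy.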
The world’s largest social media network, with 2.1 billion users, has faced pressure from US and European governments to tackle extremist content on its platform more effectively.
“Today, 99 per cent of the IS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site,” said the blog post.
Deploying AI for counterterrorism is not as simple as flipping a switch, the company noted.
A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda.
The social media giant is currently focusing these techniques on content from Islamic State (IS), al Qaeda and their affiliates.
“It is still early, but the results are promising, and we are hopeful that AI will become a more important tool in the arsenal of protection and safety on the Internet and on Facebook,” Bickert and Fishman wrote.
In September, Facebook admitted that Russian actors manipulated its platform to sway American political discourse.
Twitter and YouTube, a unit of Alphabet Inc.’s Google, have also been under fire over the past year for allowing misinformation, hateful speech and terrorist content to spread across their platforms.
The blog post comes a week before Facebook and other social media companies meet with European Union governments and the EU executive to discuss how to remove extremist content and hate speech online.
The European Commission in September told social media firms to find ways to remove such content faster, including through automatic detection technologies, or face possible legislation forcing them to do so.