Removed 14 Million Pieces of Terror-Related Content So Far: Facebook
The US Department of Justice recently discovered an alleged IS supporter warning others to be careful when posting propaganda on Facebook.
Facebook has removed more than 14 million "pieces of terrorist content" related to the Islamic State (IS), Al Qaeda and their affiliates from its platform this year through September. In the second quarter (April-June) of 2018, Facebook took action on 9.4 million pieces of terror content, the majority of which was old material surfaced using specialised techniques.
In the third quarter (July-September), overall takedowns of terrorist content declined to 3 million, of which 800,000 pieces of content were old. "In both Q2 and Q3, we found more than 99 per cent of the IS and Al Qaeda content ultimately removed ourselves, before it was reported by anyone in our community," Monika Bickert, Global Head of Policy Management, said in a blog post on Thursday.
"These figures represent significant increases from Q1 2018, when we took action on 1.9 million pieces of content, 640,000 of which was identified using specialized tools to find older content," she added. The US Department of Justice recently discovered an alleged IS supporter warning others to be careful when posting propaganda on Facebook, pointing to the network's Q1 2018 terrorism removal metrics as evidence that pushing propaganda on the social media giant was getting more difficult.
"We now use machine learning to assess Facebook posts that may signal support for IS or Al Qaeda," said Brian Fishman, Head of Counterterrorism Policy at Facebook. "In some cases, we will automatically remove posts when the machine learning tool indicates with very high confidence that the post contains support for terrorism." According to Facebook, the new machine learning tools have reduced the amount of time terrorist content -- reported by its users stays on the platform -- from 43 hours in Q1 2018 to 18 hours in Q3 2018.
"Our experiments to algorithmically identify violating text posts (what we refer to as language understanding) now work across 19 languages. We now also use audio- and text-hashing techniques for detecting terrorist content," Bickert informed. In Q2 2018, the median time on Facebook for newly uploaded content surfaced with its standard tools was about 14 hours -- a significant increase from Q1 2018 -- when the median time was less than 1 minute.
"The increase was prompted by multiple factors, including fixing a bug that prevented us from removing some content that violated our policies, and rolling out new detection and enforcement systems," said Facebook By Q3 2018, the median time on platform decreased to less than two minutes, "illustrating that the new detection systems had matured".
User-reported terror content removals grew to around 16,000 in Q3, from 10,000 in Q1. Even so, Facebook said it removed 99 per cent of terror content "proactively", before any user reported it.