Facebook is Letting AI Take More Calls on Which Sensitive Posts Require Human Attention


Facebook says the improved moderation system will continue to have flaws, but that it will enable human moderators to address harmful posts more quickly and effectively.

Facebook has said that it is improving its content moderation tools with enhanced artificial intelligence and machine learning capabilities, which it says will help it combat hate posts and misinformation more effectively. A recent announcement by Ryan Barnes, a product manager on Facebook's community integrity team, revealed that the new system will prioritise content based on a number of key parameters: virality, severity and impact. Until now, posts thought to violate the company's rules were typically flagged to human moderators, both proactively and reactively, and reviewed in roughly chronological order. Facebook now intends to use an improved AI system to queue content that requires moderation, so that the most important posts are reviewed by human moderators first.

To achieve this, Facebook will reportedly use a combination of machine learning algorithms to sort a queue of flagged posts based on the sensitivity of the content. These potentially harmful posts can either be reported by users or, in the proactive mode, detected automatically by Facebook's AI according to a number of pre-defined parameters. The improved system aims to better moderate content with 'real world harm', i.e. posts such as fake propaganda that can have serious implications for the socio-economic fabric. Once such posts are passed from the AI level to human moderators and eventually resolved, the same system will then tackle spam and other less harmful posts. Posts that receive the highest level of priority include those involving terrorism, child exploitation and self-harm. Facebook is building on the AI expertise it already has to do this.
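Facebook has not published how its scores are actually computed, but the queueing idea described above can be sketched as a priority queue in which each flagged post's rank combines severity, virality and impact. The category weights and scoring formula below are purely hypothetical placeholders for illustration:

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical severity weights -- Facebook has not disclosed real values.
SEVERITY = {
    "terrorism": 1.0,
    "child_exploitation": 1.0,
    "self_harm": 0.9,
    "misinformation": 0.6,
    "spam": 0.1,
}

@dataclass(order=True)
class FlaggedPost:
    priority: float                       # lower value = popped first
    post_id: str = field(compare=False)
    category: str = field(compare=False)

def score(category: str, virality: float, impact: float) -> float:
    """Combine severity, virality and impact into one priority score
    (assumed formula, for illustration only)."""
    return SEVERITY.get(category, 0.5) * (1 + virality) * (1 + impact)

def build_queue(posts):
    """Build a max-priority queue of flagged posts.
    heapq is a min-heap, so scores are negated."""
    heap = []
    for post_id, category, virality, impact in posts:
        heapq.heappush(
            heap,
            FlaggedPost(-score(category, virality, impact), post_id, category),
        )
    return heap

# Example queue: human reviewers would pop the highest-priority post first.
flagged = [
    ("p1", "spam", 0.9, 0.1),
    ("p2", "terrorism", 0.4, 0.8),
    ("p3", "misinformation", 0.9, 0.5),
]
queue = build_queue(flagged)
first = heapq.heappop(queue)
```

With these made-up weights, the terrorism post outranks the viral spam and misinformation posts, mirroring the article's point that severity can trump virality in the review order.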

This, though, appears to be a work in progress, and Facebook software engineer Chris Palow notes that the system may still have flaws. Facebook's eventual goal, however, is to instil a level of human-like contextual judgement in its computer recognition models, which has so far been missing from such AI systems. This would help the models make contextual decisions on highly sensitive post moderation, which Facebook hopes will cut down on problematic content. "The system is about marrying AI and human reviewers to make less total mistakes," said Palow.

In the last few years, Facebook has been severely criticised for mishandling hate posts and misinformation across its platforms. During the Covid-19 pandemic, the company faced the challenge of dealing with pandemic-related misinformation. Earlier in October, the social media giant banned QAnon accounts accused of spreading Covid-19-related misinformation from Facebook. The company also took several steps to limit misinformation during the US Presidential election.
