Facebook will reportedly study whether its AI- and ML-driven neural networks promote racial bias, and Instagram will create an equity team for the same purpose.
The Deepfake Detection Challenge drew more than 2,000 participants, who trained and tested more than 35,000 models using a unique new data set created for the challenge.
Facebook will pay its users $5 for their voice recordings, aiming to improve its AI transcription technology.
Facebook has an around-the-clock team of 25 people and trained artificial-intelligence tools to detect non-consensual intimate images and videos.
California's Attorney General, Xavier Becerra, has disclosed some details of the probe, which has been ongoing for over a year now.
It is not yet clear exactly how Facebook intends to use a voice assistant, but the move is bound to raise further privacy and data-collection questions about the company.
The new technology is in addition to a pilot program that required trained representatives to review offending images.
Facebook's chief AI scientist, Yann LeCun, indicated that the social networking giant is already developing its own custom application-specific integrated circuit (ASIC) chips to support its AI software.
Facebook AI researchers seek to understand and develop systems with human-level intelligence by advancing the longer-term academic problems surrounding AI.
The new test is significant given that Facebook last year had to shut down one of its AI systems after chatbots started communicating in their own language, defying the scripts provided.
The algorithm was trained on Skype calls so that it could learn, and then mimic, how humans adjust their expressions in response to each other, the New Scientist web portal reported.
Facebook uses an image-matching technique to identify and remove terror-related content.
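Facebook has not published the details of its matching system, but image matching of this kind is commonly done with perceptual hashing: each image is reduced to a short fingerprint, and new uploads are compared against fingerprints of known banned images. The sketch below is a minimal, hypothetical illustration of one such scheme (an average hash over an 8x8 grayscale thumbnail with a Hamming-distance threshold); the function names and threshold are assumptions, not Facebook's actual implementation.

```python
# Hypothetical sketch of perceptual-hash image matching, the general class of
# technique used to re-identify known images. Not Facebook's actual system.

def average_hash(pixels):
    """Hash an 8x8 grayscale thumbnail (list of 64 ints, 0-255) to a 64-bit int.
    Each bit records whether that pixel is at or above the mean brightness."""
    avg = sum(pixels) / len(pixels)
    h = 0
    for p in pixels:
        h = (h << 1) | (1 if p >= avg else 0)
    return h

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches(h1, h2, threshold=5):
    """Treat two images as the same if their hashes differ in only a few bits
    (the threshold value here is an illustrative assumption)."""
    return hamming(h1, h2) <= threshold

# Toy example: a gradient image and a slightly brightened copy hash alike,
# so the copy would still be flagged as a match against the original.
img = [(i * 4) % 256 for i in range(64)]
near_copy = [min(255, p + 3) for p in img]
h1, h2 = average_hash(img), average_hash(near_copy)
```

The design point is robustness: unlike a cryptographic hash, small edits (re-encoding, mild brightness changes) barely move the fingerprint, so near-duplicates of a known image can still be caught.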