Mystery of Twitter's Racism: Viral Experiment Shows AI Chose Ananya Pandey's Face over Beyonce's

Is Twitter racist? A recent experiment seems to prove it is | Image credit: Twitter

In a Twitter experiment, an infrastructure security expert recently demonstrated how Twitter's algorithm preferred white faces by consistently choosing Mitch McConnell's photo over Barack Obama's.

The recent killings of George Floyd, Breonna Taylor, and Ahmaud Arbery have cast fresh light on the systemic racial discrimination that exists in countries like the United States and the United Kingdom. And now, more and more research is coming to light showing that racial discrimination exists not just in human behavior and institutions but also in the digital tools that we create.

In a Twitter experiment, cryptocurrency, programming, and infrastructure security expert Tony Arcieri recently demonstrated how the microblogging and photo-sharing site's algorithm preferred white faces.

Arcieri used two photos, one of former US President Barack Obama and the other of Republican Senator Mitch McConnell of Kentucky, to find out whether the Jack Dorsey-founded platform is racially selective. As it turns out, it is.


Arcieri posted a series of composite images in which he placed the photos of Obama and McConnell in various arrangements (but only in a vertical format, where one photo follows the other).

In each case, the algorithm chose to make McConnell’s face the thumbnail instead of Obama’s. The latter’s face was only revealed once you clicked on the image.

It was only after the colors of both photos were inverted that the algorithm chose the thumbnail face according to the order in which the photos appeared.
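The setup behind these composites is simple to reproduce. Below is a minimal sketch, using the Pillow imaging library, of how two portraits can be stacked into one tall image so that the platform's cropping algorithm must pick a single face for the preview. The gap size and the placeholder solid-color images are illustrative assumptions, not details taken from Arcieri's posts:

```python
from PIL import Image

def stack_vertically(top: Image.Image, bottom: Image.Image,
                     gap: int = 600, background: str = "white") -> Image.Image:
    """Combine two images into one tall canvas separated by blank space,
    mimicking the composites used in the Twitter cropping experiment."""
    width = max(top.width, bottom.width)
    height = top.height + gap + bottom.height
    canvas = Image.new("RGB", (width, height), background)
    # Center each photo horizontally on the shared canvas.
    canvas.paste(top, ((width - top.width) // 2, 0))
    canvas.paste(bottom, ((width - bottom.width) // 2, top.height + gap))
    return canvas

# Solid-color placeholders stand in for the two portraits.
photo_a = Image.new("RGB", (400, 400), "navy")
photo_b = Image.new("RGB", (400, 400), "maroon")

composite = stack_vertically(photo_a, photo_b)
print(composite.size)  # (400, 1400)
```

Swapping the order of the two photos (and, as some users did, inverting their colors with `PIL.ImageOps.invert`) then lets you check whether the chosen thumbnail changes.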

One user tried to reason with the critics, claiming that the bias stemmed from a design flaw rather than racist prejudice. But another Twitter user, Mitch Benn, responded: “Something doesn’t have to be ‘designed to be racist’ to be racist."

The experiment soon went viral, with others trying it out for themselves to check whether the racial bias actually existed. Indian columnist Sahil Rizwan took two photos - one of singer Beyonce and the other of Bollywood actress Ananya Pandey. Twitter's algorithm chose Pandey's face as the thumbnail.

Rizwan's post has since gone viral, coming in the wake of accusations of racism against the makers of Pandey and Ishaan Khatter's upcoming film “Khaali Peeli", which stumbled into controversy over a song initially titled “Beyonce Sharma Jayegi".

This is not the first experiment or study to find systemic racism against non-white people in digital tools such as automated health assessment and facial recognition.

A study published in Science magazine in October 2019 found that a popular algorithm used by healthcare systems in the United States to assess patients' health was far more likely to refer a white person to better, more specialised treatment than a black person with identical afflictions.

Racism in electronic media and digital tools affects people of colour at every level, from healthcare to education to biases in a criminal justice system that favours whites.

Facial recognition technology - used to unlock smartphones or for security clearances at airports - depends on matching a photo of a face against existing photos from police databases, mugshots, passport photos and the like. A 2019 US government study of almost 200 of the top facial recognition algorithms used in the US confirmed that systems which were not “designed to be racist" ended up being so just the same, producing a higher rate of false-positive matches for Asian and African-American faces than for Caucasian faces.

A September 2020 report by a team of researchers at the University of Toronto's Munk School and the International Human Rights Program at the university's Faculty of Law examined the impact of algorithmic policing technologies on people of colour and ethnic minorities such as Asians. The report found that the existing technologies reinforce biases against ethnic minorities and people of colour.