Fake Messages and Death: How Social Media Giants Have Turned into Monsters Across Asia
Illustration by Mir Suhail/News18.com
At least 31 people have been lynched in more than 10 states after rumours of child-lifting spread on WhatsApp. Social media and lynching in the country have become inextricably linked: at least 14 lives have been lost in less than three months due to online rumour-mongering alone.
Consider the following:
Five beggars were attacked and injured by an angry mob in Maharashtra’s Dhule, and two more in Tamil Nadu, over rumours of child-lifting. In Assam, two young men were killed by a mob after the prime accused in the case, Alphajoz Timung, instigated people into believing that the two were child-lifters. It later turned out to be a case of settling scores, the police said.
These are the latest in a series of mob attacks and public killings in various parts of the country, in which innocents have been lynched on the mere suspicion of being child-lifters, often on the basis of rumours circulated on social media.
Cyber law expert and advocate Pawan Duggal insists that the lack of political will and of compliance by social media intermediaries is what is causing this mayhem. “Indian laws pre-date social media norms in the country and the lack of specific legal provisions is causing this. The Indian IT Act is not dedicated or sufficient enough to deal with this. While intermediaries like WhatsApp want to take advantage of the Indian markets, they do not want to comply with Indian laws,” Duggal said.
Duggal points out, however, that under Section 87 of the IT Act, the government can frame stricter rules to tackle the problem of fake news. “India has to be stricter with intermediaries and more compliance is required from these platforms. Cyber space hygiene has to be increased along with sensitisation in cyber laws in school curriculums,” he said.
But this disturbing and fatal trend is not exclusive to India.
Activists in Myanmar, for instance, have strongly condemned Facebook’s unregulated content, which led to the circulation of hate speech that in turn contributed to a campaign to drive out Myanmar's Rohingya Muslim minority.
Nearly 6,50,000 Rohingyas fled their native land for neighbouring Bangladesh between August 2017 and March 2018 after being branded "foreigners", "illegal immigrants" and even "terrorists".
These inflammatory messages further fuelled the anti-Rohingya and anti-Muslim sentiments in the state, playing a determining role in the ensuing violence against the Rohingyas.
Several members of civil society organisations wrote to Facebook CEO Mark Zuckerberg in April describing instances when they had flagged viral posts spreading misinformation. The activists asked Facebook to invest in more moderators, engage local groups and display more transparency in dealing with the problem. Facebook has, in turn, increased the number of people in its Community Operations team in Myanmar who have insights into the social, cultural and religious context of the country. Sadly, the move came too late in the day; by the time Facebook woke up, thousands of Rohingyas had already fled their homeland.
Likewise in Sri Lanka, deadly anti-Muslim riots fanned by online hate rocked the nation three months ago. Images of masked men attacking mosques and urging others to do the same were freely circulated on social media sites before riots erupted in Kandy, a major Sri Lankan city. A meme in Sinhala, which had gone viral, called for the death of Muslims.
This led Sri Lankan authorities to block social networking sites after violence that left three people dead and thousands of businesses, homes and mosques razed to the ground. It was only after facing a massive backlash for failing to filter content that Facebook sent two high-level delegations and took on more Sinhalese-language resources to keep tabs on dangerous local content. Clearly, this oversight was what had allowed extremist content to flourish on the platform in the first place.
Thousands of complaints lodged by Facebook users over extremist content were met with silence until people started dying.
The world’s biggest social media and social networking company has since blocked all hate organisations in Sri Lanka, including the BoduBala Sena, a radical Buddhist outfit blamed for instigating the attacks. But despite these efforts, several extremist accounts continue to exist on the platform.
The underlying enabler of this online-inspired violence is the opening up of global web platforms and increasingly accessible internet connectivity worldwide. With cheaper mobile phones flooding the markets, a large part of the population has come to rely on social media for its world view.
It is important to note that while Facebook CEO Zuckerberg was made to testify before the US Senate after the uncovering of a scandal involving a data leak that may have influenced voter choices in the US elections, similar accountability from social media is missing in South Asia.
The starkly different stances taken by the same platform on issues arising from its own irresponsibility point to a certain level of hypocrisy on the part of the social networking site.
The danger fake news poses to polling patterns is likely to be felt in Pakistan’s impending 2018 elections, with political parties across the board asking their social media teams to create fake profiles as part of their poll strategy.
Fake accounts can have a devastating impact given Pakistan’s history of false blasphemy accusations. Back in 2013, for instance, a journalism student was lynched by a mob that suspected him of having uploaded blasphemous content on Facebook.
The lack of a data protection law, coupled with the number of fake accounts operating out of the country, makes for a deadly combination that the nation now has to deal with.
The trend seems to be catching on across South Asia, which is home to some of the largest social media user bases in the world. In Bangladesh, in November 2017, a mob set fire to some 30 houses of the minority Hindu community after a youth published an offensive Facebook post. One person was killed when police opened fire to disperse the crowd that had indulged in arson. Bangladeshi authorities and security personnel controlled the mob in time, before the incident could spiral into ugly inter-community clashes.
Different countries are reacting differently to this newly emerging threat. Uganda this week imposed a new tax on social media users, prompting charges of curtailing free speech. India, too, is gearing up to counter the growing menace.
On Thursday, the Centre asked the states to check mob lynchings fuelled by WhatsApp rumours. In an advisory, the home ministry urged state governments to “keep a watch for early detection of rumours of child lifting and initiate effective measures to counter them”.
But will these measures be effective enough in our times?