
Twitter is Now Banning Posts Dehumanising People on Age, Disabilities and Diseases

Image for representation. (Photo: Reuters)


Tweets that break this rule pertaining to age, disease and/or disability, sent before today, will need to be deleted, but will not directly result in any account suspensions because they were tweeted before the rule was in place, the social media company said.

Twitter has expanded its rules around hate speech to include language that dehumanises people on the basis of their age, disability or disease. Last year, the micro-blogging platform updated its 'Hateful Conduct' policy to address dehumanising speech, starting with one protected category: religious groups.

"Our primary focus is on addressing the risks of offline harm, and research shows that dehumanising language increases that risk. As a result, we expanded our rules against hateful conduct to include language that dehumanises others on the basis of religion. Today, we are further expanding this rule to include language that dehumanizes on the basis of age, disability or disease," Twitter said in a statement on Thursday. The inclusion of disease is significant as the novel coronavirus spreads across the globe and people share all kinds of content, including jokes, videos, memes and GIFs targeting certain communities, that can hurt their sentiments.

"Tweets that break this rule pertaining to age, disease and/or disability, sent before today will need to be deleted, but will not directly result in any account suspensions because they were Tweeted before the rule was in place," the company said. In 2018, Twitter asked for feedback to hear directly from different communities and cultures. Within two weeks, it received more than 8,000 responses from people in more than 30 countries. Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations of when and how context is considered.

Respondents said that "identifiable groups" was too broad, and that they should be allowed to engage with political groups, hate groups and other non-marginalised groups using this type of language. Many people wanted to "call out hate groups in any way, any time, without fear". In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as "kittens" and "monsters". "We are continuing to learn as we expand to additional categories," said Twitter, adding that it has formed a global working group of outside experts to help it determine how to address dehumanising speech around more complex categories like race, ethnicity and national origin.