According to the plea, false information disparaging the Rohingya community is pervasive on Facebook, and the social media site purposefully does nothing to stop users from sharing it.
In a Public Interest Litigation (PIL) filed before the Delhi High Court, two Rohingya refugees are seeking directions to Facebook (now Meta) to stop the circulation of offensive and divisive content directed at the Rohingya community on its platform.
Facebook has also been urged to stop using virality and ranking algorithms that amplify hate speech and incitement to violence against minority communities.
The High Court is expected to hear the petition later this month.
Kawsar Mohammed and Mohammed Hamim are the petitioners. After fleeing persecution in Myanmar, they arrived in India in March 2022 and July 2018, respectively.
In their plea, filed through Advocate Kawalpreet Kaur, they claim there is evidence that Facebook deliberately fails to act against posts originating in India that spread misinformation and harmful content targeting Rohingya refugees.
The plea emphasized that, in fact, Facebook's algorithms actively promote such content.
It claimed that Facebook was widely used to dehumanize the Rohingya community in Myanmar, and that as the 2024 general elections approach, there is a significant risk that harmful content and misinformation will spread widely and spark violence against the community.
“The presence of Rohingya refugees in India is a highly politicized matter, and as such they are disproportionately targeted with harmful content on Facebook painting the group as a threat to India, often referring to the group as ‘terrorists’, ‘infiltrators’ and exaggerating the numbers of Rohingya that have fled to India,” the plea contended.
It cited a 2019 Equality Labs study on hate speech on Facebook in India, which found that although the Rohingya made up only 0.02% of India's Muslim population, 6% of the Islamophobic posts were specifically anti-Rohingya.
According to the plea, the language used on Facebook to disparage the Rohingya community in India bears notable parallels to that used against the community in Myanmar.
It claimed that Facebook's failure to proactively suppress hateful content and to act against those who spread it endangers the lives of the Rohingya people, thereby violating their constitutionally guaranteed right to life.
The petitioners also claimed that Facebook is in breach of Rule 3 of the Information Technology (Intermediaries Guidelines) Rules 2011, which addresses the due diligence that an intermediary must follow when performing its duties, and Section 79(3) of the Information Technology Act.
Hamim and Mohammed therefore seek directions requiring Meta to act against accounts that spread hate towards the Rohingya community and to be transparent about how it applies its content moderation guidelines to user-flagged content.
“It [Meta] must also provide an India-specific report on hate speech content moderation. This report must clearly identify the content moderation decision trajectories where content is removed and where content is not removed. This report should also include specific numbers on how many users flagged reports were received, what part of user flagged reports were removed, how many of these were appealed and what amount of content was removed during the process of appeal and under what categories,” the plea demanded.