Facebook Dithered in Curbing Divisive User Content in India

Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as the company's own employees questioned its motivations and interests.

From research as recent as March of this year to company memos that date back to 2019, the internal company documents on India highlight Facebook's constant struggles to quash abusive content on its platforms in the world's biggest democracy and the company's largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.

The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address these issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, the BJP, are involved.

Modi has been credited with leveraging the platform to his party's advantage during elections, and reporting by The Wall Street Journal last year raised questions about whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at Facebook headquarters.

According to the documents, Facebook saw India as one of the most "at risk countries" in the world and identified both Hindi and Bengali as priorities for "automation on violating hostile speech." Yet Facebook didn't have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.

In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali” which has “reduced the amount of hate speech that people see by half” in 2021. 

“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said. 

This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.

In February 2019, ahead of a general election and with concerns about misinformation running high, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow only the pages and groups recommended by the platform itself.

The employee created a test user account and kept it live for three weeks, during which an extraordinary event shook India: a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country to the brink of war with rival Pakistan.

In a report, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages,” the employee, whose name is redacted, said they were shocked by the content flooding the news feed, which “has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”

“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.

Other research files on misinformation in India highlight just how massive a problem it is for the platform.

In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content. 

In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags weren’t clear enough for users, underscoring that it needed to do more to stem hate speech and fake news. Users told researchers that “clearly labeling information would make their lives easier.”

Alongside misinformation, the leaked documents reveal another problem dogging Facebook in India: anti-Muslim propaganda, especially by Hindu-hardline groups.

India is Facebook's largest market, with over 340 million users, and nearly 400 million Indians also use the company's messaging service WhatsApp. Both services have been accused of being vehicles to spread hate speech and fake news against minorities.

In February 2020, these tensions played out on Facebook when a politician from Modi's party uploaded a video to the platform in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn't. Violent riots erupted within hours, killing 53 people, most of them Muslims. Facebook removed the video only after it had drawn thousands of views and shares.

That April, misinformation targeting Muslims again went viral on the platform as the hashtag "Coronajihad" flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days before the company removed it.

The misinformation triggered a wave of violence, business boycotts and hate speech toward Muslims.

Criticisms of Facebook’s handling of such content were amplified in August of last year when The Wall Street Journal published a series of stories detailing how the company had internally debated whether to classify a Hindu hard-line lawmaker close to Modi’s party as a “dangerous individual” — a classification that would ban him from the platform — after a series of anti-Muslim posts from his account.

The documents also show how the company’s South Asia policy head herself had shared what many felt were Islamophobic posts on her personal Facebook profile. 

Months later, the official quit the company. Facebook also removed the lawmaker from the platform, but documents show many company employees felt the platform had mishandled the situation, accusing it of selective bias to avoid being in the crosshairs of the Indian government.

As recently as March of this year, the company was internally debating whether it could control the "fear mongering, anti-Muslim narratives" pushed on its platform by Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group of which Modi is also a member.

In one document titled “Lotus Mahal,” the company noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content.

The research found that much of this content was "never flagged or actioned" because Facebook lacked "classifiers" and "moderators" in Hindi and Bengali.

Facebook said it added hate speech classifiers for Hindi starting in 2018 and for Bengali in 2020.
