Facebook has been taking a lot of heat for myriad controversies in the UK and US, including its jaw-dropping data privacy screwups and allegations that it is wrecking democracy by helping fuel online echo chambers. But as sceptical as the west may be getting of Mark Zuckerberg and crew, the company’s role in doing damage in other countries is beginning to attract more scrutiny.
According to a Saturday report in the New York Times, people on both sides of the ongoing tensions between Sri Lanka’s majority Sinhalese Buddhist population and its Muslim minority have become intimately familiar with the way Facebook can be used to perpetuate conflict. Sri Lanka’s long-running civil war between the government and Tamil separatists ended in 2009, but in its wake ethnic persecution has grown to the point where the government was recently forced to declare a state of emergency.
The situation is particularly bad because, as the Times detailed, for many Sri Lankans Facebook is for all intents and purposes their primary portal to the internet, and the company has made little effort to act as a responsible gatekeeper:
Time and again, communal hatreds overrun the newsfeed—the primary portal for news and information for many users—unchecked as local media are displaced by Facebook and governments find themselves with little leverage over the company. Some users, energised by hate speech and misinformation, plot real-world attacks.
A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.
In the Facebook-fuelled popular imagination, the Times wrote, one small town named Ampara became the “shadowy epicentre of a Muslim plot” to destroy the Sinhalese population. In one particularly notorious incident, a family of Tamil-speaking Muslims who ran a restaurant there became involved in a dispute where an angry customer believed that they had put sterilisation pills in his food—almost certainly something he had seen in a viral Facebook meme. An angry mob beat the man running the register, Atham-Lebbe Farsith, and then “destroyed the shop and set fire to the local mosque.”
Farsith now has to hide because he is recognisable to many from a misleading Facebook video suggesting he really did put a sterilisation pill in the food.
In other cases, Sinhalese extremists have posted photos of improvised weaponry to Facebook and subsidiaries like WhatsApp before deadly riots.
While Facebook is sometimes compared to an absentee landlord in developing countries where it has displaced traditional media yet has little on-the-ground presence, this passage in particular makes it sound a bit more like a guy selling cigarettes to minors:
But where institutions are weak or undeveloped, Facebook’s newsfeed can inadvertently amplify dangerous tendencies. Designed to maximise user time on site, it promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.
This sort of thing sounds exactly like what’s happened in Myanmar, where activists have accused Facebook of doing nothing to moderate content. As a result, the site has spread disinformation and propaganda, fuelling much of the furore behind the ethnic cleansing of the country’s Rohingya Muslim minority. Social media analyst Victoire Rio told BuzzFeed the company’s response was “grossly insufficient” and “only reinforces our belief that Facebook is not doing anywhere near as much as they should and could do to prevent the spread of hatred in Myanmar.”
In Indonesia, the Times added, rumours of outsiders kidnapping children as part of organ-harvesting rings spread rapidly on Facebook, and as a result, “locals in nine villages lynched outsiders they suspected of coming for their children.” Similar rumours ended up with similar results in India and Mexico.
When Sri Lankan researchers with the Center for Policy Alternatives and government officials asked for help, Facebook directed them to the reporting tool; it’s unclear how many Sinhalese-speaking moderators the site has, but 25 such positions have been unfilled since June 2017.
“You report to Facebook, they do nothing,” Center for Policy Alternatives researcher Amalini De Sayrah told the Times. “There’s incitements to violence against entire communities and Facebook says it doesn’t violate community standards.”
Sometimes, Facebook uses entire countries as testing grounds for changes to its platform—as in 2017, when the company rolled out a newsfeed change in six countries, Sri Lanka among them, that segregated posts from most Facebook Pages, including media organisations, into a separate feed. The result was plummeting engagement, and the Times writes that one possible effect of that experiment was fewer people reading credible news even as attacks on Muslims in Sri Lanka were reaching a fever pitch.
Even turning off Facebook doesn’t work: When the government blocked the site earlier this year as part of emergency measures, the Times wrote, an estimated three million users simply turned to VPNs to regain access.
While Facebook didn’t create ethnic tensions in Sri Lanka, Myanmar, or elsewhere, it’s becoming clear that the company’s role as gatekeeper for millions of internet users comes with responsibilities it is either ill-equipped to handle or simply wants to brush off. Even in the US, where it was founded, the company has been largely opaque about its rules and how it implements them, and it’s repeatedly punted when asked to do more content moderation or flag misinformation on the platform. [New York Times]