The British government is preparing to publish its plans to clamp down on social networks that distribute and promote harmful content. According to a leaked document, the proposed measures include enabling regulators to hold company executives personally liable in cases of negligence.
The government is expected to release a white paper detailing the plans on Monday; however, the Guardian acquired the paper ahead of its publication. It reportedly describes the recommendations as broad, aimed not only at social networks but also at search engines such as Google, online messaging services, and file hosting websites.
The Guardian reported that the plans are aimed at addressing heightened concern over the distribution of terrorist and child-abuse content, as well as posts and videos promoting self-harm and suicide.
One of the principal motivators in the quest for a regulatory solution is the tragic story of Molly Russell, a 14-year-old who took her own life in 2017. The girl’s father, Ian, blamed Instagram in part after family members discovered disturbing messages relating to suicide on her profile.
Likewise, the terrorist attack on two mosques in Christchurch, New Zealand, last month—in which 50 people were killed and 50 more were injured—has influenced the debate over content moderation. The terrorist, a white supremacist with links to the extreme right, broadcast the shooting over Facebook Live.
Facebook removed around 1.5 million videos of the attack in the first 24 hours, the company said.
The Guardian reports the plan calls for the government to “legislate for a new statutory duty of care, to be policed by an independent regulator and likely to be funded through a levy on media companies.” Enforcement will be overseen, at least initially, by the Office of Communications.
Other proposals, the Guardian reported, include enhancing government powers to direct regulators to address terrorist activity and the sexual exploitation of children online; yearly “transparency reports” from social networks on the prevalence of harmful content on their platforms; and increased cooperation with law enforcement in areas such as “incitement of violence and sale of illegal weapons.”
In a statement to the Guardian, a government spokesperson said: “We will shortly publish a white paper which will set out the responsibilities of online platforms, how these responsibilities should be met and what would happen if they are not. We have heard calls for an internet regulator and to place a statutory ‘duty of care’ on platforms, and have seriously considered all options.”
The UK’s plan comes on the heels of a new Australian law under which technology company executives and other individuals who fail to “expeditiously” remove “abhorrent violent content” from their platforms are subject to potential fines and imprisonment.
The failure of social networks to moderate extremist propaganda, as well as content glorifying self-harm, racism and violence, has been a recent focus of American lawmakers as well, and has prompted calls to reexamine the broad liability protections enjoyed by website operators.
Under Section 230 of the US Communications Decency Act, website owners cannot be held liable under most circumstances for user-generated content. Many experts believe, however, that companies such as YouTube and Facebook have put this safe harbour at risk by failing to crack down on harmful content on their own—and by responding dismissively to criticism of what many perceive as failed or inadequate moderation policies.
Earlier this year, US Senator Ron Wyden, the chief architect of the Section 230 liability shield, called on top websites to address these concerns before it’s too late. He told Politico his message was for companies to use the “sword”; otherwise, he said, “there are going to be people out there who try to take away the shield.” [The Guardian]
Featured image: Justin Sullivan / Getty