EU Considering Scrapping Voluntary Social Media Code of Conduct After Losing Trust in Facebook

By Tom Pritchard

The EU has been pushing social media companies to tackle hate speech for a long time, especially in the face of companies that seem uninterested in tackling the problem quickly and effectively. Until now everything has been governed by a voluntary code of conduct on how to deal with the problem, but following the loss of trust in Facebook after the Cambridge Analytica scandal, the EU is considering implementing actual legislation, with heavy sanctions for the non-compliant.

Věra Jourová, the EU commissioner for justice and consumers, says the current system relies on trust between the EU and social media companies, with companies removing the offending material and reporting back on their progress. After the story broke about Cambridge Analytica, and Facebook's seemingly cavalier approach to safeguarding user data, she admitted that trust had been eroded.

Jourová confirmed that she will be talking to Facebook COO Sheryl Sandberg later this week with plans to tackle "unanswered questions" regarding the company's past and future conduct. The agenda is mainly focused on ensuring such a scandal doesn't happen again, though she admitted questioning may touch upon hate speech:

“Maybe also we will touch upon the hate speech code of conduct. For me, of course, two things are important: that they apply to the EU laws, and the second field of my interest is where we cooperate on a voluntary basis.

A code of conduct on hate speech was an important piece of common work and I want to continue with it but it is based on trust. It must be there. They have to work on renewing this trust.”

In the past, social media companies have used the free speech argument to avoid legislation governing the content on their platforms, though it hasn't always worked in their favour. Germany, for instance, fines companies up to €50 million (£43 million) if they consistently fail to remove hate speech. Jourová has admitted she's wary of adopting the German approach elsewhere in the EU, because there's a very thin line between removing offending material and censorship.

“We are still working on the possible legal proposals. I still stand on the position that for terrorism, extremism and images of child abuse we should have a more reliable framework that could introduce sanctions for lack of compliance … but the line between prohibiting hate speech and censorship is very thin.”

Nothing has been decided yet, but the Cambridge Analytica scandal has clearly shown that Facebook can't necessarily be trusted to self-regulate. That's going to have lasting consequences for the company, and for other social media services in general: if they can't be trusted to safeguard something as important as user data on their own, how can they be trusted to remove illegal content appropriately? That's something the EU and other world governments are going to have to think very hard about. [The Guardian]
