EU Draft Law Would Fine Companies for Failing to Adequately Deal With Extremist Content

By Tom Pritchard

The EU is always getting into fights with social media companies, especially when the topic of extremist content comes up. Social media companies insist they're putting loads of effort into removing that sort of thing quickly and efficiently, but the EU (along with our own government) doesn't think that's good enough. So now there's a new draft law on the cards that would fine them for not removing flagged content quickly enough.

This news comes courtesy of the Financial Times, which discovered that the EU is in the middle of drafting the law. According to EU security commissioner Julian King, it's largely felt that letting social media companies self-police isn't working out - he noted that the EU can't afford to let itself "relax or become complacent" where the topics of terrorism and extremism are concerned. While he wouldn't reveal any details of the draft law, he said it was "very likely" that it would end up forcing companies to adhere to existing voluntary guidelines - in other words, removing flagged content within an hour.

The draft is set to be published in September, though it's likely to take some time before the European Parliament gets round to voting on it. Until then things are just going to carry on as normal, no doubt with politicians telling companies they're not working hard enough. Meanwhile said companies will go "nuh uh, look" and claim they're actually being amazingly good at removing extremist material. Like YouTube, which claims to be able to flag offending videos before the cyber police can spot them.

These new rules aren't likely to be a major issue for the big sites, particularly the ones that are already under pressure from governments to do something. As Engadget points out, however, this is more likely to be an issue for smaller services that can't afford the manpower necessary to remove large amounts of content at such short notice. While you would hope that the EU would give them some more breathing room, governments have never been particularly forward-thinking where this sort of thing is concerned.

That could push smaller services into implementing features that automatically delete flagged content without a human reviewing any of it, or worse: something like our own government's AI system, which is designed to flag extremist material during the upload process and prevent it from being published at all. That's a slippery slope that we're better off avoiding at all costs. [Financial Times via Engadget]