
The Government Has an AI Designed to Identify Extremist Material Online

By Tom Pritchard

The government has made a big deal about identifying extremist material online, and seems to be fighting an uphill battle to get online communities to actually give a fuck. Now it's taking matters into its own hands, with an AI designed to identify Islamic State propaganda with a 99.995 per cent success rate.

The tech uses machine learning and actually analyses the video content during the upload process, before it even gets published online. That way it can be dealt with immediately, rather than sitting around for 36 hours while the social networks sit on their hands pretending it's not there. It's also a huge improvement on the two-hour takedown window the government demanded last year, with the Home Office reporting that the bot was able to detect 94 per cent of IS material with 99.995 per cent accuracy.
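The Home Office hasn't defined exactly what those two figures measure, but a plausible reading is that 94 per cent is the detection rate on genuine IS videos, while 99.995 per cent accuracy means only 0.005 per cent of innocent videos get wrongly flagged. Here's a rough back-of-the-envelope sketch in Python (every number below is hypothetical, not Home Office data) showing why both figures matter:

```python
# Back-of-the-envelope reading of the reported figures.
# Assumptions (the Home Office hasn't defined its metrics):
#   - 94 per cent is the detection rate on genuine IS material
#   - 99.995 per cent "accuracy" means 0.005 per cent of innocent
#     videos are wrongly flagged
detection_rate = 0.94
false_positive_rate = 1 - 0.99995   # 0.005 per cent

uploads = 1_000_000    # hypothetical number of videos scanned
is_share = 0.0001      # hypothetical: 0.01 per cent are IS propaganda

is_videos = uploads * is_share               # 100 IS videos
benign_videos = uploads - is_videos          # 999,900 innocent videos

caught = is_videos * detection_rate                    # ~94 caught
missed = is_videos - caught                            # ~6 slip through
wrongly_flagged = benign_videos * false_positive_rate  # ~50 false alarms

print(f"caught: {caught:.0f}, missed: {missed:.0f}, "
      f"wrongly flagged: {wrongly_flagged:.0f}")
```

It's worth noting that 0.005 per cent of a million is about 50, which lines up neatly with the Home Office's claim (below) that roughly 50 videos per million need a human to look at them.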

To help stem the flow of the material, the government will be making this tech available to internet companies, including the smaller sites that have seen big increases in the amount of extremist material published on them.

Home Secretary Amber Rudd says the government isn't ready to force companies to use the tech, but isn't afraid to take legislative action if it needs to. While that seems like a noble goal in this case, things could get problematic if governments are able to force social media companies to remove whatever content they don't like. That's not a defence of extremist content, but it's the kind of power a government like China's might use to strengthen its online censorship.

The Open Rights Group has also criticised the automated nature of AI takedowns and the lack of legal accountability around them. Writing in a blog post, campaigner Jim Killock expressed concerns about the unintended consequences: not only does it place the definition of what is and isn't legal into the hands of private companies, but all systems make mistakes and need to be held accountable for them. That includes minimising the number of mistakes a system can make, and fixing the ones that do slip through.

The Home Office hasn't revealed how the AI assesses what is and isn't extremist content, and while that could prevent producers of extremist content from finding ways to cheat the system, it also means it can't be scrutinised by people who know what they're talking about. If I've learnt anything about the government, it's that it's hopelessly naive about how technology works. The Home Office has claimed that out of every million videos scanned, only 50 need to be reviewed by a real person. That's a small margin of error, but when you consider the billions of people who use the internet, those numbers could add up.
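To put that 50-per-million figure in context, here's a quick sketch scaling it up to bigger, entirely hypothetical daily upload volumes:

```python
# Scaling the Home Office's "50 per million need human review" figure.
# The upload volumes below are hypothetical, purely for illustration.
reviews_per_million = 50

for daily_uploads in (1_000_000, 100_000_000, 1_000_000_000):
    reviews = daily_uploads // 1_000_000 * reviews_per_million
    print(f"{daily_uploads:>13,} uploads/day -> {reviews:>6,} human reviews/day")
```

Fifty reviews a day is trivial; fifty thousand is a moderation department.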

You also have to consider the fact that Islamic State propaganda isn't the only extremist content on the web. How does the accuracy change when you factor in content from other terrorist groups? And what about content from hate groups and other kinds of non-religious extremist organisations? Who even decides the finer points of what extremism is? Those are all questions you need to ask when this sort of thing is suggested, especially since the definition may change based on who's in power in any given country.

Let's just hope this AI has a better time than the one with a desert fetish the Met Police were using to try and identify porn. [TechCrunch | BBC News]

