Tech companies are resorting to increasingly peculiar measures to look like they’re taking action against so-called Islamic State. The latest: Google will show anti-terrorism links to people who search for extremist content.
Google executive Anthony House told The Telegraph that the company is running two pilot programs to introduce counter-programming when people search for extremist content. “One is to make sure these types of views are more discoverable,” he said. “The other is to make sure when people put potentially damaging search terms into our search engine they also find these counter narratives.”
So how will this change how the search engine works? Google isn’t hiding all results related to terrorism. It’ll just front-load some “don’t do violence!” ads.
“We offer Google AdWords Grants to NGOs so that meaningful counterspeech ads can be surfaced in response to search queries like ‘join ISIS’,” a Google spokesperson told Gizmodo, perhaps gunning to win some sort of internal bet about how many buzzwords they could pack into a canned response.
This is a bizarre policy, for a number of reasons. First of all, it’s not clear which search terms will prompt the counter-programming beyond “join ISIS”. Would typing something more ambiguous like “hot ISIS teens” or “Daesh good?” cause the same response? Is this only for people who want to join the Islamic State, or does Google do the same for wannabes of less trendy terrorist organisations?
Also, is Google taking special note of people who are repeatedly searching for tips on how to be terrorists? Is it giving that information to authorities?
I’ve asked Google these questions, but the only reply I got back was that canned jargon-soup.
I also asked Google if it has any mechanisms to distinguish people doing research (LIKE ME!!! I’M NOT A TERRORIST!) and people planning a crime — because I tested this concept using House’s example.
It does not appear that I’m part of the program.
This program is very similar to another Google initiative, which shows suicide hotline results when people Google suicide. (Try it, you’ll see what I mean.) But there’s one crucial difference that makes this version of search-result meddling deeply disturbing. When someone commits suicide, they’re objectively dead. The definition isn’t up for debate. “Radicalism” is a far more nebulous and subjective concept than “being dead.”
The “join ISIS” example Google gave centered on the Islamic State, but what if the company expands this pilot program? What other groups would qualify? Google’s decision to allow organisations to alter ads* on search results to prevent people from aligning themselves with a group — no matter how odious the group is — is blatantly political. It’s a precedent-setting manoeuvre that permits the company to tailor its results based on perceptions of users’ ideologies. It’s a leap that underlines Google’s willingness to guess what its users intend when they search for something, up to and including an intention to commit a crime.
I swear I’m not a terrorist though.
Update: A Google spokesperson provided an updated statement: “What was referenced is a pilot Google AdWords Grants program that’s in the works right now with a handful of eligible non-profit organisations. The program enables NGOs to place counter-radicalization ads against search queries of their choosing.”
*This statement originally implied that Google altered its search results directly. I changed the wording to clarify that Google allowed partner organisations to alter the ads on search results.
Image by AP