Alexa, Violate My Privacy

By James Bagshawe

Most readers of this site are probably fairly tech savvy. We are the ones who fix our family’s minor IT troubles. We adjust the settings on the TV that went haywire after that time the cat walked over the remote. Whenever we look at our parents’ computer screen, we die a little on the inside as we take in the wealth of bloatware that has crept into the browser that we cleaned just last month. We roll our eyes and wonder how clueless someone has to be to keep on getting caught out by this nonsense.

And yet we are equally – if not more – guilty. We, the tech savvy ones, like to feel smug about our smart home devices, or that IFTTT command we created. We are on top of this stuff. We know better. So why is it that we voluntarily pull down our protective pants and pay to get shafted right in the privacy?

Ever since the digital revolution went mainstream, we have been utterly hopeless at protecting ourselves from intrusion. If someone offers us a minor convenience gain, we leap at it, regardless of the consequences. Supermarket loyalty cards, social media accounts, smart devices – they all lead to the same end result: voluntarily handing over intimate details about our lives in exchange for a fringe benefit.

AI assistants are the current trend and they are – coincidentally – the worst offenders to date. We pay up to hundreds of pounds to place an always-listening device in our homes, one that can answer trivial questions, play music or provide other superficial services that are readily available elsewhere. Oh, and order more stuff for us, of course. Can’t forget that.

And what do we give up for these dubious gains in our quality of life? We happily share every last detail of our lives with these things, right down to the intimate specifics we wouldn’t want our own mothers to know. We do this knowing that this wealth of information will be used to build up an exhaustive profile of our lives; a profile that is currently being used to try and sell us more stuff (Editor's Note: Amazon denies this happens with Alexa, but obviously it's not the only company running an AI assistant). We are inexplicably fine with this. We are OK with paying to wheel the mother of all trojan horses into our kitchens because it can look up recipes for us when we ask it to. We know what’s going on here, we just don’t care.

This attitude is utterly baffling. This data is kept and, we are assured, is only used in relation to keyword triggers (that’s the ‘Alexa’ or ‘Hey Google’ voice activation commands). Yet in 2017 Amazon handed over data from a user’s Echo to police investigating him as a suspect in a murder case. While Amazon says the data was only handed over once the defendant gave his permission, and that users can delete their stored recordings any time they like, the fact is the data remains theoretically accessible. But then again, how can anyone have sympathy for the suspect, given that he voluntarily brought this device into his life? The line “I thought my data would only be used for X reason” is small comfort when data remains eminently reachable, breachable and exploitable.

Many readers will remember 2014’s ‘The Fappening’, the mass compromise of celebrity iCloud accounts that led to the leaking of private photos uploaded to the service. This led to the prosecution of five people, Apple adding a few more security notifications for cloud-stored data and… no reduction in iPhone use amongst celebrities. Or how about MyHeritage – the DNA analysis website – inadvertently leaking the data of 92 million users? How many of those were older users who never heard of the leak and so never changed their passwords? And how was a file of email addresses and hashed passwords accessible in the first place?

The point is that data is never secure, and that data provided for one reason may come to be used for quite another; one we may never have envisaged at the time. The unofficial mantra of online life is that once it’s online, it’s online forever. We seem to be finally wising up about things like what we post to our social media accounts, but in the meantime we are more than happy for AI assistants to enter our lives and to repeat the cycle of idiocy anew. It’s depressing.

It stings particularly hard given there is another way. Not every AI assistant is a privacy train wreck. Mycroft is an open-source AI assistant that – praise be – does not record you all the time. In fact, it does not record you at all. Granted, the technology is not as well developed as Amazon’s or Google’s. The voices aren’t as polished. The list of skills is smaller. But, and bear with me on this, it is not building up an exhaustive profile that can be used to exploit you financially. I’ll take the rough edges please, Bob, in exchange for personal liberty.

There is no justification for our perpetually weak-willed stance on privacy beyond the obvious: first of all, we are suckers for anything we are told we need in our lives. If the cost isn’t monetary, we don’t see it as a barrier. Time and again we ignore dire warnings in order to get something unimportant that we think we want (obligatory ‘The Simpsons did it’ reference).

Secondly, we are not adequately protected by the law. The law moves slowly while technology moves at the speed of sound. Lawmakers are old and don’t understand the implications of new technology. Corporations lie about how data is used, or shift the goalposts in the T&Cs once they know no-one cares anymore, and politicians lack the bottle to tackle giant tech companies. It’s a perfect storm of vulnerability for the end user.

Finally, we delude ourselves that we are in control of the technologies we bring into our lives. We’re not. Not by a long shot. I lost the ability to fully control my technology when I upgraded my old 486 PC. At least I’m aware of that. You too, reader, should know your limits. Your privacy is precious. Have a care.