When I watched the trailer for Ex Machina, I was excited. It wasn’t just the uncanny and attractive robot Ava, either. There were androids, AI, Turing tests! This looked like the sci-fi movie of my dreams. But when I saw Ex Machina recently, I was terrified. Because it told the truth about what AI might become.
From this point forward, this post is chock-full of spoilers. You have been warned!
I should’ve expected as much. After all, the trailer hardly hides the fact that something goes very wrong in Ex Machina’s isolated artificial intelligence lab. It’s also the kind of plot twist we’re primed for in a world where some of the smartest people on the planet are warning us that computer scientists’ grand ambition to build a true AI is just plain dangerous. Of course the robot was going to turn into a psycho killer, leaving broken mirrors smeared with blood and bodies on the floor. (Sorry, I told you there would be spoilers.)
The day after I saw Ex Machina, I took a trip to Carnegie Mellon University, where I met with a handful of robotics professors. The trip had nothing to do with the movie, but I ended up spending the next few days letting snake robots climb up my leg, watching soft robotic arms wave at me, and letting autonomous robots give me tours of campus. The whole time, I was waiting for one of these machines to pull out a kitchen knife. And yet, the one common thread that tied Carnegie Mellon’s diverse robotics projects together was the pursuit of safety.
Maybe robots aren’t the most disturbing part of the AI dream. What left me speechless after Ex Machina wasn’t so much the ghastly robot-fuelled violence. It was the sheer horror of feeling human, confronted by a ghoul of my own making. But in the movie — and perhaps in real life — the AI’s not the ghoul. The ghoul is data collection.
Let me back up a second. Ex Machina is premised on a very believable scenario. A spindly young coder (Domhnall Gleeson) wins a contest to spend a week with his company’s founder, a reclusive and maybe mad billionaire (Oscar Isaac) who got rich by building a sprawling search engine called Blue Book. Once he arrives, the young coder meets and sort of falls in love with the alluring android Ava (Alicia Vikander).
Sound familiar? Well, the Google and Facebook references are obvious. The Wittgenstein reference is entirely unexpected, but thankfully, the film explains it: The Blue and Brown Books are notebooks the Viennese philosopher dictated to his Cambridge students in the mid-1930s. This nod to old boy Ludwig is a gentle reminder that this film is dripping with philosophical references. Which makes good sense in a movie about the definition of consciousness.
A search engine is the perfect starting point for an artificial intelligence, since it’s based on the algorithm-fuelled organisation of real-time information. Google already has some pretty fucking impressive AI software. To build a machine that thinks like a human, however, you have to comprehend how humans think. And quite conveniently, the collective search history of the entire world is a great window into the human psyche.
This still isn’t the hard part, though. As the maybe mad billionaire character points out, the traditional Turing test isn’t even that difficult for existing AIs. Robots are already writing news articles and taking care of kids, so the challenge isn’t specifically language-related. The challenge is making a robot look and act and move like a human. Human behaviour, expressions, and emotion are beyond nuanced. And these things certainly aren’t easily communicated through search terms.
So what does our maybe mad billionaire do? He collects all the data. In the film, Oscar Isaac’s character describes an alarmingly believable scenario where the government allows him to tap into the cameras and microphones of every computer and smartphone on the planet — to collect not only search data but also corresponding facial expressions and conversations. This sounds crazy, but it’s actually entirely possible. Thanks to the Snowden leaks, we even know that governments have already built the technology to make it happen. Who knows if they’ve used it.
You can probably see where this is heading. The spindly coder didn’t win a contest to hang out with Willy Wonka because he was lucky. He was selected because his search history revealed that he was just the right candidate—with a “good moral compass”—for a new kind of Turing test.
This test isn’t just concerned with detecting the difference between a robot and a human. It’s focused on determining whether or not an AI can trick a human into doing her bidding. In the case of Ava, that means orchestrating her escape from the probably mad billionaire’s glass prison. And this requires her to convince the spindly young coder to care more about her than about his fellow human.
Well, Ava passes the test brilliantly. The outcome is deeply unfortunate for all humans involved.
All too appropriately, the spindly coder with a good moral compass suffers what’s possibly the worst outcome. He doesn’t get to run off with his would-be robo-lover, and he’s also seemingly left to rot in the impossibly remote research facility. Why? Because he willingly surrendered vast quantities of data about himself to a search engine (including his porn preferences) and the robot used that knowledge to manipulate him. Pretty fucked up, huh?
Hey, guess what: That’s what Google does every day — minus the violent robot part, for now. Every day, you surrender details about your wants, hopes, and needs, so that Google can serve you more relevant advertising. Now imagine a near future where that data isn’t just used to power smarter banner ads, but actually to feed an artificially intelligent being all of the information it would need to defeat you. Or worse yet, robots could leverage the data against humans in some AI-jujitsu manoeuvre that leads to us dumb humans destroying ourselves.
As I walked out of the cinema, I wondered how many robots knew where I was. Specifically, I wondered how many Google robots were tracking me. I’d coordinated with the movie’s publicist on Gmail. I’d Google Mapped my way to the cinema. I’d even Google-searched some details about the director and actors. I’d done most of this on my Google-powered Android phone.
We’re used to this by now. Google is a huge company that builds all kinds of useful tools, many of which cost nothing to use—nothing but your willingness to surrender as much data as possible. I don’t remember the last day that I didn’t shovel an immeasurable amount of personal data Google’s way. What’s more, I don’t remember the last time I withheld data for any reason at all.
I’m not really worried that a company like Google is going to build an artificially intelligent robot that will destroy humanity. Stephen Hawking is! As are a lot of other very smart humans. I’ve danced around this debate for a long time, partly because I’m optimistic about technology and partly because I’m self-conscious about the dangers of being a Luddite. That said, I had trouble sleeping after seeing Ex Machina.
I’m not afraid of robots. I’m terrified of data, and how it could be used against us in ways we haven’t even imagined.
All images via A24