Late 2016 has been a great time to be a fan of TV shows about artificial intelligence. While HBO was putting out the glossy robo-violence of Westworld, Channel 4 was giving us season two of the far more low-key Humans, which comes to an end on Sunday. Both focus on human-looking robots created to serve and entertain humanity, and in both cases things go predictably awry, leaving humans and hosts/synths alike facing big questions about what it means to be conscious.
While Westworld is packed full of big-budget action, twists, orgies and theory-fodder, Humans is the cheaper – and quietly thrilling – show. It gives us the breathing space to consider the big questions. So if you’re looking for a show that really lets you dig around in the grubby pit of human/AI relations, Humans is it.
Where would conscious AI first develop?
In Humans, conscious AI first emerges in robot slaves – essentially an upgraded Siri with a body – whereas in Westworld robots are developed for entertainment, to give humans a place where they can blow off steam.
While the entertainment industry does have the money and the constant quest for novelty that might lead it to build a robot theme park, Humans seems to be more on the ball here. Robots seem likely to be rolled into production by the tech industry as one big up-front expense, to save on the massive future cost of hiring a human labour force.
Season two of Humans also introduces Carrie-Anne Moss as Dr Athena Morrow, a genius scientist building an AI version of her dying daughter. As with Dr Elster before her, who created his first conscious synth to replace his dead wife, the show suggests – believably – that the big leaps in technology will come from single remarkable scientists driven to extremes.
Depressingly, and almost definitely truthfully, both shows decide that the main drive to create really human-looking robots will be so that humans can have sex with them. Because we are a weak, craven, horny species.
What effect would AI have on the world?
Westworld is currently interested only in the microcosm of the theme park. Given Delos’ quest to get hold of Ford’s hosts, we can assume that functioning robots don’t exist outside the park. But, as Dolores points out, the world outside Westworld can’t be that great – because if it were, why would so many people be clamouring to come to the park?
Humans is pretty much exclusively interested in how robots and AI would affect the world. We’ve seen synth labour protests, people losing jobs and, perhaps most interestingly, the development of a new mental illness, whereby vulnerable people adopt a synth persona to shield themselves from emotional harm.
Both shows touch upon how humans are inclined to belittle, abuse and reduce AI (both conscious and unconscious). They either view robots as a threat, or they abuse them so as to remind themselves that they are not real. But by being so abusive to something that looks so much like a real human, both shows depict a human race that is becoming desensitised and cruel.
How do AI synths/hosts feel about their consciousness?
The core conscious synths in Humans have been conscious since long before we met them, and they’re secure in their consciousness. Series two, however, has introduced newly conscious synths, plucked from their robotic lives by a sudden software update. Most seem to end up on the murderous end of the AI scale; some essentially commit suicide, choosing to return to a life of mindless duty because they can’t face the number of choices that consciousness gives them; and the ones who meet Max turn out fine, because Max is an AI prince among men.
Over in Westworld, far more attention is given to the struggle for consciousness. However, Westworld is tricksier and by the end of season one we’re still not entirely sure whether any of the characters are truly conscious. Maeve seems to be self-aware, but her rebellion was programmed. Dolores is hearing her own internal voice – supposedly a mark of consciousness – but who’s to say that’s not programmed too? Bernard has no idea whether or not he’s conscious, and neither do we.
In Westworld, consciousness isn’t a thing to acquire, it is the very struggle to acquire it. Which, actually, is pretty close to the mark.
How do humans feel about AI?
This is where Humans begins to take big strides away from its rival. In Westworld, very few of the human characters consider the hosts to be conscious. Arnold and, later, Ford believe that the hosts will surpass humanity, but that doesn’t mean that they consider the hosts to be in any way human. They consider them to be better, or more advanced, than humans. A separate entity altogether.
Meanwhile, on Humans, the human race isn’t exactly covering itself in glory. Even as conscious synths begin to emerge, the human response is to experiment on and destroy them. Niska, whose attempt to be tried as human failed dismally, puts it best when she points out that her creator, who knew exactly how conscious and alive she was, could still abuse her as if she was nothing more than a machine when it suited him. If her own creator could do that, then why would the rest of the human race ever accept her as wholly conscious?
While some of the individual human characters, like the lovely Pete, are wonderful examples of humanity, accepting conscious synths completely, the human race as a whole is fearful and defensive. The Turing test might work in theory, but the big question Humans raises is: would humans ever really want to believe that an AI had passed it? Or would it be easier to pretend it hadn’t, and carry on as normal, simply refusing to share their world with a new intelligent life form?
Humans is on Channel 4 at 9pm on Sunday 18th December.