“I sound like I don’t like technology, but I actually do, sorry,” Professor Noel Sharkey laughs. “But this just really upsets me.” Sharkey is just getting into the swing of his 20-minute lecture: a talk at FutureFest entitled “Sex, race and gender in robotics and AI”, and he’s not pulling any punches. Throughout the talk he reveals example after example of how human biases have crept into a technology that promised to fix our weak, fleshy human subjectivity: justice algorithms that see black prisoners as more likely to reoffend because of past prejudiced police targeting; facial recognition software that can only spot white faces; and so on. Even our search engines are picking up on our prejudices, as an image search for ‘professional hair’ and ‘unprofessional hair’ depressingly demonstrates.
The conclusion of his talk? That any big decisions should be left to humans, as flawed as we are. “I've got AI in my fridge and car. I don't care about that – if it accidentally turns the freezer off, oh dear I'm going to lose a few frozen peas, but I don't mean that,” he explains to me later. “I mean somebody's job, somebody's mortgage, somebody's prison sentence, somebody's life in hospital – those things.”
You may well already be familiar with Sharkey. For a cumulative eight years, he’s been a judge on Robot Wars, but in the hour I talk to him, it doesn’t even come up. Sharkey is a softly-spoken man who is incredibly easy to chat with, and I start by asking him about his comments about liking technology. Does he define himself as a technological optimist? “About a lot of technology, I'm neither optimistic nor pessimistic – I just am. It's like asking a bus conductor ‘are you optimistic about buses?’” he replies, giving me a handy intro should I ever interview a bus conductor.
“I actually spend most of my life now being critical about technology,” he explains. “I mean I love the technology because I was one of the developers of it for many years.” The problem is the rush to implement technology to solve problems big and small, old and new. An example: “So we're going to have drone deliveries all over London,” he muses. “But no thought about what that's going to do to bird life. Is it going to affect people's house prices? Would people plant bombs in them?
“You don't have to be a genius to sit down for ten minutes and think about it. Anybody who wants to govern the country should think 'hmm, is there going to be a bad side to this?'”
Which isn’t to say that the idea of AI taking over the world keeps him awake at night. “It’s not something that’s even on my radar,” he says, unprompted. “It’s fanciful at the moment,” he adds, highlighting the fact that human-level AI has been mooted as imminent for every one of his four decades in the industry. “We don't know anything in principle that would say you couldn't do this, but the challenges are ridiculous.”
AlphaGo – the program Google’s DeepMind built to defeat the world’s Go champion – is a phenomenal achievement, but it’s still designed for just one task, while the defeated human champion can ultimately do a great deal more. “This thing doesn't know it's playing a game. It won't give you a high five when it's finished, it won't make you a cup of tea afterwards, and it can't talk to you,” he says. “You couldn't really call that intelligent – if you stopped it midway through, it wouldn't even know.”
Interestingly, Sharkey used to be a dyed-in-the-wool believer. In 1979, he was in a “dream state,” but moving to Yale’s AI labs put him straight. “After about six months there I started getting really depressed about it because I'd bought into a load of bullshit,” he explains. “You'd find putting a comma in the wrong place would make it crash. I'd read about all these inference machines that could take language and find out the point of it, and it was all just talk.”
But, in his words, “if you don't change your mind as a scientist there's something wrong.” The technology has improved since then – but progress isn’t always a completely positive thing.
The dark side of progress
This is why Sharkey now heads up the Foundation for Responsible Robotics. “A lot of the work I do is based on stuff that you'll find in obscure philosophy journals,” he explains. “Our idea of starting a foundation was to take it out of the philosophy journals, turn it up, let people see it and turn it into policy.”
And it leads to dark places. “There's some stuff I would call quite wicked. I spend at least three weeks of the year at the UN, and we've had a campaign there for the last five years on autonomous weapons systems.” Trying to prevent their development is hard work when geopolitical advantage is so tempting. After all, Vladimir Putin once said of AI that “whoever leads the world in this sphere will become the ruler of the world.”
“It's really slow, it takes patience,” Sharkey concedes. “But now we've got 26 countries: Austria joined in April, and so did China surprisingly. That shocked everyone.”
More recently he’s been looking into the ethics of sex robots. This began innocuously enough – would they offer companionship and so on – but again became unpleasantly dark quickly: child sex robots, sold to paedophiles.
“God, that was repulsive,” Sharkey recalls, noting that these remain a nauseatingly grey area in the law. “The Crown Prosecution Service haven't been prosecuting them [buyers] for it because as the police said to me 'when we go through the door, we always find thousands of images on the computer' so the CPS would rather stick with that because it's successful,” he explains.
Child sex robots have been declared obscene objects in the UK, making it illegal to import or post them anywhere, but this definition is far from adequate: as things stand, it would be completely legal to produce them in the UK. “I'm meeting with parliamentarians in two weeks' time to talk about new laws for Britain to stop this,” he says, noting that his work has already successfully pushed the similar “CREEPER Act” through the US House of Representatives.
Notably, despite his natural revulsion, Sharkey is following the evidence: at the time of writing, he wants them banned for general use only. “I'm saying general use because if they have a therapeutic use under a licenced therapist, I'm not going to stand in its way.” That theoretical use for good is what he’s currently looking into, with a survey of paedophiles. “The survey will nearly make you cry. One of them's a 17-year-old girl who said 'I just want to kill myself, I didn't want to do this, I've tried to cut my wrist a couple of times,'” he recalls. “You just think 'poor bastard'.”
Still, early signs suggest that, once again, technology is not the answer. “The survey isn't telling me that having these robots is going to help,” he says. “Some therapists say 'yeah I think that would help', but the majority have said it's like giving beer to alcoholics.”
Even considering this theoretical positive is deeply uncomfortable for most people – Sharkey included. “It's not about you and what you think, it's about what you do. It's the objective: is it fair, is it right? I'm open-minded.
“This is about the most closed-minded on anything I could be. I have five daughters, and when I first saw this, I just went ‘WHAT!?’” he explodes, going into a completely out-of-character mock rage. “But I realised anger isn't going to solve it. If you want to get a new law enacted, if you want to have impact, you'd better have data in your hands and a good, well-argued thing. You have to go with what the evidence tells you.”
He’s entirely consistent here: follow evidence, not faith, and be open-minded when said evidence points another way. “I often find myself thinking 'oh yeah, I hadn't thought of that,'” Sharkey says. In a world where pure dogmatic feeling seems to be the dominant force, I suggest that this can be a recipe for being steamrollered by louder, less thoughtful voices. “I can do quite good steamrollering, I can tell you,” he laughs. “I'm not frightened of difficult issues like paedophile robots: I just go for them.”