This Throwaway Google I/O Moment Left Me Both Terrified and Inspired

By Gerald Lynch

It was a passing, throwaway moment shared by Google CEO Sundar Pichai – a footnote to the home automation talk, the Android updates and the VR noise near the end of last night’s Google I/O 2016 keynote – that had the most profound impact on me. And I can’t decide whether to be inspired by it or simply terrified.

The CEO revealed to the millions watching the opening of his company’s annual developer event worldwide that Google had allowed a team of its “20 Percenters” (employees at Google are afforded 20 per cent of their working time for personal projects) to help train a group of robotic arms to pick up objects. A clip showed a room full of what looked like an army of unmanned fairground grabber claws that were too smart to lose.

Going Out On A Limb

Picking up a selected object is not a particularly difficult task for a robot these days, on the condition that an adept coder lays out the parameters and that the details of which object needs grabbing, and where it is placed, don’t deviate from what the code dictates. So Google took a different approach: if set the task (and given access to the immense computing resources of Google’s deep-learning neural networks), could the arms figure out how to do the job by themselves? With a continuous feedback cycle of networked failure and success, could a robot teach itself the basics of hand-eye co-ordination?
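
To make the idea concrete, here is a minimal, hypothetical sketch of that kind of feedback cycle. The ToyArm, ToyCamera and GraspModel classes are invented stand-ins for illustration only – Google’s actual system pooled experience from many networked arms into a large neural network, rather than the random placeholder policy used here.

```python
import random


class ToyCamera:
    def capture(self):
        # Stand-in for an overhead image of the tray of objects.
        return [random.random() for _ in range(16)]


class ToyArm:
    def execute_grasp(self, command):
        # Stand-in for moving the gripper and checking whether it holds anything.
        return random.random() < 0.1  # most early attempts fail


class GraspModel:
    """Toy stand-in for a learned grasp-success predictor."""

    def __init__(self):
        self.experience = []  # (image, command, succeeded) tuples

    def propose_grasp(self, image):
        # The real system scores candidate motions with a neural network;
        # a random motor command keeps this sketch self-contained.
        return [random.uniform(-1.0, 1.0) for _ in range(4)]

    def update(self, image, command, succeeded):
        # Every attempt, failed or not, becomes training data.
        self.experience.append((image, command, succeeded))


def training_loop(arm, camera, model, attempts=1000):
    """The continuous feedback cycle: look, try a grasp, learn from the outcome."""
    successes = 0
    for _ in range(attempts):
        image = camera.capture()
        command = model.propose_grasp(image)
        succeeded = arm.execute_grasp(command)
        model.update(image, command, succeeded)
        successes += succeeded
    return successes


if __name__ == "__main__":
    print(training_loop(ToyArm(), ToyCamera(), GraspModel()))
```

The point is the loop itself: every attempt, failed or successful, feeds back into the model that chooses the next one.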

The answer was yes. Presented with a tray of randomly placed objects of various sizes and colours, the arms could move the items from one box to another with relatively little effort. There was something breathtaking about seeing the AI not only solve the problem set by Google’s engineers, but evolve its solution for greater efficiency. The immediate applications of what was learnt could be varied and incredibly useful, from intelligent production lines to prosthetics that work in tandem with a patient’s commands to achieve everyday tasks that the loss of a limb prevents.

Those applications would be impressive by themselves, but there was something more profound going on, too. Because the AI was hooked up to a physical robotic arm analogous to our own fleshy ones, seeing the bot make a small gesture that was startlingly human in its execution made me sit bolt upright – a little nudge of indifference towards the black stapler blocking its goal of moving the yellow brick.

Take a look. It happens about 2 hours, 4 minutes and 20 seconds into the keynote.

Google’s engineers didn’t program that.

It was a small moment, but in that casual sweep was a blink-and-you’ll-miss-it glimpse of a self-formed personality: the physical realisation of that “meh” feeling that leads you to push desk clutter to one side, searching for the one working biro in a sea of crap.

Hold On Now

Maybe I’m reaching. Pichai himself, along with his Google co-workers, saw a more precise moment as worth highlighting in greater depth – move 37 of the second game in the groundbreaking Go match between Google DeepMind’s AlphaGo AI and human champion Lee Sedol. In a game with more possible board positions than there are atoms in the universe, that 37th move was significant because it showed a moment of true creativity, a turning point that would lead to the eventual downfall of the bot’s human competitor.

“We normally don’t associate computers with making creative choices,” said Pichai at the end of the Google I/O keynote. “So for us this represents a significant achievement in AI.”

Google is doing something truly incredible here, taking its first baby steps towards a fully realised artificial intelligence with almost limitless potential for good. So why, despite everything I’ve just said, does it make me feel so uneasy?

It’s about trust – how much trust do we have in Google’s ability not only to create a useful AI, but to wield it with humility? Some of the other announcements last night don’t fill me with much hope.

Take the new messaging app, Allo. It can respond in a conversation in place of the user, which suggests that Google believes chit-chat is pointless – that a bot can do the work of maintaining a relationship for us between those important work deadlines. Even if that does prove useful, there’s something dehumanising about turning communication into efficient ones and zeroes. Where’s the poetry? It’s certainly not going to come from Google’s AI at this point, at least if its nascent efforts are anything to go by.

Google’s new video chat app, Duo, had a hint of naivety to its presentation, too. Its “knock knock” feature lets you take a peek at what the caller is doing before you answer (though they can’t see you in return). Google reckons that glimpse will get you hyped for whoever is calling down the line. Maybe it’s the cynic in me, but it just seems like an even more efficient way of screening, and ultimately ignoring, calls.

“We’ve been building these incredible capabilities, be it search, the knowledge graph, our understanding of natural language, image recognition, voice recognition, translation,” Pichai told Forbes.

“Particularly over the last three years, we have felt that with machine learning and artificial intelligence, we can do these things better than ever before. They are progressing at an incredible rate.”

There may be a grand philanthropic scheme underlying Google’s intentions, but there’s also a feeling that, in its race towards 100 per cent efficiency in all aspects of life, Google is chipping away at all the mistakes and cock-ups that make humans, well, human.

Reaching for the SkyNet

If that attitude is applied to an AI that’s expected to integrate fully with our daily routines, it’s no wonder that the Terminator-inspired SkyNet fear always rears its head during these discussions. Google is careful to steer any conversation around AI towards its apparently limitless benefits to mankind, but fails to see the dehumanising aspects of some of the products it already produces. For decades, sci-fi sceptics have been warning us of the hubris of handing over too much responsibility to AI helpers, and of the disastrous self-preservation tactics that could (worst-case scenario, natch) ensue if a sentient computer realised that these fleshy mortals have fingers hovering over its plug.

(Perhaps this, too, is a move of self-preservation on the part of human creatives? What will the makers of Terminator Genisys do when a computer can write, produce, computer-generate, film and promote a Terminator sequel better than the woeful last one? Actually, that wouldn’t be too difficult or impressive a task. I digress.)

The question is: can Google overcome the 50 or 60 years of popular culture that have planted the seeds of distrust which now sprout every time the words “artificial intelligence” are uttered? Are a robot butler and a networked mind beavering away on a cure for cancer a fair trade for our independence?

Perhaps it’ll take us each having a Johnny 5 at home, hearing a neighbour tell the tale of when their R2 unit saved them from a gas leak, or the time their personal Jarvis alerted the police before a home intruder could make off with the rainy day fund, before we can see the potential good overcome the historical, fictional nightmare.