Artificial intelligence (AI) has been looming large in the public consciousness recently, thanks to the likes of Elon Musk and Stephen Hawking telling us how we're going to die at the hands of robots (the upcoming Terminator reboot probably doesn't help, either). But amid the techpocalypse talk, there's been limited discussion of what actually constitutes AI, and how it might look completely different from Skynet.
As Benjamin H. Bratton explains, our idea of artificial intelligence has been engineered from the beginning to be anthropomorphic: a truly 'intelligent' computer is one that reflects humanity back at us. The Turing test, the flawed but oft-quoted benchmark for artificial intelligence, really just requires a computer to pose as a human for a few minutes, which is something that Bratton finds bizarre:
That we would wish to define the very existence of A.I. in relation to its ability to mimic how humans think that humans think will be looked back upon as a weird sort of speciesism. The legacy of that conceit helped to steer some older A.I. research down disappointingly fruitless paths, hoping to recreate human minds from available parts. It just doesn't work that way.
He goes on to point out that planes don't fly like birds, so why should computers be hamstrung by the imperative to imitate humans?
When it comes to the dangers of A.I., Bratton is concerned, but not about a robot coup. Rather, "what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant."
In a technology landscape a little overrun with faux-humanoid digital assistants, and with a public perception of AI decades out of date, Bratton's essay is an insightful take on an incredibly important topic. And it might make you stop and think the next time you swear at Siri. [New York Times]
Image: Shutterstock/Olga Nikonova