How Do We Know That We Aren't Actually Robots?

By Gizmodo

Welcome to Giz Asks, a series where we ask suddenly urgent questions and experts try to answer them. Today, we’re wondering if it’s possible to be a robotic artificial intelligence entity and not know it.

On Westworld, some humanoid androids are catching glimpses of a horrifying reality behind their artificial self-perception—that they, the robot “park hosts,” were created to be fucked and killed by rich assholes who go to Westworld to play cowboy for a few days. As their reality begins to crumble, we wonder: How do we know whether we’re really human and not some sort of artificial intelligence in a humanoid shell, convinced that we are human? What does it really mean to have free will, to be a product of nature and not human design? If you can’t tell the difference between a robot and a human, does it even matter? We asked philosophers, computer scientists, and writers to give their thoughts.

Evan Selinger

Professor in the Department of Philosophy, Rochester Institute of Technology

Far from being new, the question of whether we actually know that we’re not robots has been asked at least since early modernity, when René Descartes wondered if he could know for sure that others who looked and behaved like him weren’t in fact automata. Descartes arrived at this problem because he realized he had direct first-person access to his own thoughts but couldn’t get inside anybody else’s head in the same way. The best he could do was infer that he’s actually surrounded by fellow humans and anchor that belief in a conviction about an all-good God ensuring that he isn’t being deceived.

But if we bracket the God argument and stick to our understanding that we can’t doubt the existence of our own consciousness, we can still struggle with whether we’re brains in vats (think The Matrix) or highly sophisticated artificial intelligences embodied in robot form. From an introspective vantage point we can’t solve this problem. Nor can we learn anything certain by asking others. They might be robots, too, and also unaware of it.

Then there’s the issue of childbirth. Anyone who has given birth can attest to all the messy human biology involved. But we can’t rule out that a super advanced race could build robots with human (or human-like) anatomies. Hypothetically such sophisticated constructed physiology could fool contemporary medical imaging as well.

Given these and other complications, I think the way out of the dilemma is to distinguish the attitude of philosophical skepticism from the outlook of everyday pragmatism. Intellectually, we can spin our wheels over this question forever. But for practical purposes, like getting stuff done and taking others and ourselves seriously as autonomous moral beings, we just have to assume that we’re carbon-based and not silicon-based. Without that practical leap of faith (that we are who we take ourselves to be), we’d likely be stymied by an identity crisis and wind up dysfunctional.

Bruce Sterling

Science fiction writer, journalist, theorist

Well, I’m not buying it. Any intelligent robot would figure out in two minutes that he couldn’t possibly be human. He can’t inhale, exhale, eat, or excrete. He has no parents, no childhood memories, and he doesn’t age. He can’t get infected or sick, and he has no pulse. He doesn’t sleep, he isn’t warm-blooded, and he has no body heat or fingerprints.

So even if he’s somehow programmed with fake memories of all those many intrinsically human qualities, the fact that he’s just not made of living human flesh should be obvious to him. If he is made of living human flesh, then he’s not a robot.

He might be entirely a software construct and not a physical being at all, but I’m inclined to think that you can’t possibly simulate a human being without simulating the physical world that creates us. We’re products of sunlight, oxygen, the rain, the bacteria inside us. We’re embodied, material creatures, like crows and dolphins. Crows and dolphins are pretty smart, like us, but if somebody said, “How about a robot that sincerely believes it’s a crow,” that scheme would sound absurd.

Susan Schneider

Associate professor of philosophy and cognitive science at the University of Connecticut, member of the Interdisciplinary Center for Bioethics at Yale University, and writer

Find out whether machines can be conscious, that is, whether there can be something it feels like to be them. If they cannot, then you are not an AI of any sort, including a robot. That’s because you can tell one thing for certain: right now, you are conscious.

David Auerbach

Writer, computer scientist and former software engineer at Google and Microsoft

Absurdity is the mark of the human. If humans are natural beings and robots are artificial creations, then any designer that would create me has such an arbitrary and ridiculous approach that he or she is indistinguishable from capricious nature. So I do not think we can be robots in the sense of serving some secret master. We are barely able to serve ourselves, much less anyone else.

But while I can’t imagine myself being a robot in the sense of having a hidden purpose, there is a greater anxiety here, which is the fear of inauthenticity. I think this is why we really care about the question. To be a robot would mean that we’re somehow being tricked: that despite our feelings of being free, autonomous beings, we are actually tools of someone or something else. What we fear is not being robots, but that our existence is a fraud, and that we are frauds.

Perhaps we are just simulations in an AI that’s been asked to project what would happen if Trump were elected president. But if we are living, breathing creatures, if we are acting and suffering and living through a world, then that world is as real as any world could be. Calling it a simulation would not make our lives and our suffering any less real. If we behave as we think humans do, if we feel and think as humans do, then we fit our definition of the human, which is all we have. Perhaps we are ultimately robots, but we are still human for all practical purposes.

Our real worry, then, is that being human is not what we collectively think it is—that we fail to live up to our own definition of being human. And that, I’m afraid, is almost certainly true. Cultures have had many different senses of the soul, of human essence, and of humanity, and all of them are either wrong or unproven. It is unlikely that we have it right today. We are unlikely to be robots, but neither are we what we think we are.