Should Driverless Cars Kill You if it Means Saving the Lives of Many Others?

By Gerald Lynch

Self-driving, driverless cars are no longer the preserve of science fiction. For several years now, a number of companies, led by Google, have been experimenting with the feasibility of cars that don't require a human behind the wheel. From aiding sleepy long-haul drivers to reducing the likelihood of drink-drive accidents, driverless cars offer many potential benefits. But they also pose some interesting ethical questions.

Google's computer-controlled cars have clocked up over a MILLION miles out on the open road since 2009 without a driver, and roughly the same distance again with humans sitting behind the wheel, ready to take control. Thirteen accidents have been recorded during the tests, but interestingly these were never the fault of the Google cars; the blame lay with the human drivers sharing the same roads. Using complex algorithms and sensor systems, Google's cars have, so far, proved relatively safe.

Which leads to an interesting question: at what point does a driverless car's algorithm stop protecting you and start protecting those outside your vehicle? Would (or even should) a driverless car sacrifice its passengers in the event of an accident if it thinks doing so will save the lives of even more people not in the self-driving vehicle?

It's a conundrum that researchers at the University of Alabama at Birmingham are discussing. UAB researcher Ameen Barghi brings up this analogy:

Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen down on the rails and can’t get up. That switch can save your child or a bus-full of others, but not both. What do you do?

Barghi continues:

“Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people” [...] In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former. Deontology, on the other hand, argues that some values are simply categorically always true. For example, murder is always wrong, and we should never do it. Even if shifting the trolley will save five lives, we shouldn’t do it because we would be actively killing one.

For many human drivers, this never even becomes a conscious argument – in the event of a potential accident, our fight-or-flight reflexes automatically kick in, we swerve the wheel and hope to God we've saved ourselves. There's no shame in it, either – in many cases, you've carried out the action before you've even had time to consider the moral repercussions. But driverless cars will have to be programmed to act with mathematical certainty.
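To make the contrast concrete, here's a minimal, purely illustrative sketch in Python of how the two rules Barghi describes might be expressed in code. The Option class, the two chooser functions and the casualty figures are all invented for this example – they don't reflect how Google or anyone else actually programs their cars.

from dataclasses import dataclass

@dataclass
class Option:
    """One possible manoeuvre in an unavoidable crash (hypothetical)."""
    name: str
    expected_passenger_deaths: int
    expected_bystander_deaths: int
    actively_sacrifices: bool  # does the car deliberately choose to harm someone?

def total_deaths(option):
    return option.expected_passenger_deaths + option.expected_bystander_deaths

def utilitarian_choice(options):
    # Greatest good for the greatest number: minimise total expected deaths.
    return min(options, key=total_deaths)

def deontological_choice(options):
    # Never actively kill: rule out options that deliberately sacrifice someone,
    # then minimise harm among whatever remains (fall back to all options if none remain).
    permissible = [o for o in options if not o.actively_sacrifices] or list(options)
    return min(permissible, key=total_deaths)

options = [
    # Carry on along the current trajectory and hit the oncoming bus.
    Option("stay on course", 1, 5, actively_sacrifices=False),
    # Deliberately steer into the concrete wall, sacrificing the passenger.
    Option("swerve into wall", 1, 0, actively_sacrifices=True),
]

print(utilitarian_choice(options).name)    # -> "swerve into wall"
print(deontological_choice(options).name)  # -> "stay on course"

The two rules diverge precisely because "doing nothing" counts differently under each: the utilitarian car steers into the wall to minimise total deaths, while the deontological car refuses to actively sacrifice anyone and stays on course.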

So, where do you stand? Should a driverless car sacrifice the safety of its passengers in the event of an accident for the greater good? Or should its priority be the lives of those it is ferrying, whatever the cost to those around it?