Joshua Brown was just one of the more than 37,000 people who died in car crashes in the US last year, but his death continues to make headlines. Brown became the first person known to have died in a crash involving a partially self-driving car when his Tesla Model S, operating in Autopilot mode, collided with a truck, and his crash launched a debate about the risks and rewards of allowing self-driving cars on the road.
People are freaked out about sharing roads with self-driving cars, particularly when those cars crash (never mind the distinct possibility that they can be hacked). But according to new research from the RAND Corporation’s Science, Technology, and Policy program, waiting for self-driving cars to achieve perfection before allowing them on public roads will lead to more overall fatalities in the long run. Allowing autonomous vehicles as soon as they’re pretty good, if not perfect, will save lives over time, the researchers found—basically, a mostly good robot car still sucks less than a drunk or distracted person-car.
But how do we decide when self-driving cars are good enough?
“There are two questions we need to answer: How safe do they need to be? And how will we know?” Nidhi Kalra, a senior information scientist at RAND, explained. According to her research, self-driving cars only need to be a little bit safer than the average human driver in order to save hundreds of thousands of lives.
In the research, Kalra and RAND senior policy researcher David Groves modeled several possible futures—one in which self-driving cars are only 10% safer than human drivers, one in which they are 75% safer, and one in which they are 90% safer. To model these futures, they constructed 500 different scenarios reflecting different ways the technology could develop over time.
Being just a little less dangerous than human drivers pays off over time, the researchers found: after 15 years, thousands of lives would be saved, and after 30 years the number grows to hundreds of thousands.
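The intuition behind those numbers can be shown with a toy back-of-envelope calculation. This is not the RAND model—the adoption ramp, the fixed 37,000-death baseline (the figure cited above), and the simple rate arithmetic are all invented assumptions—but it shows how even a 10% safety edge compounds into large cumulative savings:

```python
# Toy sketch of cumulative lives saved by somewhat-safer autonomous vehicles.
# Assumptions (illustrative only, not from the RAND study):
#   - a constant baseline of ~37,000 US road deaths per year,
#   - AV adoption ramps linearly from 0% to 100% of driving over 20 years,
#   - AVs' fatality rate is simply (1 - improvement) times the human rate.

BASELINE_DEATHS_PER_YEAR = 37_000  # last year's US total, cited in the article

def cumulative_lives_saved(improvement, years, ramp_years=20):
    """Deaths avoided vs. an all-human fleet over `years`.

    `improvement` is the AV safety gain (0.10 means 10% safer than humans).
    """
    saved = 0.0
    for year in range(years):
        av_share = min(1.0, (year + 1) / ramp_years)  # assumed adoption curve
        # Deaths avoided this year: the AV-driven share of the baseline,
        # reduced at the AVs' lower fatality rate.
        saved += BASELINE_DEATHS_PER_YEAR * av_share * improvement
    return saved

for pct in (0.10, 0.75, 0.90):
    print(f"{pct:.0%} safer: "
          f"{cumulative_lives_saved(pct, 15):,.0f} lives in 15 years, "
          f"{cumulative_lives_saved(pct, 30):,.0f} in 30 years")
```

Under these made-up assumptions, even the 10% scenario saves lives in the tens of thousands over decades, while the 90% scenario reaches the hundreds of thousands—the same qualitative pattern the researchers describe.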
Kalra and Groves also believe that autonomous vehicles will become safer as they’re deployed, learning from their real-world experience and improving upon it. Right now, self-driving cars are deployed in small test fleets and drive millions more miles in computer simulation. “The simulators are one component of development but they are not at a point where we could use them to prove safety or demonstrate safety to high confidence. They have their place but they are no replacement for real world driving,” Kalra said. “You would have to be really optimistic about simulators to think we can simulate our way to perfection.”
But given the fear around self-driving cars, allowing them on the road when they’re still prone to crashing is a difficult reality to accept.
“Our main objective here is to help inform the debate with an objective analysis. We think this particular topic really needed an objective look at fatalities because there is so much hand-wringing about how safe these cars need to be,” Groves said. “We can’t find a scenario where waiting for perfection of autonomous vehicles is the smart thing to do for saving lives.”