In a way, the pace of the self-driving car revolution will be determined by a single technology: how quickly 3D laser scanners can improve until they see the world as well as the old-fashioned 3D scanners in our human eyes.
Lots of companies are working specifically on improving that scanning tech, also known as lidar. For a story on driverless vehicles in this week’s New York Times Magazine, one of those companies, ScanLAB Projects, drove a 3D laser scanner through London to show how well cars can already see—and how much work their programmers still have to do.
The detail is stunningly beautiful and manages to capture all the architectural nuance of London’s streets—in fact, it’s the same technology I wrote about a few months ago that archaeologists are using to reconstruct ancient landmarks after events like Nepal’s devastating earthquake.
But it’s not only about outfitting cars with better lidar, as Geoff Manaugh writes; it’s also about teaching cars the cultural context of what they’re seeing. He includes this hypothetical scenario from Illah Nourbakhsh, a robotics professor at Carnegie Mellon University:
Imagine someone wearing a T-shirt with a stop sign printed on it, he told me. “If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore — well, now your car just sees a stop sign. The chances of all that happening are diminishingly small — it’s very, very unlikely — but the problem is we will have millions of these cars. The very unlikely will happen all the time.”
Just a few weeks ago, our own Kelsey Campbell-Dollaghan wrote about a cyclist befuddling Google’s car with his track stand—something the car couldn’t figure out on its own. How long will humans need to hold the hands of driverless cars and teach them not only to see like us, but to think like us, too?
Also, should we just ban all shirts with stop signs on them right now? [NYT Mag]