Elon Musk isn't afraid to speak his mind — and it always feels like he's about to change the world. So when the Tesla, SpaceX and PayPal billionaire sat down with Nvidia's CEO to talk about his self-driving cars, we listened carefully.
Here's how Musk described the future of autonomous cars—a future he believes Tesla will help usher in within the next few decades.
The following is transcribed from an interview between Nvidia CEO Jen-Hsun Huang and Elon Musk at the 2015 GPU Technology Conference in San Jose, California. You can watch the original recording here. We've taken the liberty of bolding our favourite lines and stripping out the banter between Musk and Huang. Sorry Jen-Hsun!
I don't think we have to worry about autonomous cars, because that's sort of a narrow form of AI, and not something I think is very difficult to do actually—to do autonomous driving to the degree that's much safer than a person is much easier than people think.
I think it's just going to become normal. Like an elevator. They used to have elevator operators, and then we developed some simple circuitry to have elevators just come to the floor that you're at, you just press the button. Nobody needs to operate the elevator. The car is just going to be like that.
You'll be able to tell your car "take me home," go here, go there, anything, and it'll just do it.
It'll be an order of magnitude safer than a person. In fact, in the distant future, I think it's probably going to be... people may outlaw driving cars, because it's too dangerous. You can't have a person driving a two-ton death machine.
If you can count on not having an accident, you can get rid of a huge amount of the crash structure and the airbags... We're a very long way from that, because there's always going to be some — for a very long time there will be some legacy cars on the road.
And it is important to just appreciate the size of the automotive industrial base. It's not as though when somebody makes an autonomous car, that suddenly all the cars will be autonomous. There's two billion of them. The total number of cars and trucks on the road is two billion and climbing... The capacity of car/truck production is about 100 million a year.
So if tomorrow all cars were autonomous, it would take 20 years to replace the fleet, assuming the fleet stayed the same size. Arguably it could get smaller if things were autonomous, but still it's maybe 15 years or something and it's not all going to transition immediately. It's going to take quite a while. And it's the same for electrification of cars. Changing that industrial base to be electric — if all cars tomorrow produced were electric, it would still take 20 years to replace the fleet. And right now it's 1 per cent.
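Musk's 20-year figure follows directly from the two numbers he gives in the interview, two billion vehicles on the road and roughly 100 million built per year. A minimal sketch of that arithmetic (the figures are his; the function is ours):

```python
# Back-of-the-envelope check of the fleet-replacement timeline quoted above.
# The inputs (2 billion vehicles, ~100 million produced per year) come from
# the interview; everything else is simple division.

def years_to_replace(fleet_size: float, annual_production: float) -> float:
    """Years to turn over the whole fleet if every new vehicle is autonomous/electric,
    assuming the fleet stays the same size."""
    return fleet_size / annual_production

if __name__ == "__main__":
    years = years_to_replace(fleet_size=2_000_000_000,
                             annual_production=100_000_000)
    print(f"Full fleet turnover: {years:.0f} years")  # 20 years
```

As Musk notes, a shrinking fleet would shorten this, and a production ramp that is anything less than instant stretches it out, so 20 years is the floor, not the forecast.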
You kind of need the hardware foundation, the sensory computing foundation, and then you can just keep updating software. At least you can with a Tesla: the features are steadily improving... we now have active cruise control, we use radar and camera fusion to track the car in front of you... it looks at the brake lights, so it anticipates when the brakes are active... it's going to get smarter and smarter even with the current hardware suite.
The current hardware suite is 360-degree ultrasonic sensors that go up to just over 5 metres. There's a forward camera and forward radar. Even with just that sensor suite we can actually make huge progress in autonomy. We can definitely make the car steer itself on a freeway and do lane changes.
Autonomy is really about what level of reliability and safety do you want. Even with the current sensor suite we could make the car go fully autonomous, but not to a level of reliability that would be safe in, say, a complex urban environment at 30 miles per hour where the lane markings aren't there and children are playing and things could be coming at you from the side. In order to solve that you need a bigger sensor suite, and you need more computing power.
Where it gets tricky is that urban environment around 30 or 40 miles per hour. Right now it's fairly easy to deal with things that are below 5 to 10 miles per hour, because we can do that with the ultrasonics—we just make sure it doesn't hit anything, because you can always brake. At 5-10 miles per hour you can stop within the range of the ultrasonics.
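The claim that the car can always stop within ultrasonic range at low speed checks out with basic kinematics. A rough sketch, where the ~5 m sensor range comes from the interview but the braking deceleration (0.8 g, a typical dry-road emergency-braking figure) is our assumption:

```python
# Rough check of the claim that at 5-10 mph the car can stop within the
# ~5 m range of the ultrasonic sensors. The 0.8 g deceleration is an
# assumed dry-road braking figure, not a number from the talk.

G = 9.81            # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def stopping_distance_m(speed_mph: float, decel_g: float = 0.8) -> float:
    """Kinematic stopping distance d = v^2 / (2a), ignoring reaction time
    (reasonable here, since the computer is always 'paying attention')."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * decel_g * G)

for mph in (5, 10):
    print(f"{mph} mph -> stops in {stopping_distance_m(mph):.2f} m")
```

Even at 10 mph the stopping distance comes out well under 2 m, comfortably inside the sensors' reach, which is why the sub-10 mph regime is "easy" in Musk's framing.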
Then from, let's say, 10 miles an hour to 50 miles an hour — that area in complex suburban environments — that's where you can get a lot of unexpected things happening. Let's say there's a road closure or a manhole cover open, children playing is a big issue, bicycles... once you get above about 50 miles per hour and you're in a motorway environment, it gets easier again. The set of possibilities is much reduced. Highway cruise is easy, low speed is easy, intermediate is hard. Being able to recognise what you're seeing and make the right decision in that 10 to 50 miles per hour zone is the challenging portion.
But, and this may sound a little complacent, I almost view it as a solved problem. We know exactly what to do, and we'll be there in a few years.
We'll take autonomous cars for granted in quite a short period of time. It's amazing how comfortable you get, and how quickly you get comfortable with it.
From the point at which a car is definitely safer than a person, there's at least another two or three years after that before regulators allow it to be the case. They all want to see a large amount of statistical proof that it's not merely as safe as a person, but much safer.
So I think what you can do is run it in shadow mode, and essentially say "OK, this is what the computer would have done in all these circumstances," and was there a crash or was there not? What are the false positives and false negatives? Then, with a large enough population, make a really clear statistical argument to the regulators, and then they're going to digest that, observe it for a while, see if they agree with it—and then I think they will, because the evidence will be overwhelming.
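The shadow-mode bookkeeping Musk describes reduces to comparing what the computer would have done against what actually happened, and tallying the outcomes. A hypothetical sketch, where the event format and labels are our illustrative assumptions, not a real Tesla log schema:

```python
from collections import Counter

# Illustrative tally of shadow-mode results: each logged event pairs the
# computer's hypothetical decision with the real-world outcome. The tuple
# format (computer_would_brake, crash_occurred) is an assumption for the sketch.

def tally(events):
    """Classify (computer_would_brake, crash_occurred) pairs into the
    confusion-matrix buckets a regulator would want to see."""
    counts = Counter()
    for would_brake, crashed in events:
        if would_brake and crashed:
            counts["true_positive"] += 1    # computer saw the danger
        elif would_brake and not crashed:
            counts["false_positive"] += 1   # phantom intervention
        elif not would_brake and crashed:
            counts["false_negative"] += 1   # computer would have missed it
        else:
            counts["true_negative"] += 1    # uneventful driving, correctly ignored
    return counts

log = [(True, True), (True, False), (False, False), (False, False), (False, True)]
print(tally(log))
```

Scaled over a fleet, the false-negative rate here is the number regulators would weigh against human crash statistics; the false-positive rate is what would make the system annoying or dangerous in its own right.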
I think when it comes to public safety, there's an argument for being quite cautious and making sure that things are OK before there's a change, and I don't think it's the case right now that there is a fully autonomous system—and regulators are not approving it—that can be a substitute for people. But there will be in a few years.
I think [security] becomes really important when the cars are fully autonomous. The way the cars work right now, every system in the car, it's assumed, could have a mechanical failure of some kind, or a fundamental logic failure. You can always overwhelm the braking with your foot or overwhelm the steering wheel with your hands.
But when there isn't a steering wheel or a brake pedal or something, then it's really, really dangerous.
Even as it is right now, we spend most of our time on making it difficult to do a multi-car hack. If you have direct access to a car, just like you've got direct access to a computer, you can do a lot of things to it, but that's less of a concern than somebody being able to hack an arbitrary car or multiple cars. So that's what we focus our energy on. In that way it's a lot like a mobile phone or a laptop: you focus on making sure that there can't be a system-wide hack. So we put a lot of effort into that, and we have a lot of third parties try to attack it.
And then certain parts of the car, at a very fundamental level—like the drive unit controller, or the steering controller—have an additional level of security. So someone may be able to hack something that's cosmetic, but it's much harder to hack something that's actually physically dangerous. You may be able to display a funny message or something, but you won't be able to control the steering or the motor.
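One common way to build the extra layer Musk describes is to have safety-critical controllers authenticate every command before acting on it. This is our illustrative sketch of that pattern (not Tesla's actual scheme; the key name and command format are invented for the example):

```python
import hashlib
import hmac

# Sketch of command authentication for a safety-critical controller:
# a compromised cosmetic subsystem can send bytes on the bus, but without
# the shared key it cannot produce a valid tag, so the drive-unit or
# steering controller rejects the command. Key and format are hypothetical.

SECRET_KEY = b"controller-shared-secret"  # assumption: provisioned per vehicle

def sign(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Tag a command so the receiving controller can verify its origin."""
    return hmac.new(key, command, hashlib.sha256).digest()

def accept(command: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    """Controller side: constant-time check, reject anything unsigned."""
    return hmac.compare_digest(sign(command, key), tag)

cmd = b"steer:+2.0deg"
print(accept(cmd, sign(cmd)))      # authentic command is accepted
print(accept(cmd, b"\x00" * 32))   # forged tag is rejected
```

The design choice matches the quote: an attacker who can "display a funny message" on an unauthenticated subsystem still cannot forge a valid tag for the steering or motor controllers.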
A lot of people think of us as the leader in electric cars, but I think we'll also be the leader in autonomous cars... at least autonomous cars that people can buy.
We're going to put a lot of effort into autonomous driving, because it's going to be the default thing. It could save a lot of lives.
When it comes to AI, I'm not really worried about something narrow like autonomous cars or like a smart air conditioning unit at the house. It's the deep intelligence stuff where we need to be cautious. I think there's many different potential layers of artificial intelligence, and it's odd that we're so close to the advent of AI. It seems strange that we'd be alive in this time.
I just hope there's something left for us humans to do.