Driverless Cars Could Be Hacked With a Pen and Some Stickers

By Gary Cutlack

Creating logic traps for self-driving cars and making them think that a plastic plate is a mini roundabout could become the real-world hacking hobby of the future, as researchers and jokers line up to prove that it's easy to trick the algorithms that know how to drive at least as well as Richard Hammond.

One such experiment, by a team at the University of Washington, found that an autonomous vehicle's image-recognition system could be made to read a US stop sign as a 45mph speed limit sign, which could have had extremely non-hilarious consequences had it happened in the real world.

The paper says the hack uses "spatially-constrained perturbations", which, we think, means changes confined to small patches of the sign: random-looking enough to be ignored by human brains used to papering over gaps in data, but seized upon by neural networks as having far more importance than they should.

Hence sticking a few extra bits over a stop sign still makes it look like a stop sign to us, albeit one an idiot has vandalised; but to an AI that's trying too hard to please and analyse everything, it takes on a whole new meaning.
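For the technically curious, here's a minimal sketch of the general idea in Python with PyTorch (not the researchers' actual attack pipeline): nudge only a small "sticker" patch of the image in whatever direction pushes the classifier towards a different answer, and leave the rest of the sign alone. The model, image, patch location and target class below are all hypothetical placeholders.

```python
# Minimal sketch of a spatially-constrained adversarial perturbation:
# a single targeted, gradient-sign step that is only allowed to change
# a small "sticker" patch of the image. Illustration only: the model,
# image, patch location and target class are placeholders, not the paper's setup.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18()      # stand-in classifier (untrained here)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "stop sign" photo
mask = torch.zeros_like(image)
mask[:, :, 80:140, 80:140] = 1.0   # only this patch may change: the "sticker"

target = torch.tensor([1])     # hypothetical "speed limit" class index
epsilon = 0.1                  # how hard to push each pixel inside the patch

# How far is the model's current answer from the class we want it to see?
loss = F.cross_entropy(model(image), target)
loss.backward()

# Step against the gradient of that loss, but only inside the mask,
# so the change stays spatially constrained, like stickers on a sign.
adversarial = (image - epsilon * image.grad.sign() * mask).clamp(0, 1).detach()

print(model(adversarial).argmax(dim=1))  # with a trained model, this drifts towards `target`
```

Swap in a properly trained traffic-sign classifier and repeat that step a handful of times and the predicted label will typically flip, even though the altered region stays small enough to read as mere graffiti to a human.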

It's a more extreme version of the amazing Autonomous Trap 001 concept, which pokes fun at how our supposedly world-endangering, super-intelligent AI friends still lack even the most basic form of human common sense and the ability to just work stuff out. [TechRadar, TechCrunch]
