Dramatic Drone Chase is Test Video for Self-Driving Flying Machine

By Kelsey Campbell-Dollaghan

Drones are fun, but they’re also just dumb sacks of batteries and plastic. Proof? There's a whole chapter of YouTube devoted to their crashes. But two MIT researchers think they’ve figured out a way to give the flying machines brains, or at least the next best thing.

As PhD candidate Andrew Barry of the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Lab puts it, it’s not hard to build a drone these days. The hard part is “how to get them to stop running into things”.

Their alternative, an algorithm detailed in a paper available on arXiv, now has a high-speed action video to go with it, featuring some very dramatic chase scenes through a sunlit forest.

The problem itself is fairly simple: small-scale UAVs like the ones many amateurs and tinkerers own aren’t designed to autonomously avoid obstacles, because they can’t carry the weight of the processors they’d need to analyse the world around them and react to it. A drone’s camera might record hundreds of frames per second, and estimating the depth of every object in every frame takes some serious firepower. So instead, human pilots on the ground – who rely on sight, camera feeds, or software – steer, and frequently wreck, their steeds.

Barry and his advisor, Russ Tedrake, built an algorithm that takes a different approach. Rather than try to analyse every object in every frame of a drone’s camera, they fix a single detection distance of 10 metres. Using a pair of onboard stereo cameras recording at 120 frames per second, their software only looks for objects that are exactly 10 metres away. “As you fly, you push that 10-metre horizon forward, and, as long as your first 10 metres are clear, you can build a full map of the world around you,” Barry writes on CSAIL’s website.
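To make that concrete, here’s a minimal sketch of single-disparity stereo matching in Python. This is not the authors’ code: it assumes rectified grayscale frames, and the function name and parameters (`single_disparity_detect`, `block`, `thresh`) are illustrative stand-ins. The point it demonstrates is that the disparity of a point exactly 10 metres away is fixed by the camera geometry, so only one shift of the right image ever needs to be checked.

```python
import numpy as np

def single_disparity_detect(left, right, focal_px, baseline_m,
                            depth_m=10.0, block=5, thresh=20.0):
    """Flag pixels whose left/right patches match at ONE disparity:
    the disparity of a point exactly depth_m away. (Illustrative
    sketch only -- not the authors' implementation.)"""
    # Stereo geometry: a point at depth Z appears shifted between the
    # two cameras by d = f * b / Z pixels.
    d = int(round(focal_px * baseline_m / depth_m))

    h, w = left.shape
    half = block // 2
    hits = []
    for y in range(half, h - half, block):
        for x in range(half + d, w - half, block):
            patch_l = left[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
            patch_r = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.float32)
            # Mean absolute difference: a small score means the patches
            # agree, i.e. something sits at roughly depth_m right there.
            if np.abs(patch_l - patch_r).mean() < thresh:
                hits.append((x, y))  # obstacle candidate on the 10 m plane
    return hits
```

A conventional stereo matcher would repeat that patch comparison for dozens or hundreds of candidate shifts per pixel; checking a single one is what makes 120fps plausible on a small processor.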

Everything closer, it ignores. Everything further, it ignores. And because the drone is flying forward at 30mph, that’s fine: the 10-metre plane keeps sweeping ahead of the aircraft, so anything in its path is detected as it crosses that plane, and anything already detected can be remembered rather than re-measured. “While this might seem limiting, our cameras are on a moving platform (in this case, an aircraft), so we can quickly recover the missing depth information by integrating our odometry and previous single-disparity results,” they write in the paper.
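Here, in the same hedged spirit, is a rough sketch of that recovery step. Because every match sits at a known 10-metre depth, each detection can be back-projected to a 3D point and stamped into a fixed world-frame map using the aircraft’s odometry. The names and interfaces below (`pixels_to_points`, `to_world`) are mine, not the paper’s.

```python
import numpy as np

def pixels_to_points(hits, focal_px, cx, cy, depth_m=10.0):
    """Back-project pixel hits into 3D camera-frame points. The depth
    is known by construction: every match lies at depth_m. (Sketch
    only; names are hypothetical.)"""
    pts = [((x - cx) * depth_m / focal_px,   # X: right of camera axis
            (y - cy) * depth_m / focal_px,   # Y: below camera axis
            depth_m)                         # Z: straight ahead
           for (x, y) in hits]
    return np.array(pts).reshape(-1, 3)

def to_world(points_cam, R, t):
    """Carry camera-frame points into a fixed world frame using the
    odometry pose (rotation R, translation t) at detection time."""
    return points_cam @ R.T + t

# Per frame: detect on the 10 m plane, then bank the results. Points
# stay in the map after the aircraft flies within 10 m of them, which
# is how the otherwise-missing nearby depth information is recovered.
world_map = []
# world_map.extend(to_world(pixels_to_points(hits, f, cx, cy), R, t))
```

As the detection plane sweeps forward with the aircraft, those thin slices of points stack up into a full map of the surroundings – the “push the 10-metre horizon forward” picture Barry describes.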

This selective approach cuts out so much of the processing that the software can run on a small mobile CPU, keeping the whole system light enough for the aircraft to carry. It could enable, they say, “a new class of autonomous UAVs” that can fly through complex environments without any help from the ground. The great thing about this research? They’ve put the algorithm’s source code online for anyone to try.