Deep Learning Can Now Flawlessly Correct Photos Taken in Almost Complete Darkness

By Andrew Liszewski

There are typically two approaches to taking usable photos in low-light conditions. You can either use a slow shutter, which requires a tripod to eliminate blur, or electronically increase the sensitivity of a camera’s sensor, which introduces ugly noise artefacts. But there’s now a third approach that takes advantage of machine learning to artificially boost the brightness of a dark photo afterwards—with stunning results.

Researchers at Intel and the University of Illinois Urbana–Champaign have come up with what might be the ultimate post-production tool for photographers who often find themselves shooting in low-light scenarios, such as performances at concert venues or nocturnal wildlife. It could even be used to improve the quality of the smartphone photos you snapped at a dark and seedy bar.

As with countless other image processing innovations of late, the research, recently published in a paper titled “Learning to See in the Dark,” uses deep learning to teach an algorithm how a poorly exposed image should be brightened and colour-corrected during post-processing. The researchers trained a neural network on a dataset of 5,094 overly dark short-exposure raw images, each paired with a corresponding long-exposure image showing what the scene should look like with proper lighting and exposure. The images were snapped with Sony α7S II and Fujifilm X-T2 cameras, which use different sensor technologies.
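The paper's pipeline feeds the network raw sensor data rather than a processed JPEG: the dark frame's Bayer mosaic is packed into four half-resolution colour channels, the black level is subtracted, and the values are multiplied by an amplification ratio (the desired long-exposure time divided by the actual short-exposure time). A minimal sketch of that preprocessing step, with illustrative black/white-level constants rather than the camera-specific values from the paper's code:

```python
import numpy as np

def pack_raw(bayer, black_level=512, white_level=16383, ratio=100.0):
    """Pack a Bayer mosaic into 4 half-resolution channels and amplify.

    Sketch of the preprocessing described in "Learning to See in the
    Dark": subtract the sensor's black level, normalise to [0, 1],
    pack the 2x2 Bayer pattern (R, G / G, B assumed here) into four
    channels, and scale by the exposure amplification ratio.
    """
    norm = (bayer.astype(np.float32) - black_level) / (white_level - black_level)
    norm = np.clip(norm, 0.0, 1.0)
    packed = np.stack([norm[0::2, 0::2],   # R
                       norm[0::2, 1::2],   # G
                       norm[1::2, 0::2],   # G
                       norm[1::2, 1::2]],  # B
                      axis=-1)
    # Amplified input; the network then maps this to a full-colour image.
    return np.clip(packed * ratio, 0.0, 1.0)

# A fake 4x4 dark raw frame, uniformly just above the black level.
raw = np.full((4, 4), 600, dtype=np.uint16)
out = pack_raw(raw, ratio=100.0)
print(out.shape)  # (2, 2, 4)
```

The packed, amplified tensor is what the fully convolutional network consumes; its output is the brightened RGB image compared against the long-exposure ground truth during training.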

(Left) An under-exposed photo corrected using traditional image processing tools. (Right) The same image corrected using the deep learning algorithms. (Photo: University of Illinois Urbana–Champaign & Intel)

As someone who’s long battled with Photoshop to fix dark and grainy images, I find the results from this algorithm, even at this early research stage, staggeringly impressive. The photos go from something destined for a computer’s recycle bin to images that are genuinely usable.

(Left) An under-exposed photo corrected using traditional image processing tools. (Right) The same image corrected using the deep learning algorithms. (Photo: University of Illinois Urbana–Champaign & Intel)

The processed photos still don’t look as good as if the same scene had been photographed with a long exposure and a tripod, but who wants to carry around all that extra gear when all you really want is a few quick shots of a night out with your friends? It will be a long time before the tiny sensors inside smartphones are as capable as the comparatively giant sensors used in DSLR cameras, but with this algorithm running in the background of your smartphone’s camera app, they may never have to be.

Digital cameras are as much about the physical hardware as they are the software that processes what the sensor captures. It’s conceivable that a simple software update could one day make your iPhone’s snapshots rival pics from Canon and Nikon’s most expensive shooters. [Learning to See in the Dark via BoingBoing]
