How often has a great photo op been ruined by a window between you and your subject, leaving reflections in your shot? So far there's no easy way to fix that in post-production, but researchers at MIT, led by YiChang Shih, have developed an algorithm that can automatically detect and remove reflections in an image.
There is a catch, though. The algorithm that Shih developed while completing his PhD in computer science at MIT requires the window producing the reflection to be double-paned, or very, very thick. Why is that important? A double-pane window actually produces two slightly offset reflections, which Shih's algorithm uses to work out which parts of the image are an unwanted reflection and then remove them. The technique works with thicker windows as well, because the distance between the outer and inner surfaces of the glass also produces two misaligned reflections.
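To get a feel for why the double pane matters, here is a toy model (not Shih's actual code; the function name, offsets, and attenuation factor are all illustrative assumptions): a photo taken through such a window is roughly the transmitted scene plus two copies of the reflected scene, shifted by a small offset.

```python
# Hypothetical sketch: simulate the "two offset reflections" a
# double-pane window adds to a photo. All names and values here are
# illustrative assumptions, not the researchers' actual model.
import numpy as np

def double_pane_observation(transmitted, reflection, dy, dx, attenuation=0.5):
    """Transmitted scene plus two slightly offset, attenuated copies
    of the reflected scene (one per pane surface)."""
    shifted = np.roll(reflection, shift=(dy, dx), axis=(0, 1))
    return transmitted + attenuation * (reflection + shifted)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))        # what we want to keep
reflection = rng.random((64, 64))   # the unwanted reflected scene
photo = double_pane_observation(scene, reflection, dy=2, dx=3)
```

The repeated, offset structure of the reflection is exactly the cue the algorithm exploits: the transmitted scene appears once, while the reflection appears twice at a fixed displacement.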
Of course, the algorithm does much more than just look for double reflections. It also relies on a technique co-developed by Daniel Zoran and Yair Weiss that breaks a digital image down into bite-size eight-by-eight blocks of pixels, helping it pinpoint which blocks stand out, don't belong, and are probably part of a reflection.
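A minimal sketch of that first step, chopping an image into eight-by-eight blocks, might look like the following. This is a simplification for illustration only: the real Zoran-Weiss technique scores patches under a learned statistical prior, and typically uses overlapping patches, neither of which is shown here.

```python
# Illustrative sketch: split a grayscale image into the 8x8 pixel
# blocks that patch-based techniques operate on. Simplified to
# non-overlapping blocks; the real method is more involved.
import numpy as np

def extract_patches(image, size=8):
    """Return all non-overlapping size x size blocks of a 2-D image,
    cropping any remainder at the right and bottom edges."""
    h, w = image.shape
    h, w = h - h % size, w - w % size   # crop to a multiple of the block size
    blocks = image[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, size, size)

patches = extract_patches(np.zeros((100, 120)))
# a 100x120 image yields 12 x 15 = 180 blocks of 8x8 pixels
```

Once the image is in this form, each small block can be judged independently, which is what lets the algorithm flag the ones that look like reflection rather than scene.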
The obvious application for this new image processing software is to let digital cameras automatically eliminate a reflection from a photo taken through a window as it's being snapped, or to give image-processing apps like Photoshop another powerful tool for improving shots. But there are also applications in artificial vision, allowing robots to peer through windows without getting confused or misinformed about what they see on the other side.
In other words, yep, there’s always a military application if you look hard enough. [MIT]