The team at Google Brain has made an impressive breakthrough in increasing the resolution of images. They’ve managed to turn 8x8 grids of pixels into monstrous approximations of human beings.
Neural networks are our best chance at truly increasing the level of detail in a low-resolution image. We’re stuck with the pixel information a photo contains, but deep learning can add detail through what are commonly referred to as “hallucinations.” This essentially means a piece of software making guesses about an image based on what it has learned from other images.
The Google Brain folks recently published the results of their latest progress with “pixel recursive super resolution” and despite the results looking horrifying, they’re extremely impressive.
Here’s an example of what they’ve managed to do:
On the right, there’s an actual 32 x 32 photo of a celebrity. On the left, the same image has been crunched down to 8 x 8. In the centre, you can see what Google Brain guessed the original image looked like based on the low-resolution example.
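To make the numbers concrete, here’s a minimal sketch (in Python with NumPy; the image data is made up) of how a 32 x 32 photo gets crunched down to 8 x 8 by averaging each 4 x 4 block of pixels:

```python
import numpy as np

# Hypothetical 32 x 32 RGB image with random pixel values,
# standing in for the celebrity photo.
hi_res = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

# "Crunch" it down to 8 x 8: average every 4 x 4 block of pixels.
blocks = hi_res.reshape(8, 4, 8, 4, 3).astype(np.float64)
lo_res = blocks.mean(axis=(1, 3)).astype(np.uint8)

print(hi_res.shape, "->", lo_res.shape)  # (32, 32, 3) -> (8, 8, 3)
```

Each of the 64 remaining pixels is the average of 16 originals, which is why so little of the face survives.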
A two-pronged approach is used. First, a conditioning network compares the low-res image to high-res photos in its database. It rapidly lowers the quality of each to match up the colour of the pixels with similar images.
Next, a prior network makes guesses about what details might go into a higher-res photo. Utilising PixelCNN, the network looks at the probability of what would be in a given pixel of a given class of images at that size. In this case, the classes were pics of celebrities and bedrooms. Say the prior network has determined that it’s working with shots of celebrities. It decides that between the low and high-res versions, a nostril tends to go in one spot. That’s where it will try to stick a nostril.
Then, both neural networks’ best guesses are combined and, voila, something like this pops out:
Here are some more examples with variations of super-resolution output.
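As a rough illustration of how the two networks’ guesses are combined: in the paper’s setup, each network produces a score (a logit) for every possible pixel value, the scores are added, and the sum is turned into a probability distribution to sample from. The sketch below (Python/NumPy, with made-up logit values and our own variable names) shows the idea for a single colour channel of a single pixel:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the 256 possible pixel values.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# Made-up logits for one colour channel of one pixel:
# the conditioning network scores values consistent with the 8 x 8 input,
# the prior (PixelCNN) network scores values plausible for, say, a face.
conditioning_logits = rng.normal(size=256)
prior_logits = rng.normal(size=256)

# Combine the two guesses by adding logits, then normalising.
probs = softmax(conditioning_logits + prior_logits)

# Sample this pixel's value (0-255) from the combined distribution.
pixel_value = rng.choice(256, p=probs)
```

A value the conditioning network rules out (wrong colour for the input) or the prior rules out (implausible for a face) ends up with a low combined score, so the sample tends to satisfy both.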
Before you start thinking “that doesn’t look real, this A.I. is dumb,” remember that people are dumb too. A test audience was brought in and shown a downgraded photo and a pic generated by Google Brain. They were asked, “Which image, would you guess, is from a camera?” In about 10% of the celebrity examples, people chose the Google Brain pic as legitimate. About 28% chose the computer-generated image when guessing the bedroom examples.
As cool as the technology is, however, it could end up having some scary implications. It’s easy to imagine law enforcement jumping on this software and grabbing suspects like they’re Reddit investigating the Boston Bombers. What’s more, various examples of artificial intelligence have proven to be racist because human biases are often inadvertently part of the programming. Combine this image tech with analysis A.I., and we’ll definitely be running into a rough debugging process. [Ars Technica]