Last summer, the internet was overrun with six-eyed dog faces, human legs that are actually slugs, and other images reminiscent of the day you ate magic mushrooms and feverishly explored your kitchen floor. In fact, these were the dreams of an AI developed by Google. And it was only a matter of time before the technology inspired new forms of art.
That day, my friends, has come. The Gray Area Foundation, in collaboration with Google Research, has put together the first of several art exhibits that make use of biologically-inspired forms of computing called artificial neural networks. The most famous of these is Deep Dream, an algorithm that takes everyday images of, say, clouds, and enhances contours until it’s sussed out hidden pig-snails and camel-birds. (Here’s how that works.)
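The core trick behind Deep Dream is simple to sketch: run gradient ascent on the *image itself*, nudging pixels so that some layer of the network responds more strongly to whatever it already faintly detects. The real system uses a deep convolutional network (Google used Inception) and backpropagation; in the toy sketch below, a single hand-made edge filter stands in for one "neuron" of that network, which is purely an illustrative assumption.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation (one toy 'network layer')."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def dream_step(img, kernel, lr=0.1):
    """One gradient-ascent step on the layer's mean activation.

    Because the activation is linear in the pixels, its gradient is the
    kernel spread back over every window each pixel participated in.
    """
    act = conv2d(img, kernel)
    grad = np.zeros_like(img)
    kh, kw = kernel.shape
    for i in range(act.shape[0]):
        for j in range(act.shape[1]):
            grad[i:i + kh, j:j + kw] += kernel / act.size
    img = img + lr * grad / (np.abs(grad).max() + 1e-8)
    return np.clip(img, 0.0, 1.0)   # keep pixels in valid range

rng = np.random.default_rng(0)
img = rng.random((32, 32))          # stand-in for a photo of clouds
edge = np.array([[1., 0., -1.],
                 [2., 0., -2.],
                 [1., 0., -1.]])    # Sobel-like vertical-edge "neuron"

before = conv2d(img, edge).mean()
for _ in range(20):
    img = dream_step(img, edge)
after = conv2d(img, edge).mean()
```

After twenty steps the mean activation has risen: the image has been pushed toward whatever the filter "sees". Swap the single edge filter for a layer deep inside a trained convnet and the same loop hallucinates pig-snails instead of vertical stripes.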
But the exhibit also features other neural network-based tools, including style transfer, which "uses neural representations to separate and recombine content and style of arbitrary images". This allows an artist to mash up a Manet and a Picasso the way a DJ might mix a house track with a pop song.
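That "separate and recombine" phrase translates into a surprisingly compact objective: the output image is optimized so its features match the *content* image directly, while the correlations between its features (Gram matrices) match those of the *style* image. A real implementation extracts features from several layers of a trained VGG network; in this sketch a fixed random linear map stands in for one layer, an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
features = rng.standard_normal((8, 64))   # 8 toy "filters" over 64 pixels

def extract(img):
    """Feature responses: stand-in for one conv layer's activations."""
    return features @ img.ravel()

def gram(f):
    """Gram matrix: correlations between filter responses = 'style'."""
    f = f.reshape(-1, 1)
    return f @ f.T

def transfer_loss(out, content_img, style_img, alpha=1.0, beta=1e-3):
    """Weighted sum of content loss and style loss (Gatys et al. form)."""
    f_out = extract(out)
    content_loss = np.sum((f_out - extract(content_img)) ** 2)
    style_loss = np.sum((gram(f_out) - gram(extract(style_img))) ** 2)
    return alpha * content_loss + beta * style_loss

content = rng.random((8, 8))   # stand-in for the "Manet"
style = rng.random((8, 8))     # stand-in for the "Picasso"

# Starting from the content image, only the style term is nonzero;
# minimizing the combined loss pulls the output toward both at once.
loss_at_content = transfer_loss(content, content, style)
loss_at_match = transfer_loss(content, content, content)
```

Gradient descent on `transfer_loss` with respect to the output pixels (by autodiff in practice) is all the "DJ mixing" there is: turn `alpha` up and the Manet's scene dominates; turn `beta` up and the Picasso's brushwork does.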
The exhibition, which takes place tomorrow night at the Gray Area Art & Technology Theater in San Francisco, includes 29 neural network artworks created by artists at Google and around the world. It's a one-off event: tickets are limited, and the pieces will be auctioned to the highest bidders. But for those who can't make the trip, the Gray Area Foundation has agreed to share a sneak peek with us.
Here is the future of fine art. Embrace it, or be destroyed in the robot apocalypse.