Imagine waking up in the middle of the night with this image fresh in your head:
At best you feel uneasy. At worst you're afraid of going back to sleep.
Why is it that we find this image so disturbing? Probably because disembodied eyeballs and creepy dog faces are not part of our reality.
Computers don't see it that way. Eyeballs, dog faces, cacti or agaves: it's all shapes and patterns to them, one no more or less strange than another.
NOTE: Please click on each image to see a larger version that shows much more detail.
As part of an ambitious project designed to improve object recognition in photos (code-named “Deep Dream”), Google trained an artificial neural network by feeding it a massive collection of 1.2 million images, each of them precisely classified as to what it represented (bird, building, dog, etc.).
After Deep Dream had “learned” these shapes, Google tested it by giving it new images to classify, i.e., to determine whether a photo showed a bird, a building, a dog, etc. As part of this process, Deep Dream was asked to visualize the patterns it “saw” in images so the developers could get a better understanding of what was going on in the various layers of the neural network. This is where things got interesting. Deep Dream produced bizarre creations—disembodied dog heads, reptilian legs, floating pagodas—attached to or embedded in the shapes that existed in the actual photo.
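To get an intuition for what is happening, here is a deliberately tiny, purely hypothetical sketch of the core trick: rather than adjusting the network to match the image (training), Deep Dream adjusts the *image* to excite the network. In this toy, a single made-up "detector" neuron is just a weighted sum of four pixels, and each gradient-ascent step nudges the pixels toward whatever the neuron already responds to. The pattern weights and step count are invented for illustration only.

```python
# Toy sketch of the Deep Dream idea: run gradient ascent on the IMAGE
# so that a neuron's activation gets stronger. The "neuron" here is a
# made-up 4-pixel pattern detector, not anything from Google's network.

PATTERN = [0.9, -0.2, 0.7, 0.4]   # hypothetical weights of one neuron

def activation(image):
    """How strongly the neuron responds: a simple dot product."""
    return sum(w * p for w, p in zip(PATTERN, image))

def dream_step(image, rate=0.1):
    # For a dot product, the gradient of the activation with respect to
    # each pixel is just the matching weight, so every step pushes the
    # image a little further toward the neuron's preferred pattern.
    return [p + rate * w for p, w in zip(image, PATTERN)]

image = [0.5, 0.5, 0.5, 0.5]      # a bland, featureless "photo"
for _ in range(20):
    image = dream_step(image)

# The neuron's activation has grown from 0.9 to about 3.9: the image
# now "contains" more of the pattern the neuron was looking for.
print(round(activation(image), 3))
```

Scale this up to millions of pixels and thousands of dog-face and bird-head detectors, and bland patches of sky start sprouting eyes and snouts.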
To us it may seem like a nightmare from the mind of Hieronymus Bosch, but to Deep Dream it’s just how it learns. And since Deep Dream must have been fed a disproportionately high number of dog and bird photos as part of its training, dog and bird heads—or parts thereof—show up in virtually every image “dreamt” up by the engine.
A few weeks ago Google released some of the Deep Dream code as open source. Several websites have since popped up that let you submit your own photos for Deep Dream processing. Needless to say, I was curious to see what Deep Dream would do to my own photos, so I uploaded some to http://deepdreams.zainshah.net/ (other sites are listed here).
The results are as bizarre as I had expected, sometimes even more so. On human faces the effects are even more unsettling because our sense of what looks "normal" is so narrow. A quick Google search will turn up scores of Deep Dream images of people, including politicians, celebrities, and folks like you and me. This selfie of yours truly is a good example:
As interesting as all of this is in the abstract, you might be wondering what this has to do with plants and gardening.
Not much at the moment, but potentially a lot down the line.
As object recognition advances thanks to projects like Google Deep Dream, the software of the future will be able to analyze the photos you take and classify them according to what they show: plants, buildings, people. Taking it a step further, the software may even be able to determine that a photo is of an agave, an aloe, or a cactus—and perhaps even which species.
Imagine how handy it will be to tell your computer, or smartphone, or whatever else we may be using then, to search through your thousands of photos and pull those that show a saguaro cactus at sunset, or an agave in a red container, or a flowering water lily, or your sleeping dog. The search results will be based on the actual image content, analyzed and recognized by software, instead of text keywords you laboriously entered yourself (the only way this kind of search could be done in the past).
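Here is a rough sketch of what such a search might look like under the hood, assuming some future recognizer has already tagged each photo with labels. The filenames and tags below are hand-written stand-ins for what that software might produce; nothing here is a real product or API.

```python
# Sketch of content-based photo search: a hypothetical recognizer has
# labeled each photo, and a query simply asks for photos whose labels
# include every requested tag. All filenames and tags are invented.

photos = {
    "IMG_0231.jpg": {"saguaro", "cactus", "sunset"},
    "IMG_0487.jpg": {"agave", "red container"},
    "IMG_0512.jpg": {"water lily", "flower", "pond"},
    "IMG_0733.jpg": {"dog", "sleeping"},
}

def search(query_tags):
    """Return photos whose recognized content includes every query tag."""
    return sorted(name for name, tags in photos.items()
                  if query_tags <= tags)   # <= is set "is subset of"

print(search({"saguaro", "sunset"}))       # finds the cactus-at-sunset shot
```

The point is that the tags come from the image content itself, not from keywords a person typed in, which is exactly what would make this kind of search effortless.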
Or imagine taking a photo of a plant you don’t recognize and asking your computer to identify it. Like reverse image search in Google Images, but delivering infinitely more useful and accurate results.
All of that will be possible someday.
And maybe androids will dream of electric sheep after all.
The results are particularly mesmerizing on photos with lots of sky or even-toned blank space:
Picacho Peak State Park, Arizona
Our house in the fog
Bamboo in the front yard in the fog
Canada Day fireworks, Victoria, BC