Google Labs Deep Dream Fractal Animation

July 7, 2015
by Tim Rabjohns


I really wanted to share this new project coming out of Google Labs at the moment called Deep Dream.  Not only is this fractal-style imagery amazingly trippy to look at, but it’s been developed as part of a project that looks at neural networks.  Why should we care about this, you might ask?  Well, it’s all part of research into how the human brain processes images and delivers visual recognition to us.

An artificial neural network can be thought of as a massively simplified version of the brain.  Information is stored in the network as ‘weights’ (strengths) of connections between neurons.  Low layers (i.e. closer to the input, the ‘eyes’) store and recognise low-level abstract features, e.g. corners, edges and orientations, while higher layers store and recognise higher-level features.  All of this is similar to how information is stored in the mammalian brain.
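To make that layering a little more concrete, here is a minimal sketch in Python, assuming PyTorch is available. The toy network and the comments about what each layer picks up are illustrative, not Google’s actual model:

```python
import torch.nn as nn

# A toy convolutional network: in a trained network of this shape, early
# layers tend to respond to low-level features (edges, corners,
# orientations) and later layers to higher-level structure.
toy_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low layer: edge/corner-like detectors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid layer: textures, simple patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # high layer: object-part-like features
    nn.ReLU(),
)

# The network's "knowledge" lives entirely in these connection weights:
for name, param in toy_net.named_parameters():
    print(name, tuple(param.shape))
```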

[Image: Google Deep Dream for dotmogo]

In our brains, our neural networks are ‘trained’ on millions of images from birth.  When the network is fed a new, unknown image, e.g. a new person, it tries to make sense of or recognise this new image in the context of what it already knows.  This can be thought of as asking the network “Based on what you’ve seen and what you know, what do you think this is?”, and is analogous to you recognising objects in clouds or in Rorschach inkblot tests.

The effect is further exaggerated by encouraging the algorithm to generate an image of what it ‘thinks’ it is seeing and feeding that image back into the input (which is what you see in this video).  The network is then asked to re-evaluate, creating a positive feedback loop that reinforces the biased misinterpretation.
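In code, that feedback loop amounts to gradient ascent on the input image itself. Here is a minimal sketch, assuming PyTorch and torchvision rather than the Caffe setup Google used; the layer name, file name, iteration count and step size are all illustrative:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained GoogLeNet stands in for the network in Google's post.
model = models.googlenet(weights="DEFAULT").eval()

# Capture the activations of one chosen layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out)
)

# "input.jpg" is a placeholder for whatever photo you want to dream on.
img = T.ToTensor()(Image.open("input.jpg").convert("RGB").resize((224, 224)))
img = img.unsqueeze(0).requires_grad_(True)

for step in range(20):
    model(img)
    # Ask the network to "see more" of whatever this layer detected...
    activations["target"].norm().backward()
    with torch.no_grad():
        # ...then nudge the image itself in that direction and repeat,
        # which is the positive feedback loop described above.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)  # keep pixel values in a valid range
```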

Suffice to say, it’s quite a complex process that all the image processing goes through – make sure you read the full Google blogpost if you are interested in geeking out on this.

[Image: Deep Dream visualisation from Google Labs for dotmogo]

If we let the AI make the decision, we can feed the network an arbitrary image or photo and let the network analyse the picture.  We then pick a layer and ask the network to enhance whatever it detected.  Each layer of the network deals with features at a different level of abstraction, so the complexity of the features we generate depends on which layer we choose to enhance.  For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.
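Building on the sketch above, the layer choice can simply be made a parameter. Again this assumes torchvision’s GoogLeNet, with module names picked for illustration (Google’s original code used Caffe layer names):

```python
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

def enhance(layer, image, steps=10, step_size=0.01):
    """Gradient-ascend `image` so that `layer`'s activations grow."""
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model(image)
        acts["out"].norm().backward()
        with torch.no_grad():
            image += step_size * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    handle.remove()
    return image.detach()

x = torch.rand(1, 3, 224, 224)           # stand-in for any photo tensor
strokes = enhance(model.inception3a, x)   # low layer: strokes, simple patterns
objects = enhance(model.inception5b, x)   # high layer: object-like shapes
```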

What I really want to know is: could this imaging process be used commercially and artistically?  By feeding in various images and then letting Deep Dream do its work on them… my mind is boggling at the possibilities – anyone else got any ideas on this?

You can find out more detail in this Google blogpost on Deep Dream.

All based on the Google research by Alexander Mordvintsev, Software Engineer; Christopher Olah, Software Engineering Intern; and Mike Tyka, Software Engineer.
