Deep Dreaming Fear & Loathing in Las Vegas: the Great San Francisco Acid Wave





Published on Jul 3, 2015

Deep neural network hallucinating Fear & Loathing in Las Vegas: how meta is that? Visualizing the internals of a deep net, we let it develop further what it thinks it sees.
code: https://github.com/graphific/DeepDrea...

Another investigation of mainly landscapes: https://www.youtube.com/watch?v=6IgbM...
2001: A Space Odyssey: https://www.youtube.com/watch?v=tbTJH...

We're using the #deepdream technique developed by Google and others: http://googleresearch.blogspot.nl/201... & code: https://github.com/google/deepdream

parameters used (and useful to play with):
- network: standard reference GoogLeNet model trained on ImageNet from the Caffe Model Zoo (https://github.com/BVLC/caffe/wiki/Mo...)
- iterations: 5
- jitter: 32 (default)
- octaves: 4 (default)
- layers: cycled upwards from inception_4c/output to inception_5b/output and back again (only the output layers, as they are most sensitive for visualizing "objects", whereas the reduce layers act more like "edge detectors")
- blending: each next unprocessed frame of the movie clip is blended with the previous processed frame before being "dreamed" on, with the alpha moving from 0.5 up to 1 and back again (so from 50% previous net-created image and 50% raw movie frame, up to 100% raw movie frame only). This counters "overfitting" on the frames and makes sure we don't iteratively build up more and more "hallucinations" of the net and drift away from the original movie clip.
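The blending step above can be sketched as follows. This is a minimal illustration, not the code from the linked repo: `alpha_schedule` and `blend` are hypothetical helper names, and the actual deepdream call (which would process each blended frame through the network) is omitted since it depends on the Caffe setup.

```python
import numpy as np

def alpha_schedule(n_frames, lo=0.5, hi=1.0, period=100):
    """Triangular wave moving alpha from lo up to hi and back again,
    repeating every `period` frames (period is an assumed parameter)."""
    half = period // 2
    alphas = []
    for i in range(n_frames):
        phase = i % period
        # rising half of the cycle, then falling half
        t = phase / half if phase < half else (period - phase) / half
        alphas.append(lo + (hi - lo) * t)
    return alphas

def blend(prev_dreamed, frame, alpha):
    """Mix the previous processed frame into the next raw movie frame.
    alpha = 1.0 -> take 100% of the raw movie frame;
    alpha = 0.5 -> 50% previous dreamed output, 50% raw frame."""
    return alpha * frame + (1.0 - alpha) * prev_dreamed
```

In a frame loop, each blended result would be fed to the deepdream step, and its output kept as `prev_dreamed` for the next frame; keeping alpha above 0.5 ensures the raw footage always dominates the mix.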

