Daniel Nouri
1,240 followers
Machine Learning, Analytics, Software Development
Daniel's posts

Post has shared content
All of these images were computer generated!

For the last few weeks, Googlers have been obsessed with an internal visualization tool that Alexander Mordvintsev in our Zurich office created to help us visually understand some of the things happening inside our deep neural networks for computer vision.  The tool essentially starts with an image, runs the model forwards and backwards, and then makes adjustments to the starting image in weird and magnificent ways.  

In the same way that, when you stare at clouds, you can convince yourself that part of a cloud looks like a head, maybe with some ears, and your mind then reinforces that impression by finding even more parts that fit the story ("wow, now I even see arms and a leg!"), the optimization process works in a similar manner, reinforcing what it thinks it is seeing.  Since the model is very deep, we can tap into it at various levels and get all kinds of remarkable effects.
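The post itself has no code, but the loop it describes (run the model forward, backpropagate to the input, nudge the image so that some layer's activations grow) can be sketched roughly.  Below is a minimal illustration using PyTorch and a pretrained torchvision GoogLeNet; the choice of layer, step size, and iteration count are arbitrary assumptions, not the internal tool's settings.

  # Rough sketch of gradient ascent on the input image (not the actual internal tool).
  import torch
  import torchvision.models as models

  model = models.googlenet(pretrained=True).eval()
  acts = {}
  layer = model.inception4c                             # arbitrary layer to "tap into"
  layer.register_forward_hook(lambda m, i, o: acts.update(out=o))

  img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise or a photo
  opt = torch.optim.SGD([img], lr=0.05)

  for _ in range(100):
      opt.zero_grad()
      model(img)
      loss = -acts["out"].norm()                        # maximize that layer's activations
      loss.backward()
      opt.step()
      img.data.clamp_(0, 1)                             # keep pixel values in a valid range

Hooking a lower or higher layer in the same loop changes whether simple textures or whole "objects" get reinforced, which is the effect described above.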

Alexander, +Christopher Olah, and Mike Tyka wrote up a very nice blog post describing how this works:

http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html

There's also a bigger album of more of these pictures linked from the blog post:

https://goo.gl/photos/fFcivHZ2CDhqCkZdA

I just picked a few of my favorites here.
(Album of 18 photos, posted 2015-06-17)

Post has shared content
Learning to Execute and Neural Turing Machines

I'd like to draw your attention to two papers that have been posted in the last few days from some of my colleagues at Google that I think are pretty interesting and exciting:

  Learning to Execute: http://arxiv.org/abs/1410.4615

  Neural Turing Machines: http://arxiv.org/abs/1410.5401

The first paper, "Learning to Execute", by +Wojciech Zaremba and +Ilya Sutskever, attacks the problem of training a neural network to take in a small Python program, one character at a time, and predict its output.  For example, as input it might take:

"i=8827
c=(i-5347)
print((c+8704) if 2641<8500 else 5308)"

During training, the model is told that the desired output for this program is "12184".  At inference time, though, the model generalizes to completely new programs and does a pretty good job of learning a simple Python interpreter from examples.
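To get a feel for the data the model sees, here is a rough sketch of how (program, target) pairs in the style quoted above could be generated; the actual program templates and sampling scheme in the paper are richer than this.

  # Rough sketch: generate one (program, target) training pair in the style quoted above.
  import random

  def make_example():
      i = random.randint(1000, 9999)
      d = random.randint(1000, 9999)
      a = random.randint(1000, 9999)
      b = random.randint(1, 9999)
      c2 = random.randint(1000, 9999)
      program = "i=%d\nc=(i-%d)\nprint((c+%d) if %d<8500 else %d)" % (i, d, a, b, c2)
      target = str((i - d + a) if b < 8500 else c2)
      return program, target      # both are handled one character at a time

  prog, out = make_example()
  print(prog)
  print("target:", out)

The target string is what the model is trained to emit after reading the final character of the program.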


The second paper, "Neural Turing Machines", by +Alex Graves, Greg Wayne, and +Ivo Danihelka from Google's DeepMind group in London, couples an external memory ("the tape") with a neural network in such a way that the whole system, including the memory access, is differentiable end-to-end.  This allows the system to be trained via gradient descent, and it is able to learn a number of interesting algorithms, including copying, priority sorting, and associative recall.
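The essential trick is that reads and writes are "soft": instead of picking a single memory slot, the controller emits a normalized weighting over all slots, so a read is a weighted sum and gradients can flow through it.  A rough numpy sketch of a content-based read (one of the addressing mechanisms in the paper, with made-up shapes) looks like this:

  import numpy as np

  def content_read(memory, key, beta=1.0):
      # memory: (N, M) array of N slots; key: (M,) query from the controller.
      sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
      w = np.exp(beta * sims)            # sharpen by beta, then normalize (softmax)
      w /= w.sum()
      return w @ memory                  # weighted sum over slots: a differentiable "read"

  M = np.random.randn(8, 4)              # 8 memory slots of width 4
  print(content_read(M, M[3], beta=5.0)) # querying with slot 3's content reads back ~slot 3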

Both of these are interesting steps toward systems that learn more complex behavior, such as entire algorithms, rather than just individual functions.

(Edit: changed link to Learning to Execute paper to point to the top-level Arxiv HTML page, rather than to the PDF).

Post has shared content
Animal species are going extinct at anywhere from 100 to 1,000 times the rate that would be expected under natural conditions. According to Elizabeth Kolbert's The Sixth Extinction and other recent studies, the increase results from a variety of human-caused effects, including climate change, habitat destruction, and species displacement. Today's extinction rates rival those of the mass extinction event that wiped out the dinosaurs 65 million years ago.

Post has shared content
Oh, boy! NVIDIA launches cuDNN, a GPU-accelerated deep neural network library. It's already available with Caffe v1.0.

Post has shared content
This summer, I’m interning at Spotify in New York City, where I’m working on content-based music recommendation using convolutional neural networks. I wrote a blog post to explain my approach and show some preliminary results.

Post has shared content
Chris Olah has written a brief but beautiful, pedagogical tutorial on the principles, motivations, and amazing results obtained with word embeddings. It is a must-read for anyone newly interested in deep learning for NLP, and also worth reading for experts.
http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/

Post has shared content
Deep content-based music recommendation

We trained a latent factor model on listening data from one million users for just under 400k songs, and then trained a deep convolutional neural network to predict the latent factors from audio. We showed that we can make sensible recommendations using these predicted factors, despite the large semantic gap between the characteristics of a song that affect user preference and the corresponding audio signal. We used the Million Song Dataset for this work.

Below is a t-SNE visualization of the distribution of predicted usage patterns, using latent factors predicted from audio. A few close-ups show artists whose songs are projected in specific areas. 
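A map like this can be reproduced in outline by running t-SNE on the predicted factor vectors; the sketch below uses scikit-learn with made-up array shapes and a default-ish perplexity, not the settings used for the figure.

  import numpy as np
  from sklearn.manifold import TSNE
  import matplotlib.pyplot as plt

  factors = np.random.randn(2000, 40)   # stand-in for per-song predicted latent factors
  xy = TSNE(n_components=2, perplexity=30).fit_transform(factors)
  plt.scatter(xy[:, 0], xy[:, 1], s=2)  # each point is one song in the 2-D embedding
  plt.title("t-SNE of predicted latent factors")
  plt.show()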

We will be demonstrating our approach at NIPS 2013: users will be able to specify YouTube clips. The demo will predict factors for these clips and try to find other clips with similar predicted usage patterns in a large database of 600,000 songs (a subset of the Million Song Dataset).
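The lookup in that demo amounts to nearest-neighbor search in the predicted factor space; a rough sketch, assuming cosine similarity and a made-up factor dimensionality (neither detail is from the paper), follows:

  import numpy as np

  def most_similar(query_factors, database_factors, k=5):
      # query_factors: (D,) factors predicted from a clip's audio.
      # database_factors: (num_songs, D) predicted factors for the song database.
      q = query_factors / np.linalg.norm(query_factors)
      db = database_factors / np.linalg.norm(database_factors, axis=1, keepdims=True)
      sims = db @ q                      # cosine similarity to every song
      return np.argsort(-sims)[:k]       # indices of the k closest songs

  db = np.random.randn(10000, 40)        # stand-in for a database of predicted factors
  print(most_similar(db[0], db))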

This is work with +Aäron van den Oord and +Benjamin Schrauwen.

Paper: http://bit.ly/1csNpuG
Demo link: https://nips.cc/Conferences/2013/Program/event.php?ID=4174
(Image: t-SNE visualization of predicted usage patterns)

Post has shared content
I'll be talking about classifying galaxies with deep learning at the Machine Learning meetup in Berlin next Tuesday. It will be a remote presentation seeing as I'm in New York for the summer.