Profile

Daniel Nouri
Lives in Berlin
1,235 followers | 124,005 views

Stream

Daniel Nouri

Shared publicly  - 
 
 
Blog post on KDnuggets clarifying some misconceptions about adversarial examples: http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html
Google scientist clarifies misconceptions and myths around Deep Learning Adversarial Examples, including: that they do not occur in practice, that Deep Learning is more vulnerable to them, that they can be easily solved, and that human brains make similar mistakes. By Ian Goodfellow (Google).
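For readers wondering what an adversarial example actually looks like, here is a minimal numpy sketch of the fast gradient sign method from Goodfellow's work on this topic, applied to a toy logistic-regression classifier. The model, data, and epsilon below are made up for illustration, not taken from the article:

import numpy as np

rng = np.random.RandomState(0)
w = rng.randn(784) * 0.1          # weights of a (toy) trained classifier
b = 0.0
x = rng.rand(784)                 # an input the classifier sees
y = 1.0                           # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the input x.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take a small step in the direction of the sign of that gradient.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

print("prediction on x:     %.3f" % sigmoid(w @ x + b))
print("prediction on x_adv: %.3f" % sigmoid(w @ x_adv + b))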

Daniel Nouri

Discussion  - 
This is a hands-on tutorial on deep learning. Step by step, we'll go about building a solution for the Facial Keypoint Detection Kaggle challenge. The tutorial introduces Lasagne, a new library for building neural networks with Python and Theano. We'll use Lasagne to implement a couple of ...
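For a taste of what the tutorial builds, here is a minimal Lasagne/Theano sketch of a small convolutional network that regresses 30 facial-keypoint coordinates from 96x96 grayscale images. Shapes and hyperparameters are illustrative only and not taken from the tutorial itself (which uses nolearn on top of Lasagne):

import numpy as np
import theano
import theano.tensor as T
import lasagne

X = T.tensor4('X')
y = T.matrix('y')

net = lasagne.layers.InputLayer((None, 1, 96, 96), input_var=X)
net = lasagne.layers.Conv2DLayer(net, num_filters=16, filter_size=(3, 3))
net = lasagne.layers.MaxPool2DLayer(net, pool_size=(2, 2))
net = lasagne.layers.DenseLayer(net, num_units=100)
net = lasagne.layers.DenseLayer(net, num_units=30, nonlinearity=None)  # linear output for regression

prediction = lasagne.layers.get_output(net)
loss = lasagne.objectives.squared_error(prediction, y).mean()
params = lasagne.layers.get_all_params(net, trainable=True)
updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.01, momentum=0.9)

train_fn = theano.function([X, y], loss, updates=updates)

# One training step on random data, just to show the plumbing.
Xb = np.random.rand(32, 1, 96, 96).astype('float32')
yb = np.random.rand(32, 30).astype('float32')
print(train_fn(Xb, yb))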
 
Very cool. Regarding data augmentation, have you tried image rotations? How could that be implemented?
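One possible way to do it (a hedged sketch, not from this thread or the tutorial): rotate the pixels with scipy and apply the same rotation to the keypoint coordinates around the image centre. The sign convention depends on how your coordinates are laid out, so it should be checked visually:

import numpy as np
from scipy.ndimage import rotate

def rotate_example(image, keypoints, angle_deg):
    # image: (H, W) array; keypoints: (N, 2) array of (x, y) pixel coordinates.
    rotated = rotate(image, angle_deg, reshape=False, mode='nearest')
    h, w = image.shape
    centre = np.array([w / 2.0, h / 2.0])
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Rotate the keypoints around the centre; flip the sign of theta if the
    # transformed points do not line up with the rotated image.
    new_keypoints = (keypoints - centre) @ rot.T + centre
    return rotated, new_keypoints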

Daniel Nouri

Shared publicly  - 
 
 
Animal species are going extinct at anywhere from 100 to 1,000 times the rate that would be expected under natural conditions. According to Elizabeth Kolbert's The Sixth Extinction and other recent studies, the increase results from a variety of human-caused effects including climate change, habitat destruction, and species displacement. Today's extinction rates rival those during the mass extinction event that wiped out the dinosaurs 65 million years ago.
Animals are dying out at a rate that rivals the dinosaur era. We catalogued the species at risk.

Daniel Nouri

Shared publicly  - 
 
 
This summer, I’m interning at Spotify in New York City, where I’m working on content-based music recommendation using convolutional neural networks. I wrote a blog post to explain my approach and show some preliminary results.

Daniel Nouri

Shared publicly  - 
 
 
Deep content-based music recommendation

We trained a latent factor model on listening data from one million users for just under 400k songs, and then trained a deep convolutional neural network to predict the latent factors from audio. We showed that we can make sensible recommendations using these predicted factors, despite the large semantic gap between the characteristics of a song that affect user preference and the corresponding audio signal. We used the Million Song Dataset for this work.
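A hedged sketch of the overall recipe, not the paper's actual model: learn a mapping from audio features to the collaborative-filtering latent factors, then recommend by nearest neighbours in the predicted factor space. A linear regressor stands in for the deep convolutional network, and random arrays stand in for real features and factors:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
n_songs, n_audio_feats, n_factors = 1000, 128, 40
X = rng.randn(n_songs, n_audio_feats)          # stand-in audio features
Y = rng.randn(n_songs, n_factors)              # stand-in latent factors from listening data

model = Ridge(alpha=1.0).fit(X, Y)             # the paper uses a ConvNet here instead
Y_pred = model.predict(X)

nn = NearestNeighbors(n_neighbors=6).fit(Y_pred)
_, idx = nn.kneighbors(Y_pred[:1])             # songs "similar" to song 0
print(idx)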

Below is a t-SNE visualization of the distribution of predicted usage patterns, using latent factors predicted from audio. A few close-ups show artists whose songs are projected in specific areas. 
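Producing a map like that comes down to running t-SNE on the matrix of predicted factors; a minimal scikit-learn/matplotlib sketch, with random placeholder factors standing in for the real predictions:

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

Y_pred = np.random.RandomState(0).randn(1000, 40)   # placeholder predicted factors
xy = TSNE(n_components=2, random_state=0).fit_transform(Y_pred)

plt.scatter(xy[:, 0], xy[:, 1], s=3)
plt.title("t-SNE of predicted latent factors")
plt.show()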

We will be demonstrating our approach at NIPS 2013: users will be able to specify YouTube clips. The demo will predict factors for these clips and try to find other clips with similar predicted usage patterns in a large database of 600,000 songs (a subset of the Million Song Dataset).

This is work with +Aäron van den Oord and +Benjamin Schrauwen

Paper: http://bit.ly/1csNpuG
Demo link: https://nips.cc/Conferences/2013/Program/event.php?ID=4174

Daniel Nouri

Discussion  - 
 
So here's my little ConvNet Twitter bot that will classify your wildflower images and tell you which species you're looking at.  It currently knows around 150 flowers, mostly from Central Europe.  Use Twitter's built-in media upload and address the bot in your tweet using its name "@WildflowerID" to get an answer.
The latest from Wildflower bot (@WildFlowerID). Send me a pic of a wild flower, and I tell you which one it is. I currently know about 150 species from *Central Europe*. I'm a bot made by @dnouri and Teemu. Berlin

Daniel Nouri

Shared publicly  - 
 
 
All of these images were computer generated!

For the last few weeks, Googlers have been obsessed with an internal visualization tool that Alexander Mordvintsev in our Zurich office created to help us visually understand some of the things happening inside our deep neural networks for computer vision.  The tool essentially starts with an image, runs the model forwards and backwards, and then makes adjustments to the starting image in weird and magnificent ways.  

It works much like staring at clouds: you convince yourself that some part of the cloud looks like a head, maybe with some ears, and your mind starts to reinforce that impression by finding even more parts that fit the story ("wow, now I even see arms and a leg!"). The optimization process behaves in a similar way, reinforcing what it thinks it is seeing.  Since the model is very deep, we can tap into it at various levels and get all kinds of remarkable effects.
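The core loop can be sketched in a few lines. Here is a hedged toy version in Theano that uses a single random convolution layer as a stand-in for a trained network and repeatedly nudges the image to amplify whatever that layer responds to; this is illustrative only, not the internal tool described above:

import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

rng = np.random.RandomState(0)
filters = theano.shared(rng.randn(8, 3, 5, 5).astype('float32'))  # stand-in weights

x = T.tensor4('x')                          # the image being modified
activation = T.maximum(conv2d(x, filters), 0)
objective = T.mean(activation ** 2)         # amplify whatever the layer "sees"
grad = T.grad(objective, x)                 # backward pass w.r.t. the image itself
step = theano.function([x], [objective, grad])

image = rng.rand(1, 3, 224, 224).astype('float32')
for i in range(20):
    obj, g = step(image)
    image += g / (np.abs(g).mean() + 1e-8)  # normalised gradient ascent step
    print(i, float(obj))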

Alexander, +Christopher Olah, and Mike Tyka wrote up a very nice blog post describing how this works:

http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html

There's also a bigger album of more of these pictures linked from the blog post:

https://goo.gl/photos/fFcivHZ2CDhqCkZdA

I just picked a few of my favorites here.

Daniel Nouri

Shared publicly  - 
 
 
Learning to Execute and Neural Turing Machines

I'd like to draw your attention to two papers that have been posted in the last few days from some of my colleagues at Google that I think are pretty interesting and exciting:

  Learning to Execute: http://arxiv.org/abs/1410.4615

  Neural Turing Machines: http://arxiv.org/abs/1410.5401

The first paper, "Learning to Execute", by +Wojciech Zaremba and +Ilya Sutskever attacks the problem of trying to train a neural network to take in a small Python program, one character at a time, and to predict its output.  For example, as input, it might take:

"i=8827
c=(i-5347)
print((c+8704) if 2641<8500 else 5308)"

During training, the model is given that the desired output for this program is "12184".  During inference, though, the model is able to generalize to completely new programs and does a pretty good job of learning a simple Python interpreter from examples.
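A hedged sketch of how (program, target) pairs of this flavour can be generated for training: sample a tiny arithmetic program, execute it, and record whatever it prints as the target string. This is illustrative only, not the paper's data-generation code:

import io
import contextlib
import random

def sample_program(rng):
    a, b, c = rng.randint(1000, 9999), rng.randint(100, 999), rng.randint(1000, 9999)
    return "i=%d\nc=(i-%d)\nprint((c+%d))" % (a, b, c)

rng = random.Random(0)
program = sample_program(rng)
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)                       # run the program to obtain its output
target = buf.getvalue().strip()
print(repr(program), "->", target)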


The second paper, "Neural Turing Machines", by +alex graves, Greg Wayne, and +Ivo Danihelka from Google's DeepMind group in London, couples an external memory ("the tape") with a neural network in a way that the whole system, including the memory access, is differentiable from end-to-end.  This allows the system to be trained via gradient descent, and the system is able to learn a number of interesting algorithms, including copying, priority sorting, and associative recall.
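To make the "differentiable memory access" concrete, here is a hedged numpy sketch of a content-based read: attention weights over memory rows computed from cosine similarity, then a weighted-sum read instead of a hard lookup, so gradients can flow through the addressing. Shapes and values are illustrative:

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.RandomState(0)
memory = rng.randn(128, 20)        # N memory slots of width M
key = rng.randn(20)                # content key emitted by the controller
beta = 5.0                         # key strength

# Content-based addressing: cosine similarity between the key and each slot.
sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
weights = softmax(beta * sims)     # differentiable "address" over memory rows

read_vector = weights @ memory     # weighted sum instead of a hard lookup
print(read_vector.shape)           # (20,)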

Both of these are interesting steps along the way toward having systems learn more complex behavior, such as entire algorithms, rather than being used just for learning functions.

(Edit: changed link to Learning to Execute paper to point to the top-level Arxiv HTML page, rather than to the PDF).
Abstract: We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be ...

Daniel Nouri

Shared publicly  - 
 
 
Oh, boy! NVIDIA launches cuDNN, a GPU-accelerated deep neural network library. It's already available with Caffe v1.0.
Machine Learning (ML) has its origins in the field of Artificial Intelligence, which started out decades ago with the lofty goals of creating a computer that could do any work a human can do.  Whil...

Daniel Nouri

Shared publicly  - 
 
 
Chris Olah has written a brief but beautiful and pedagogical tutorial on the principles, motivations and amazing results obtained with word embeddings. This is a must-read for those who are newly interested in deep learning for NLP and also worth reading for the experts.
http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
Introduction. In the last few years, deep neural networks have dominated pattern recognition. They blew the previous state of the art out of the water for many computer vision tasks. Voice recognition is also moving that way. But despite the results, we have to wonder… why do they work so well?
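A tiny numpy illustration of the kind of structure tutorials like this one discuss: word vectors in which relations show up as vector offsets, queried by cosine similarity. The vectors below are made up for illustration, not learned embeddings:

import numpy as np

vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8]),
}

query = vecs["king"] - vecs["man"] + vecs["woman"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

best = max(vecs, key=lambda w: cosine(vecs[w], query))
print(best)   # "queen" with these toy vectors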

Daniel Nouri

Shared publicly  - 
 
 
I'll be talking about classifying galaxies with deep learning at the Machine Learning meetup in Berlin next Tuesday. It will be a remote presentation seeing as I'm in New York for the summer.
1. Classifying galaxies with deep learning (Sander Dieleman, 45min) Deep learning has become a very popular approach for solving computer vision problems in recent years, with record-breaking results in object classification and detection. In this talk we'll explore a different but related application: galaxy morphology prediction. By automatically classifying galaxies based on their shape, astronomers can come to new insights about their origin...

Daniel Nouri

Shared publicly  - 
 
So here's my little ConvNet Twitter bot that will classify your wildflower images and tell you which species you're looking at.  It currently knows around 150 flowers, mostly from Central Europe.  Use Twitter's built-in media upload and address the bot in your tweet using its name "@WildflowerID" to get an answer.
The latest from Wild Flower ID (@WildFlowerID). Send me a pic of a wild flower, and I tell you which one it is. I currently know about 150 species from Central Europe. I'm a bot made by Daniel and Teemu. Berlin
People
Have him in circles
1,235 people
gao young
Antonio Sagliocco
Larry Hernandez
Luca Fabbri
Godefroid Chapelle
Jayson St Jean
Sp Saly
Reinout van Rees
Hans “hansemann” Bickhofe
Places
Currently
Berlin
Previously
Innsbruck - Porto - Rotterdam - Den Haag - Copenhagen
Story
Tagline
Machine Learning, Analytics, Software Development
Basic Information
Gender
Male