David Dao

Juergen Schmidhuber's Talk at our Deep Learning Event! Check it out. He is a great speaker.
How to Learn an Algorithm (video). I review three decades of our research on both gradient-based and more general problem solvers that search the space of algorithms running on general-purpose computers with internal memory. Architectures include traditional computers, Turing machines, recurrent neural networks, fast weight networks, stack machines, and others. Some of our algorithm searchers are based on algorithmic information theory and are optimal in asymptotic or other senses. Most can learn to direct internal and external spotlights of attention. Some are self-referential and can even learn the learning algorithm itself (recursive self-improvement). Without a teacher, some can reinforcement-learn to solve very deep algorithmic problems (involving billions of steps) that are infeasible for more recent memory-based deep learners. And algorithms learned by our Long Short-Term Memory (LSTM) recurrent networks defined the state of the art in handwriting recognition, speech recognition, natural language processing, machine translation, image caption generation, and more. Google and other companies have made them available to over a billion users.
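For readers unfamiliar with the LSTM networks mentioned above, here is a minimal sketch of a single LSTM cell step. This is an illustrative pure-Python version, not Schmidhuber's original formulation or any production implementation; the function name, gate ordering, and weight layout are arbitrary assumptions for the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a basic LSTM cell (illustrative sketch).

    x: input vector; h_prev, c_prev: previous hidden and cell states.
    W: weight matrix of shape (4H, len(x)+H); b: bias vector of length 4H.
    Assumed gate order in W and b: input, forget, output, candidate.
    """
    H = len(h_prev)
    xh = list(x) + list(h_prev)  # concatenate input and previous hidden state
    # Pre-activations for all four gates in one matrix-vector product
    z = [sum(w * v for w, v in zip(row, xh)) + bi for row, bi in zip(W, b)]
    i = [sigmoid(v) for v in z[:H]]        # input gate: how much new info to write
    f = [sigmoid(v) for v in z[H:2*H]]     # forget gate: how much old memory to keep
    o = [sigmoid(v) for v in z[2*H:3*H]]   # output gate: how much memory to expose
    g = [math.tanh(v) for v in z[3*H:]]    # candidate cell update
    # Cell state: gated mix of old memory and new candidate (the "long-term" memory)
    c = [fi * ci + ii * gi for fi, ci, ii, gi in zip(f, c_prev, i, g)]
    # Hidden state: gated read-out of the cell state
    h = [oi * math.tanh(ci) for oi, ci in zip(o, c)]
    return h, c
```

The forget gate's multiplicative control over the cell state is what lets gradients flow across many time steps, which is the key property behind LSTM's success on the sequence tasks listed above.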

The video was recorded on Oct 7, 2015, during MICCAI 2015 at the Deep Learning Meetup Munich. Link to video:

A similar talk was given at the Deep Learning London Meetup on Nov 4, 2015 (video not quite ready yet):

Most of the slides for these talks are here:

These also include slides for the AGI keynote in Berlin, the IEEE distinguished lecture in Seattle (Microsoft Research, Amazon), the INNS BigData plenary talk in San Francisco, the keynote for the Swiss eHealth summit, two MICCAI 2015 workshops, and a recent talk for CERN (some of the above were videotaped as well).

Parts of these talks (and some of the slides) are also relevant for upcoming talks in the NYC area (Dec 4-6 and 13-16) and at NIPS workshops in Montreal:

1. Reasoning, Attention, Memory (RAM) Workshop, NIPS 2015

2. Deep Reinforcement Learning Workshop, NIPS 2015

3. Applying (machine) Learning to Experimental Physics (ALEPH) Workshop, NIPS 2015

More videos:

Also available now: Scholarpedia article on Deep Learning:

Finally, a recent arXiv preprint: On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models.



My Google Summer of Code Project 2014 for BioJS 

I will give a tutorial on BioJS at ECCB in Strasbourg next month :) - don't miss it!