Chris Hammerschmidt
Scientist and Risk Taker · math, cs and their applications


Post has attachment
Cron job was broken
I apologize for the lack of new posts; the cron job was broken. I'll fix it ASAP.

Post has shared content
I've just heard the terrible news that David MacKay has died of cancer aged 48. Although it was public news that he was ill, I had not heard about it: the link below is to his blog, which contains a number of incredible, moving, and humorous posts about the progress of his illness.

He was a hero of mine for more than one reason. For one thing, he was an expert on Bayesian reasoning, neural networks and other topics that I find fascinating, and wrote an excellent textbook on the subject. Better still, he made the book freely available online. (I myself bought a physical copy.) Here's the link:

But the main reason I admired him was that he campaigned for better action on climate change. He wrote a wonderful book on the subject in which he urged people to discuss it quantitatively and not just qualitatively. For example, it sticks in my mind that if you leave a phone charger on overnight, that will lead to carbon emissions roughly equivalent to those caused by two seconds of a car idling. (There are two ways of taking that. I have taken to switching my car engine off when I am at traffic lights -- and of course using my car as little as possible.) He became a government adviser, and although his advice was probably treated in the way politicians usually treat scientific advice, I can't imagine anyone better than him doing that job. He also practised what he preached, doing far more than most people to reduce his own personal carbon footprint. His climate book Sustainable Energy Without The Hot Air is also freely available online:  
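MacKay's point about quantitative discussion invites a back-of-envelope check of that charger comparison. Here is a minimal sketch in Python, using round illustrative figures that I've assumed (0.5 W idle charger draw, 0.5 kg CO2 per kWh of grid electricity, 1 L/h of petrol while idling, 2.3 kg CO2 per litre of petrol), not numbers taken from the post or the book:

```python
# Back-of-envelope check: overnight idle charger vs. a few seconds of idling.
# All constants below are illustrative assumptions, not sourced figures.

CHARGER_W = 0.5          # watts drawn by an idle phone charger (assumed)
GRID_KG_PER_KWH = 0.5    # kg CO2 per kWh of grid electricity (assumed)
IDLE_L_PER_H = 1.0       # litres of petrol per hour of idling (assumed)
PETROL_KG_PER_L = 2.3    # kg CO2 per litre of petrol burned (assumed)

def charger_overnight_g(hours=8.0):
    """Grams of CO2 from leaving an idle charger plugged in overnight."""
    kwh = CHARGER_W * hours / 1000.0
    return kwh * GRID_KG_PER_KWH * 1000.0

def idling_g(seconds):
    """Grams of CO2 from a car idling for the given number of seconds."""
    litres = IDLE_L_PER_H * seconds / 3600.0
    return litres * PETROL_KG_PER_L * 1000.0

print(charger_overnight_g())   # ~2 g CO2
print(idling_g(2))             # ~1.3 g CO2
```

With these assumptions both come out at a gram or two of CO2, consistent with the rough equivalence quoted above.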
Everything is Connected

Post has shared content
Interesting experiment!
This year for NIPS we selected 10% of the papers to be reviewed twice, independently, by two parts of the program committee. We will reveal results of the experiment on Monday evening. However, in the meantime we'd like to know what the community thinks. +Nicolò Fusi kindly set up a prediction market for the question "What % of NIPS decisions are inconsistent?". If you would like to participate, you can do so here:!/questions/1083/trades/create/power

Post has attachment
Integrating Machine Learning Models into your Existing Workflow (using openscoring and PMML)
In today's world, understanding customers and learning from their behavior is a key component in a company's competitive edge in the market. This not only refers to lower user retention costs in marketing through intelligently timed re-engagement and a high...

Post has shared content
Draft of invited Deep Learning overview (75 pages, 866 references):

LaTeX source:
The complete BibTeX file is also public:

Abstract. In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
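To make the shallow/deep distinction in the abstract concrete, here is a toy sketch of my own (not from the survey, which defines credit assignment paths carefully in its Sec. 3): a feedforward net's CAP depth is fixed by its architecture, while an unrolled recurrent net's grows with sequence length, which is why even a small RNN is a "very deep" learner on long sequences.

```python
# Toy illustration of credit assignment path (CAP) depth: the length of a
# chain of potentially learnable links from input to output. Counting
# conventions vary; this sketch simply counts weight layers (FNN) or
# unrolled time steps (RNN).

def fnn_cap_depth(num_weight_layers):
    """A feedforward net's CAP depth is fixed by its layer count."""
    return num_weight_layers

def rnn_cap_depth(sequence_length, weight_layers_per_step=1):
    """An unrolled RNN reuses its weights at every time step, so its
    CAP depth grows linearly with the length of the input sequence."""
    return sequence_length * weight_layers_per_step

print(fnn_cap_depth(3))      # 3: a shallow three-layer FNN
print(rnn_cap_depth(1000))   # 1000: deep CAP from temporal unrolling alone
```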

As a machine learning researcher, I am obsessed with credit assignment. In case you know of references to add or correct, please send them with brief explanations to, preferably together with URL links to PDFs for verification.

Juergen Schmidhuber

Table of Contents:

1 Introduction to Deep Learning (DL) in Neural Networks (NNs)

2 Event-Oriented Notation for Activation Spreading in Feedforward NNs (FNNs) and Recurrent NNs (RNNs)

3 Depth of Credit Assignment Paths (CAPs) and of Problems

4 Recurring Themes of Deep Learning

4.1 Dynamic Programming for Supervised / Reinforcement Learning (SL / RL)
4.2 Unsupervised Learning (UL) Facilitating SL and RL
4.3 Learning Hierarchical Representations Through Deep SL, UL, RL
4.4 Occam’s Razor: Compression and Minimum Description Length (MDL)
4.5 Fast Graphics Processing Units (GPUs) for DL in NNs

5 Supervised NNs, Some Helped by Unsupervised NNs

5.1 Early NNs Since the 1940s (and the 1800s)
5.2 Around 1960: Visual Cortex Provides Inspiration for DL (Compare Sec. 5.4, 5.11)
5.3 1965: Deep Networks Based on the Group Method of Data Handling (GMDH)
5.4 1979: Convolution + Weight Replication + Subsampling (Neocognitron)
5.5 1960-1981 and Beyond: Development of Backpropagation (BP) for NNs
5.5.1 BP for Weight-Sharing Feedforward NNs (FNNs) and Recurrent NNs (RNNs)
5.6 Late 1980s-2000: Numerous Improvements of NNs
5.6.1 Ideas for Dealing with Long Time Lags and Deep CAPs
5.6.2 Better BP Through Advanced Gradient Descent (Compare Sec. 5.24)
5.6.3 Searching For Simple, Low-Complexity, Problem-Solving NNs (Compare Sec. 5.24)
5.6.4 Potential Benefits of UL for SL (Compare Sec. 5.7, 5.10, 5.15)
5.7 1987: UL Through Autoencoder (AE) Hierarchies (Compare Sec. 5.15)
5.8 1989: BP for Convolutional NNs (CNNs, Sec. 5.4)
5.9 1991: Fundamental Deep Learning Problem of Gradient Descent
5.10 1991: UL-Based History Compression Through a Deep Hierarchy of RNNs
5.11 1992: Max-Pooling (MP): Towards MPCNNs (Compare Sec. 5.16, 5.19)
5.12 1994: Early Contest-Winning NNs
5.13 1995: Supervised Recurrent Very Deep Learner (LSTM RNN)
5.14 2003: More Contest-Winning/Record-Setting NNs
5.15 2006/7: UL For Deep Belief Networks (DBNs) / AE Stacks Fine-Tuned by BP
5.16 2006/7: Improved CNNs / GPU-CNNs / BP-Trained MPCNNs / LSTM Stacks
5.17 2009: First Official Competitions Won by RNNs, and with MPCNNs
5.18 2010: Plain Backprop (+Distortions) on GPU Yields Excellent Results
5.19 2011: MPCNNs on GPU Achieve Superhuman Vision Performance
5.20 2011: Hessian-Free Optimization for RNNs
5.21 2012: First Contests Won on ImageNet & Object Detection & Segmentation
5.22 2013-: More Contests and Benchmark Records
5.23 Currently Successful Supervised Techniques: LSTM RNNs / GPU-MPCNNs
5.24 Recent Tricks for Improving SL Deep NNs (Compare Sec. 5.6.2, 5.6.3)
5.25 Consequences for Neuroscience
5.26 DL with Spiking Neurons?

6 DL in FNNs and RNNs for Reinforcement Learning (RL)

6.1 RL Through NN World Models Yields RNNs With Deep CAPs
6.2 Deep FNNs for Traditional RL and Markov Decision Processes (MDPs)
6.3 Deep RL RNNs for Partially Observable MDPs (POMDPs)
6.4 RL Facilitated by Deep UL in FNNs and RNNs
6.5 Deep Hierarchical RL (HRL) and Subgoal Learning with FNNs and RNNs
6.6 Deep RL by Direct NN Search / Policy Gradients / Evolution
6.7 Deep RL by Indirect Policy Search / Compressed NN Search
6.8 Universal RL

Since 16 April 2014, drafts of this paper have undergone massive open online peer review through public mailing lists. Thanks to numerous experts for valuable comments! The contents of this paper may be used for educational and non-commercial purposes, including articles for Wikipedia and similar sites.

