Our Deep Learning Neural Networks just became the best artificial recognisers of Chinese characters (in the ICDAR 2013 competition), approaching human performance. First author is Dan Claudiu Cireșan.
Why is this important? For example, all major smartphone companies want you to point your cell phone camera at text written in a foreign language, say, Chinese metro signs or lunch menus, and get a reliable translation.
As always in such competitions, GPU-based purely supervised gradient descent (40-year-old backprop à la Paul Werbos) was applied to our deep and wide multi-column networks with alternating convolutional and max-pooling layers (multi-column MPCNN) [2,3]. Most if not all leading IT companies and research labs are now using this technique, too.
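To make the architecture concrete, here is a minimal forward-pass sketch of the multi-column idea: each column stacks convolution and max-pooling before a softmax classifier, and the columns' output distributions are averaged. All names, shapes, and sizes below are illustrative assumptions; the actual networks in [2,3] were much deeper and wider, used many feature maps per layer, and were trained by backprop on GPUs, none of which is shown here.

```python
import numpy as np

def conv2d_valid(img, kernels):
    # img: (H, W); kernels: (K, kh, kw) -> K feature maps of size (H-kh+1, W-kw+1)
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return np.tanh(out)  # squashing nonlinearity after each conv layer

def max_pool2(maps):
    # non-overlapping 2x2 max-pooling over each feature map
    K, H, W = maps.shape
    return maps[:, :H // 2 * 2, :W // 2 * 2].reshape(
        K, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def column_forward(img, params):
    # one column: conv -> max-pool -> fully connected softmax
    kernels, W_fc, b_fc = params
    h = max_pool2(conv2d_valid(img, kernels))
    return softmax(W_fc @ h.ravel() + b_fc)

def mcdnn_predict(img, columns):
    # multi-column prediction: average the class distributions of all columns
    probs = np.mean([column_forward(img, p) for p in columns], axis=0)
    return int(np.argmax(probs))
```

A usage sketch with randomly initialised (untrained) columns: for a 16x16 input, 4 kernels of size 3x3 give 14x14 maps, pooled to 7x7, so the fully connected layer sees 4*7*7 = 196 inputs per column.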
In 2011, such multi-column MPCNN became the first artificial devices to achieve human-competitive performance on major benchmarks, including Yann LeCun's MNIST handwritten digits, possibly the most famous benchmark of machine learning. Chinese handwriting à la ICDAR, however, is much harder: there are not just 10 classes (one per digit) but 3755.
None of us speaks a word of Chinese.
The report also mentions a funny preprocessing bug.
When we started Deep Learning research over 20 years ago, slow computers forced us to focus on toy applications. How things have changed! Today, deep NN can already learn to rival human pattern recognisers in certain domains. And each decade we gain another factor of 100-1000 in terms of raw computational power per cent.
(1 September 2013)
[2] D. C. Cireșan, U. Meier, J. Masci, L. M. Gambardella, J. Schmidhuber. Flexible, High Performance Convolutional Neural Networks for Image Classification. IJCAI 2011, Barcelona, 2011. Preprint: http://arxiv.org/abs/1102.0183
[3] D. C. Cireșan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image Classification. CVPR 2012, pp. 3642-3649, 2012. http://www.idsia.ch/~juergen/cvpr2012.pdf, preprint: http://arxiv.org/abs/1202.2745