Profile

Benoit Maison
Works at Vision Smarts
Attended Université catholique de Louvain
Lives in Wavre, Belgium
76 followers | 16,837 views

Stream

Benoit Maison

Shared publicly  - 
 
At the International Conference on Machine Learning this week. Please drop me a line if you'd like to meet!
http://icml.cc/2015/

Benoit Maison

Shared publicly  - 
 
As you go around the house scanning large batches of items with the built-i...

Benoit Maison

Shared publicly  - 
 
Best take I have read on Microsoft's prospects under Nadella. In a nutshell, MSFT can't do both services and devices without its Windows monopoly, and its culture won't let go of Windows.
Nadella is saying all of the right things, but Microsoft's culture has always been Windows first. The solution is to get rid of Windows.

Benoit Maison

Shared publicly  - 
 
Barcode and QR scanning with the Vuzix M100 Smart Glasses (+ downloadable demo) http://www.visionsmarts.com/blog/barcode-scanning-with-the-vuzix-m100-smart-glasses/ #vuzix   #smartglasses   #barcodes #qr
Barcode Scanning with the Vuzix M100 Smart Glasses. We have been following the various Smart Glasses product announcements with a lot of interest. So when the M100 became commercially available, we decided to give it a try and take our image processing algorithms for a spin.

Benoit Maison

Shared publicly  - 
 
I wonder whether +Christopher Chabris found anything worthwhile or correct in Malcolm Gladwell's new book. I hear there is much to be criticized, but is there anything good in it? #gladwell #davidandgoliath
Malcolm Gladwell, the New Yorker writer and perennial best-selling author, has a new book out. It's called David and Goliath: Misfits, Underdogs, and the Art of Battling Giants. I reviewed it on Sept. 28 in The Wall Street Journal. (Other reviews have appeared in the Atlantic, the New York Times, the Guardian, the Financial Times,...

Benoit Maison

Shared publicly  - 
 
Today is Talk Like a Pirate Day, mateys!

Benoit Maison

Shared publicly  - 
 
Nice write-up. I believe what is sorely missing from many discussions of MOOCs or Peter Thiel's 20 Under 20 is how to motivate the average student. Self-motivated, bright students will succeed in almost any setting, inside or outside the higher education system. What about all the others?

Benoit Maison

Shared publicly  - 
 
There is gold in the 888 references compiled by Schmidhuber, and I fully agree that more attention should be paid to early work. On the other hand, most deep learning papers are not mathematics papers at all. They are engineering papers focussing on getting good results on specific tasks and benchmarks. One can hardly demand the same credit assignment standards from them. Their claim is not (or at least should not be) about the original idea but about making the techniques work in practice.
 
Critique of Paper by "Deep Learning Conspiracy" (Nature 521 p 436)

Machine learning is the science of credit assignment. The machine learning community itself profits from proper credit assignment to its members. The inventor of an important method should get credit for inventing it. She may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it (but not for inventing it). Relatively young research areas such as machine learning should adopt the honor code of mature fields such as mathematics: if you have a new theorem, but use a proof technique similar to somebody else's, you must make this very clear. If you "re-invent" something that was already known, and only later become aware of this, you must at least make it clear later.

As a case in point, let me now comment on a recent article in Nature (2015) about "deep learning" in artificial neural networks (NNs), by LeCun & Bengio & Hinton (LBH for short), three CIFAR-funded collaborators who call themselves the "deep learning conspiracy" (e.g., LeCun, 2015). They heavily cite each other. Unfortunately, however, they fail to credit the pioneers of the field, which originated half a century ago. All references below are taken from the recent deep learning overview (Schmidhuber, 2015), except for a few papers listed beneath this critique, which focuses on nine items.

1. LBH's survey does not even mention the father of deep learning, Alexey Grigorevich Ivakhnenko, who published the first general, working learning algorithms for deep networks (e.g., Ivakhnenko and Lapa, 1965). A paper from 1971 already described a deep learning net with 8 layers (Ivakhnenko, 1971), trained by a highly cited method still popular in the new millennium. Given a training set of input vectors with corresponding target output vectors, layers of additive and multiplicative neuron-like nodes are incrementally grown and trained by regression analysis, then pruned with the help of a separate validation set, where regularisation is used to weed out superfluous nodes. The numbers of layers and nodes per layer can be learned in problem-dependent fashion.
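
As a rough sketch of the procedure described above, here is a toy, NumPy-only version of the layer-growing idea. The function names and the simplified quadratic two-input node are illustrative assumptions, not Ivakhnenko's exact method: candidate nodes are fit by regression on the training set, a separate validation set prunes the weak ones, and the survivors' outputs become the inputs of the next, deeper layer.

    import numpy as np
    from itertools import combinations

    def fit_node(a, b, y):
        # Quadratic polynomial in two inputs, fit by ordinary least squares.
        F = np.column_stack([np.ones_like(a), a, b, a * b, a * a, b * b])
        coef, *_ = np.linalg.lstsq(F, y, rcond=None)
        return coef

    def node_output(a, b, coef):
        F = np.column_stack([np.ones_like(a), a, b, a * b, a * a, b * b])
        return F @ coef

    def grow_deep_net(Xtr, ytr, Xva, yva, keep=8, max_layers=10):
        best_err = np.inf
        for _ in range(max_layers):
            candidates = []
            for i, j in combinations(range(Xtr.shape[1]), 2):
                coef = fit_node(Xtr[:, i], Xtr[:, j], ytr)
                err = np.mean((node_output(Xva[:, i], Xva[:, j], coef) - yva) ** 2)
                candidates.append((err, i, j, coef))
            candidates.sort(key=lambda c: c[0])
            survivors = candidates[:keep]          # prune superfluous nodes on validation data
            if survivors[0][0] >= best_err:        # stop when validation error no longer improves
                return best_err
            best_err = survivors[0][0]
            # Outputs of surviving nodes become the inputs of the next, deeper layer.
            Xtr = np.column_stack([node_output(Xtr[:, i], Xtr[:, j], c) for _, i, j, c in survivors])
            Xva = np.column_stack([node_output(Xva[:, i], Xva[:, j], c) for _, i, j, c in survivors])
        return best_err

The depth of the resulting network is not fixed in advance; it is determined by how long the validation error keeps improving, which is the problem-dependent behaviour mentioned above.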

2. LBH discuss the importance and problems of gradient descent-based learning through backpropagation (BP), and cite their own papers on BP, plus a few others, but fail to mention BP's inventors. BP's continuous form was derived in the early 1960s (Bryson, 1961; Kelley, 1960; Bryson and Ho, 1969). Dreyfus (1962) published the elegant derivation of BP based on the chain rule only. BP's modern efficient version for discrete sparse networks (including FORTRAN code) was published by Linnainmaa (1970). Dreyfus (1973) used BP to change weights of controllers in proportion to such gradients. By 1980, automatic differentiation could derive BP for any differentiable graph (Speelpenning, 1980). Werbos (1982) published the first application of BP to NNs, extending thoughts in his 1974 thesis (cited by LBH), which did not have Linnainmaa's (1970) modern, efficient form of BP. BP for NNs on computers 10,000 times faster per Dollar than those of the 1960s can yield useful internal representations, as shown by Rumelhart et al. (1986), who also did not cite BP's inventors.
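
For readers who want to see the chain rule at work, here is a minimal toy example (an illustrative sketch, not any of the cited derivations) of reverse-mode gradient computation for a two-layer network:

    import numpy as np

    # Toy network: y_hat = W2 @ tanh(W1 @ x), loss = 0.5 * ||y_hat - y||^2.
    rng = np.random.default_rng(0)
    x, y = rng.normal(size=3), rng.normal(size=2)
    W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

    # Forward pass, keeping intermediate results.
    a = W1 @ x
    h = np.tanh(a)
    y_hat = W2 @ h
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    # Backward pass: the chain rule applied layer by layer ("backpropagation").
    d_yhat = y_hat - y                 # dL/dy_hat
    dW2 = np.outer(d_yhat, h)          # dL/dW2
    d_h = W2.T @ d_yhat                # dL/dh
    d_a = d_h * (1.0 - h ** 2)         # dL/da, using tanh'(a) = 1 - tanh(a)^2
    dW1 = np.outer(d_a, x)             # dL/dW1, ready for a gradient-descent step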

3. LBH claim: "Interest in deep feedforward networks [FNNs] was revived around 2006 (refs 31-34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR)." Here they refer exclusively to their own labs, which is misleading. For example, by 2006, many researchers had used deep nets of the Ivakhnenko type for decades. LBH also ignore earlier, closely related work funded by other sources, such as the deep hierarchical convolutional neural abstraction pyramid (e.g., Behnke, 2003b), which was trained to reconstruct images corrupted by structured noise, enforcing increasingly abstract image representations in deeper and deeper layers. (BTW, the term "Deep Learning" (the very title of LBH's paper) was introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000), none of them cited by LBH.)

4. LBH point to their own work (since 2006) on unsupervised pre-training of deep FNNs prior to BP-based fine-tuning, but fail to clarify that this was very similar in spirit and justification to the much earlier successful work on unsupervised pre-training of deep recurrent NNs (RNNs) called neural history compressors (Schmidhuber, 1992b, 1993b). Such RNNs are even more general than FNNs. A first RNN uses unsupervised learning to predict its next input. Each higher level RNN tries to learn a compressed representation of the information in the RNN below, to minimise the description length (or negative log probability) of the data. The top RNN may then find it easy to classify the data by supervised learning. One can even "distill" a higher, slow RNN (the teacher) into a lower, fast RNN (the student), by forcing the latter to predict the hidden units of the former. Such systems could solve previously unsolvable very deep learning tasks, and started our long series of successful deep learning methods since the early 1990s (funded by Swiss SNF, German DFG, EU and others), long before 2006, although everybody had to wait for faster computers to make very deep learning commercially viable. LBH also ignore earlier FNNs that profit from unsupervised pre-training prior to BP-based fine-tuning (e.g., Maclin and Shavlik, 1995). They cite Bengio et al.'s post-2006 papers on unsupervised stacks of autoencoders, but omit the original work on this (Ballard, 1987).
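
As a very crude illustration of the "predict your next input" pre-training idea, here is a plain vanilla RNN in place of the full history-compressor stack; the toy data and hyperparameters are assumptions for the sketch only:

    import numpy as np

    # One vanilla RNN is trained by gradient descent to predict x[t+1] from x[0..t];
    # in the full architecture its hidden states would feed the next, higher-level RNN.
    rng = np.random.default_rng(1)
    T, D, H = 20, 3, 8                        # sequence length, input dim, hidden dim
    X = rng.normal(size=(T, D))               # one toy input sequence
    Wxh = rng.normal(scale=0.1, size=(H, D))
    Whh = rng.normal(scale=0.1, size=(H, H))
    Why = rng.normal(scale=0.1, size=(D, H))

    for step in range(200):                   # plain gradient descent with full BPTT
        hs, ys = [np.zeros(H)], []
        for t in range(T - 1):                # forward: predict the next input at every step
            h = np.tanh(Wxh @ X[t] + Whh @ hs[-1])
            hs.append(h)
            ys.append(Why @ h)
        dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
        dh_next = np.zeros(H)
        for t in reversed(range(T - 1)):      # backward: chain rule through time
            dy = ys[t] - X[t + 1]             # gradient of 0.5 * ||prediction - next input||^2
            dWhy += np.outer(dy, hs[t + 1])
            dh = Why.T @ dy + dh_next
            da = dh * (1.0 - hs[t + 1] ** 2)
            dWxh += np.outer(da, X[t])
            dWhh += np.outer(da, hs[t])
            dh_next = Whh.T @ da
        for W, dW in ((Wxh, dWxh), (Whh, dWhh), (Why, dWhy)):
            W -= 0.01 * dW                    # in-place update keeps the original arrays

After this unsupervised pre-training, the hidden states hs would serve as the (compressed) input sequence of the next level, and the top level could be trained by supervised learning, as described above.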

5. LBH write that "unsupervised learning (refs 91-98) had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning." Again they almost exclusively cite post-2005 papers co-authored by themselves. By 2005, however, this transition from unsupervised to supervised learning was old hat, because back in the 1990s, our unsupervised RNN-based history compressors (see above) were largely phased out by our purely supervised Long Short-Term Memory (LSTM) RNNs, now widely used in industry and academia for processing sequences such as speech and video. Around 2010, history repeated itself, as unsupervised FNNs were largely replaced by purely supervised FNNs, after our plain GPU-based deep FNN (Ciresan et al., 2010) trained by BP with pattern distortions (Baird, 1990) set a new record on the famous MNIST handwritten digit dataset, suggesting that advances in exploiting modern computing hardware were more important than advances in algorithms. While LBH mention the significance of fast GPU-based NN implementations, they fail to cite the originators of this approach (Oh and Jung, 2004).
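
To make "pattern distortions" concrete, here is a crude NumPy stand-in (a simplification assumed for illustration; the cited papers use richer affine and elastic deformations) that perturbs a training image with a small random rotation and shift:

    import numpy as np

    def random_distortion(img, rng, max_shift=2.0, max_rot_deg=10.0):
        # Small random rotation + translation, resampled by nearest neighbour.
        h, w = img.shape
        theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        out = np.zeros_like(img)
        for r in range(h):
            for c in range(w):
                # Inverse-map every output pixel back into the source image.
                yr, xr = r - cy - dy, c - cx - dx
                sy = cy + np.cos(theta) * yr - np.sin(theta) * xr
                sx = cx + np.sin(theta) * yr + np.cos(theta) * xr
                si, sj = int(round(sy)), int(round(sx))
                if 0 <= si < h and 0 <= sj < w:
                    out[r, c] = img[si, sj]
        return out

    # Typical use: distort every training image anew in each epoch before the BP update,
    # so the network never sees exactly the same pattern twice.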

6. In the context of convolutional neural networks (ConvNets), LBH mention pooling, but not its pioneer (Weng, 1992), who replaced Fukushima's (1979) spatial averaging by max-pooling, today widely used by many, including LBH, who write: "ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012," citing Hinton's 2012 paper (Krizhevsky et al., 2012). This is misleading. Earlier, committees of max-pooling ConvNets were accelerated on GPU (Ciresan et al., 2011a), and used to achieve the first superhuman visual pattern recognition in a controlled machine learning competition, namely, the highly visible IJCNN 2011 traffic sign recognition contest in Silicon Valley (relevant for self-driving cars). The system was twice better than humans, and three times better than the nearest non-human competitor (co-authored by LeCun of LBH). It also broke several other machine learning records, and surely was not "forsaken" by the machine-learning community. In fact, the later system (Krizhevsky et al. 2012) was very similar to the earlier 2011 system. Here one must also mention that the first official international contests won with the help of ConvNets actually date back to 2009 (three TRECVID competitions) - compare Ji et al. (2013). A GPU-based max-pooling ConvNet committee also was the first deep learner to win a contest on visual object discovery in large images, namely, the ICPR 2012 Contest on Mitosis Detection in Breast Cancer Histological Images (Ciresan et al., 2013). A similar system was the first deep learning FNN to win a pure image segmentation contest (Ciresan et al., 2012a), namely, the ISBI 2012 Segmentation of Neuronal Structures in EM Stacks Challenge.
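
The max-pooling vs. spatial-averaging distinction mentioned above is easy to show on a toy feature map (a minimal illustrative example, not the cited implementations):

    import numpy as np

    def pool2x2(feature_map, mode="max"):
        # Downsample a 2D feature map over non-overlapping 2x2 windows.
        h, w = feature_map.shape
        blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        if mode == "max":                     # max-pooling: keep the strongest response
            return blocks.max(axis=(1, 3))
        return blocks.mean(axis=(1, 3))       # spatial averaging: blur the responses

    fmap = np.arange(16, dtype=float).reshape(4, 4)
    print(pool2x2(fmap, "max"))               # [[ 5.  7.] [13. 15.]]
    print(pool2x2(fmap, "mean"))              # [[ 2.5  4.5] [10.5 12.5]]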

7. LBH discuss their FNN-based speech recognition successes in 2009 and 2012, but fail to mention that deep LSTM RNNs had outperformed traditional speech recognizers on certain tasks already in 2007 (Fernández et al., 2007) (and traditional connected handwriting recognisers by 2009), and that today's speech recognition conferences are dominated by (LSTM) RNNs, not by FNNs of 2009 etc. While LBH cite work co-authored by Hinton on LSTM RNNs with several LSTM layers, this approach was pioneered much earlier (e.g., Fernandez et al., 2007).

8. LBH mention recent proposals such as "memory networks" and the somewhat misnamed "Neural Turing Machines" (which do not have an unlimited number of memory cells like real Turing machines), but ignore very similar proposals of the early 1990s, on neural stack machines, fast weight networks, self-referential RNNs that can address and rapidly modify their own weights during runtime, etc (e.g., AMAmemory 2015). They write that "Neural Turing machines can be taught algorithms," as if this was something new, although LSTM RNNs were taught algorithms many years earlier, even entire learning algorithms (e.g., Hochreiter et al., 2001b).

9. In their outlook, LBH mention "RNNs that use reinforcement learning to decide where to look" but not that they were introduced a quarter-century ago (Schmidhuber & Huber, 1991). Compare the more recent Compressed NN Search for large attention-directing RNNs (Koutnik et al., 2013).

One more little quibble: While LBH suggest that "the earliest days of pattern recognition" date back to the 1950s, the cited methods are actually very similar to linear regressors of the early 1800s, by Gauss and Legendre. Gauss famously used such techniques to recognize predictive patterns in observations of the asteroid Ceres.
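
For the curious, the early-1800s technique in question is ordinary least squares; a minimal example (toy data assumed for illustration, solved via the normal equations):

    import numpy as np

    # Fit w minimizing ||A w - y||^2, the linear regression of Gauss and Legendre.
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 30)                        # e.g. observation times
    y = 1.5 + 2.0 * t + 0.05 * rng.normal(size=t.size)   # noisy, roughly linear observations
    A = np.column_stack([np.ones_like(t), t])            # design matrix [1, t]
    w = np.linalg.solve(A.T @ A, A.T @ y)                # normal equations: (A^T A) w = A^T y
    prediction = np.array([1.0, 2.0]) @ w                # extrapolate the pattern to t = 2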

LBH may be backed by the best PR machines of the Western world (Google hired Hinton; Facebook hired LeCun). In the long run, however, historic scientific facts (as evident from the published record) will be stronger than any PR. There is a long tradition of insights into deep learning, and the community as a whole will benefit from appreciating the historical foundations.

The contents of this critique may be used (also verbatim) for educational and non-commercial purposes, including articles for Wikipedia and similar sites.

References not yet in the survey (Schmidhuber, 2015):

Y. LeCun, Y. Bengio, G. Hinton (2015). Deep Learning. Nature 521, 436-444. http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html

Y. LeCun (2015). IEEE Spectrum Interview by L. Gomes, Feb 2015: http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning

R. Dechter (1986). Learning while searching in constraint-satisfaction problems. University of California, Computer Science Department, Cognitive Systems Laboratory. First paper to introduce the term "Deep Learning" to Machine Learning.

I. Aizenberg, N. N. Aizenberg, and J. P. L. Vandewalle (2000). Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Springer Science & Business Media. First work to introduce the term "Deep Learning" to Neural Networks. Compare a popular G+ post on this: https://plus.google.com/100849856540000067209/posts/7N6z251w2Wd?pid=6127540521703625346&oid=100849856540000067209.

J. Schmidhuber (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117. Preprint: http://arxiv.org/abs/1404.7828

AMAmemory (2015): Answer at reddit AMA (Ask Me Anything) on "memory networks" etc (with references): http://www.reddit.com/r/MachineLearning/comments/2xcyrl/i_am_j%C3%BCrgen_schmidhuber_ama/cp0q12t


#machinelearning
#artificialintelligence
#computervision
#deeplearning

Link: http://people.idsia.ch/~juergen/deep-learning-conspiracy.html

Benoit Maison

Shared publicly  - 
 
I kind of agree with the article, but the quote from a +MIT Sloan School of Management professor, “But, with software, marginal costs are close to zero. That makes it easy for new competitors to enter the business,” makes me wonder what definition of marginal cost they are teaching at MIT. Maybe +Chris Dixon should ask them to add scare quotes around "marginal costs"? ;-)
The world of pop culture contains many more one-hit wonders than hit factories. King Digital Entertainment has done a great job of making money from Candy Crush Saga, but the game is still a fad, and, like all fads, it will fade.

Benoit Maison

Shared publicly  - 
 
I read the author's rough notes not long ago; I believe this is going to be a reference book on Deep Learning. Plus, the code is in Python. What's not to like?
 
I just launched an Indiegogo campaign for my new book about "Neural Networks and Deep Learning".  I don't have a paid faculty position, so the funds from the campaign are essential to making the book openly available online (which I intend to do).  If you're interested in the book, I hope you'll consider supporting the campaign:

http://www.indiegogo.com/projects/neural-networks-and-deep-learning-book-project/x/2791974

Benoit Maison

Shared publicly  - 
 
Great post on why writing sets the bar higher.
 
I wrote this post after getting too many bad emails...

Benoit Maison

Shared publicly  - 
 
 
This was great. Part high-level algorithm description, part magic show.
People
Have him in circles
76 people
Matthieu Leroy's profile photo
Vincent Vandenberghe's profile photo
Cindy Cunningham's profile photo
Georgeus Roger's profile photo
Константин Самойленко's profile photo
Fatima Vané Simian's profile photo
Dimitri Kanevsky's profile photo
L Desiron's profile photo
Linden Darling's profile photo
Work
Occupation
I research, develop, debug and sell computer vision software.
Employment
  • Vision Smarts
    Managing Director, present
  • IBM Research
    Research Staff Member
  • KLA-Tencor
    Application Engineer
  • Alterface
    Senior Researcher
Places
Map of the places this user has lived
Currently
Wavre, Belgium
Previously
White Plains, NY
Links
Contributor to
Story
Bragging rights
Corner office, Personal Chef, Complete collection of Smurf figurines
Education
  • Université catholique de Louvain
    PhD in Engineering
Basic Information
Gender
Male
Looking for
Networking
The restaurant in Wavre we go back to most often. The dishes are always very fresh, varied, and delicious. The owners are super friendly and welcoming.
Public - a year ago
reviewed a year ago
3 reviews