Hi all, are you familiar with the Extreme Learning Machine (ELM)? I've been reading about it recently. It's simple, fast, and reportedly achieves good performance. However, I hadn't even noticed it before my supervisor stumbled upon it. It looks like a simple regression model to me. Is there any reason nobody promotes this idea in the deep learning community (besides the group working on it at NTU)? Another thing I'm not comfortable with is that it relies heavily on computing the inverse of a matrix, which we traditionally try to avoid in machine learning.
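For concreteness, here is a minimal sketch of the ELM idea on hypothetical toy data: a single hidden layer with random, untrained weights, followed by a linear least-squares readout. The data, layer sizes, and activation are my own illustrative choices, not from any particular ELM paper. Note that the readout can be computed with a least-squares solver rather than an explicit matrix inverse, which partly addresses the inversion concern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (illustrative): learn y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
b = rng.normal(size=n_hidden)                # random biases, never trained

# Random nonlinear feature expansion (the "ELM" hidden layer).
H = np.tanh(X @ W + b)

# Readout: solve min ||H @ beta - y||^2. Using lstsq avoids forming
# an explicit (pseudo)inverse of H.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
mse = np.mean((y_hat - y) ** 2)
```

Only `beta` is fitted; the hidden layer stays random, which is why training reduces to one linear solve.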
[21 previous comments not shown]
Well, that's an interesting observation, but incorrect.
The random encoding from ELM has not been implicitly carried through in this case. (May 4, 2015)
ELM is NOT an affine transformation. (May 4, 2015)
The final thing you mention: do you really think typical sparse data is ever one sparse activation per sample?
That's totally silly, sir. Yes, in that case the data has already been encoded and almost nothing more can be done with it. But I've never in my life seen data like that. (May 4, 2015)
- The method: connecting the first layer randomly is just about the stupidest thing you could do. People have spent the almost 60 years since the Perceptron to come up with better schemes to non-linearly expand the dimension of an input vector so as to make the data more separable (many of which are documented in the 1974 edition of Duda & Hart).
- Yann LeCun (May 8, 2015)
- I see Yann's point, but he's missing the point as well.
He's probably only concerned with accuracy. ELM won't beat Yann's methods on accuracy with tons of data.
However, in the low-data regime, ELM might win. It's also much faster to train, easier to set up, and probably easier to troubleshoot.
It has its place; he just doesn't see it. (May 8, 2015)