Interesting history of NNets here, some of which I didn't know. I do remember how fringe neural nets were considered by the rest of computer science, including by AI conferences, and how risky it was to be working on them. The remarkable thing is how persistent Geoffrey Hinton was in his belief in the tremendous value of these algorithms, a belief eventually proven correct. From the article: "Ignoring the brain is probably a bad idea ... The only working system that could solve these problems was the brain ... [In grad school] Hinton had to work in secret ... His first paper on neural nets wouldn't pass peer review if it mentioned 'neural nets' ... After he graduated, he couldn't find full-time academic work ... In the 1990s ... Neural nets could learn, but not well. They slurped up computing power and needed a bevy of examples to learn. If a neural net failed, the reasons were opaque—like our own brain. If two people applied the same algorithm, they'd get different results ... Coders opted for learning algorithms that behaved predictably and seemed to do as well ... It was difficult to publish anything that had to do with neural nets at the major machine-learning conferences ... By 2006 ... Google and Facebook began to pile up hoards of data about their users, and it became easier to run programs across a huge web of computers ... Instant success, outperforming voice-recognition algorithms that had been tweaked for decades ... No other algorithm scaled up like these nets ... It was just a question of the amount of data and the amount of computations."
A nice and largely accurate article in The Chronicle of Higher Education about the history of neural nets and deep learning, with quotes from +Geoffrey Hinton, +Terrence Sejnowski, +Yoshua Bengio, and yours truly.

http://chronicle.com/article/The-Believers/190147/
The Believers