In his latest New Yorker column, my NYU colleague Gary Marcus attempts to dampen the hype surrounding AI in recent press coverage.
In particular, he takes issue with John Markoff's front-page article in the NYT about neuromorphic chips. I agree with many of Gary's comments. John's article is slightly misleading in that the chips it mentions, and the type of technology they employ, are still very far from practical application.
Let's take the example of a typical convolutional net for image recognition, of the type that Google, Baidu and Facebook use. Such networks typically have around one billion connections, and 50 to 100 million parameters. Once they are trained, running them on an image takes a few milliseconds on a standard GPU card (e.g. a high-end gaming card that can be had for about $1000). The current breed of neuromorphic chips mentioned in the article is quite far from being able to run such networks. Some of the chips do implement learning algorithms, but the algorithms are simplistic and are not the ones that actually work in practice (e.g. backprop+SGD).
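To see why the connection count can dwarf the parameter count, consider the weight sharing in a convolutional layer: each kernel is reused at every spatial position, so one layer can contribute hundreds of millions of connections from well under a million weights. A minimal back-of-the-envelope sketch (the layer sizes below are illustrative, not taken from any particular production network):

```python
# Illustrative arithmetic: in a conv layer, weights are shared across all
# spatial positions, so connections ~ parameters x output locations.

def conv_layer_counts(in_ch, out_ch, k, out_h, out_w):
    """Return (parameters, connections) for one conv layer (biases ignored)."""
    params = in_ch * out_ch * k * k        # one shared k x k kernel per (in, out) channel pair
    connections = params * out_h * out_w   # the kernel is applied at every output position
    return params, connections

# A hypothetical mid-network layer: 256 -> 256 channels, 3x3 kernels, 28x28 output map.
p, c = conv_layer_counts(256, 256, 3, 28, 28)
print(p, c)  # 589824 parameters, 462422016 connections (~462 million)
```

A handful of such layers easily adds up to the billion-connection, sub-hundred-million-parameter regime described above, which is what any hardware claiming to run these networks must handle.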
I'm not saying research on this topic should be stopped. But the technology is far from practical, and it's not ready for prime time (certainly not ready for the front page of the NYT).
Furthermore, I do think that there is value in building specialized hardware for neural nets. I believe that there are embedded applications for which a compact, low-cost, low-power chip that could run large pre-trained convnets would be very useful. We have done work in that direction in my lab at NYU (see http://www.neuflow.org/).