I just read the interview with Chomsky in The Atlantic.  It poses an interesting dilemma.  Chomsky makes the point that most AI research today yields a statistically derived "conception of success" that has practical value without any real understanding of how the natural world fundamentally works.  Specifically, with regard to statistical modeling, being able to predict outputs from inputs (à la Google Now) does not give you a thing's effective structure and algorithms.

Chomsky takes a very long-term view of the value equation in prioritizing research.

In contrast to his contentions, it can be argued that whether you create a statistical model of a thing or completely reverse engineer it and build something like it, you will likely get practical value from the research.  Is it possible that this leapfrogs any practical value you would get from a deeper understanding of the thing's fundamental workings?  There is a value question here.

If these guys (link below) build bees that do exactly what humans want from bees, might that be a greater good than an understanding of how bees work?  The time-sensitive research value equation is an important consideration, and one that Chomsky would acknowledge.  I think he doesn't discuss it much because, in the very long term, he's right about the value of fundamental knowledge.

But hey, humans aren't going to live forever.  You know, no disrespect, but we're building bees and stuff.  Hopefully we won't assume we understand how bees work just because we built them, we won't assume our bees are harmless, we'll get some value out of them, and just maybe we should feel free to assume our bees are kinda smart.

http://gigaom.com/data/researchers-using-ai-to-build-robotic-bees/

http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/?single_page=true