Work on visualization in ANNs has received a lot of attention recently, and with good reason. But in terms of advancement in Artificial General Intelligence (AGI), I think the news that more quietly came out of Google's DeepMind lab about Natural Language Processing is much more significant. Here's why:
A large amount of the human brain is dedicated to, or involved in, visual processing - more than half. But it isn't our ability to see that separates humans from the rest of the animal kingdom. That separation was triggered by a much smaller portion of our brain: the small parts dedicated to language. Human civilization was born out of language. Language gave us the "self," logic, tools, agriculture, industry, technology and the scientific method. I would argue that language is more fundamental to general intelligence than any other area.
If cracking the AI language nut is a shortcut to general intelligence, we may arrive there much sooner than we expected. Until now, we haven't had very good datasets for training NLP ANNs. Karl Moritz Hermann and his colleagues at DeepMind recently realized that they could use the structured data in CNN and Daily Mail articles to generate a large-scale supervised reading comprehension corpus - the first of its kind.
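To give a sense of the idea: each CNN or Daily Mail article ships with bullet-point highlights, and blanking an entity out of a highlight turns it into a question whose answer is in the article. The sketch below is only an illustration in the spirit of that cloze-style approach, not DeepMind's actual pipeline; the `anonymize` and `make_cloze` helpers, the entity list, and the example sentence are all invented here.

```python
def anonymize(text, entities):
    """Replace each named entity with a consistent @entityN marker.

    Anonymizing entities forces a model to answer from the document
    itself rather than from world knowledge about the entities.
    """
    mapping = {name: f"@entity{i}" for i, name in enumerate(entities)}
    for name, marker in mapping.items():
        text = text.replace(name, marker)
    return text, mapping

def make_cloze(highlight, entities):
    """Turn an article highlight into (query, answer) pairs by
    blanking out each entity that appears in it."""
    anon, mapping = anonymize(highlight, entities)
    pairs = []
    for name, marker in mapping.items():
        if marker in anon:
            pairs.append((anon.replace(marker, "@placeholder"), marker))
    return pairs

# Invented example highlight and entity list:
entities = ["DeepMind", "London"]
highlight = "DeepMind is an AI lab based in London"
for query, answer in make_cloze(highlight, entities):
    print(query, "->", answer)
```

Run against a whole news archive, a procedure like this yields training pairs by the million without any human annotation, which is what made large-scale supervised training possible.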
Apparently it's paying off. From the paper's abstract:

"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure."
And from the article:

"The results clearly show how powerful neural nets have become. Hermann and co say the best neural nets can answer 60 percent of the queries put to them. They suggest that these machines can answer all queries that are structured in a simple way and struggle only with queries that have more complex grammatical structures."
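The "attention based" part means the model scores each document token against the query, normalizes those scores into weights, and reads out a weighted summary of the document before predicting the blanked entity. Here is a minimal NumPy sketch of just that attention step, with random vectors standing in for the learned encodings (in the paper these come from LSTM encoders trained end to end; everything below is a toy stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy stand-ins for learned encodings: 5 document tokens and one
# query, each as a d-dimensional vector.
d = 8
doc_tokens = rng.standard_normal((5, d))
query = rng.standard_normal(d)

# Attention: score each token against the query, normalize the
# scores into weights, and form a weighted "reading" of the document.
scores = doc_tokens @ query          # one relevance score per token
weights = softmax(scores)            # weights sum to 1
reading = weights @ doc_tokens       # query-conditioned document summary

print(weights.round(3))
```

The appeal of this mechanism is that the weights are interpretable: you can inspect which tokens the model attended to when it answered, which is part of why this family of readers was easy to analyze.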