Gary Marcus
2,688 followers
author of Guitar Zero, frequent blogger for The New Yorker
Posts

Gary Marcus commented on a post on Blogger.
You are addressing a position I would never endorse. I wasn’t for a minute arguing that artificial general intelligence needs to be an exact replica of the “particular weird mix” that is the human mind (see my book Kluge for my views on that weird mix), and you don’t need to go to poker to see that. Deep Blue beat Kasparov without playing at all like him.

But poker and Deep Blue are narrow AI use cases that can be brute-forced. Although we were mercilessly cut by our editor (down to fewer than 940 words from 1,800), we still tried to make clear that AIs need not be full replicas of people; the key word here is “some”:

Rather than merely imitating the results of our thinking, machines would actually share some of our core cognitive abilities.

The way I usually put this, when I have more space, is this: in many domains (like chess, memory, and arithmetic), computers can do things that people can’t, and there is no reason to make them behave as mistake-filled humans would. But in language, common sense, transfer (except at a superficial level), and open-ended reasoning, humans remain vastly superior. Even there, the goal for AI shouldn’t be to build replicas; rather, because humans do have vast advantages, we should learn from them.

In the domain of language, for example, people construct cognitive models of what they are understanding, whereas current QA systems do only more primitive and vastly less flexible things, like highlighting relevant passages of text. It is not clear that you can ever build an open-ended linguistic system without constructing cognitive models or their equivalent, and that itself would seem to depend on having internal representations of things like time, space, objects, agents, and events that are at the core of human cognition.

Of course AIs might incorporate lots of other stuff, too, but systems that lack such representations, even when coupled with brute force, do exceedingly poorly at natural language understanding, commonsense reasoning, planning, transfer, and so forth.
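To make the contrast concrete, here is a minimal sketch, in Python, of the kind of primitive passage-highlighting approach described above: a toy QA routine that simply returns whichever passage shares the most words with the question, with no representation of time, space, objects, or agents. The function name and example data are illustrative, not drawn from any real QA system.

```python
# Toy "highlight the relevant passage" QA: score each passage by
# raw word overlap with the question and return the best one.
# No cognitive model of any kind is involved.

def highlight_passage(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question."""
    q_words = set(question.lower().split())

    def overlap(passage: str) -> int:
        # Count shared surface words; punctuation is not even stripped.
        return len(q_words & set(passage.lower().split()))

    return max(passages, key=overlap)

passages = [
    "The meeting was moved to Tuesday because of the storm.",
    "She plays guitar every evening after work.",
]
print(highlight_passage("When was the meeting moved?", passages))
```

A system like this can look impressive on simple lookups, but because it has no internal model of events or agents, it fails as soon as answering requires combining facts or reasoning about what the text describes.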

Is the brain a computer, after all? My latest in the NYTimes.

How to study the brain. We don't just need Big Data, we need big ideas. My latest, in the Chronicle of Higher Ed.

Cracking the brain's codes! Christof Koch and I discuss in a special issue on the brain at +MIT Technology Review

Turing Test 2.0? What would be better? I discuss on NPR

Why people are still better programmers than machines are; an essay I wrote for The New Yorker website
 