A couple of weeks ago, I got into a relatively long discussion with +Daniel Estrada about Watson, and whether goal-orientation and task-capability in a human domain were sufficient to define "intelligence."

Watson is a very fast, very cool natural-language index with a stack of SVMs in the middle. But it's just a dynamic index on top of a static, non-synthetic knowledge pool -- and the way it fills its index means that there's basically no feedback or dynamic linking from an evolving understanding of query syntax to its own knowledge base.

It's a natural-language search appliance with a well-curated knowledge base -- something like a Google search. That's not nothing! That's actually really great. But insofar as it's like a human intelligence, it's like one particular part of human intelligence: natural-language query parsing and fast lookup. It's likely to produce gibberish when given an uncurated knowledge base, can't answer out-of-band questions by synthesizing data, can't produce dynamic feedback from query structure to create new data for its knowledge base, and has chronic problems with nonlinear model fit.
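To make the "dynamic index over a static pool" point concrete, here's a toy sketch -- purely illustrative, nothing like Watson's actual pipeline, with a made-up parser and made-up facts -- of the shape I mean: the query-parsing side can get arbitrarily clever, but nothing it learns ever flows back into the knowledge base it reads from.

```python
# Illustrative sketch only -- not Watson's architecture. The point is the shape:
# a query pipeline that reads from a curated, static knowledge base and never
# writes anything it learns back into that base.

CURATED_KB = {
    # hand-curated facts; the system never adds to or revises these
    ("author", "moby-dick"): "Herman Melville",
    ("capital", "france"): "Paris",
}

def parse_query(question):
    """Toy 'natural language' parsing: map a question to a (relation, entity) key."""
    q = question.lower().strip(" ?")
    if q.startswith("who wrote "):
        return ("author", q[len("who wrote "):].replace(" ", "-"))
    if q.startswith("what is the capital of "):
        return ("capital", q[len("what is the capital of "):])
    return None

def answer(question):
    key = parse_query(question)
    # Lookup only: however good the parser gets, the knowledge base stays as curated.
    return CURATED_KB.get(key, "no answer")

print(answer("Who wrote Moby-Dick?"))            # Herman Melville
print(answer("What is the capital of France?"))  # Paris
```

The one-way arrow is the whole point: the parser can improve indefinitely without the knowledge base ever learning anything from it.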

This is not some sort of "we're way better" triumphalism.

We're plainly not better at Jeopardy, for instance. We've gotten to the point where we understand discrete functions of the human brain well enough to approximate them or, in some cases, improve on them, but by building those approximations, we have learned that our own categories for describing human intelligence are non-atomic, and that our objectives are harder to achieve than we thought when we set out to achieve them.

It's like the science of heat and fire: we used to think that heat was a basic force, like electromagnetism, and that fire was a simple, fundamental phenomenon. Neither turned out to be true. Both are complicated. We're in the same space now with artificial intelligence: by attempting to build intelligences like our own, we are learning what parts of human intelligence are dumb tricks done quickly (most of it), and which parts are hard problems we don't know the answers to.

+Daniel Estrada finds this unnecessarily reductive and essentialist, and argues for a quacks-like-a-duck definition: if it does a task which humans do, and effectively orients itself toward a goal, then it's "intelligence." After sitting on the question for a while, I think I agree -- for some purposes. If your purpose is to build a philosophical category, "intelligence," which at some point will entitle nonhuman intelligences to be treated as independent agents and valid objects of moral concern, reductive examination of the precise properties of nonhuman intelligences will yield consistently negative results. Human intelligence is largely illegible and was not, at any point, "built." A capabilities approach which operates at a higher level of abstraction will flag the properties of a possibly-legitimate moral subject long before a close-to-the-metal approach will. (I do not believe we are near that point, but that's also beyond the scope of this post.)

But if your purpose is to build artificial intelligences, the reductive details matter in terms of practical ontology, if not necessarily ethics: a capabilities ontology creates a giant, muddy categorical mess which prevents engineers from distinguishing trivial parlor tricks like Eugene Goostman from meaningful accomplishments. The underspecified capabilities approach, without particulars, simply hands the reins over to the part of the human brain which draws faces in the clouds.

Which is a problem. Because we are apparently built to greedily anthropomorphize. Historically, humans have treated states, natural objects, tools, the weather, their own thoughts, and their own unconscious actions as legitimate "persons." (Seldom all at the same time, but still.) If we assigned the trait "intelligence" to every category which we had historically anthropomorphized, that would leave us treating the United States, Icelandic elf-stones, Watson, Zeus, our internal models of other people's actions, and Ouija boards as being "intelligent."

Which leads to not being able to express the way in which Eliza, a relatively simple stateless text parser which returns "conversational" results, meaningfully differs from a human. Which makes it difficult to define additional problems. Which makes the definition not necessarily helpful for that particular purpose.
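For contrast, here's roughly how small the Eliza trick is. This is a minimal, stateless sketch in the Eliza style -- illustrative patterns, not Weizenbaum's actual DOCTOR script: match a surface pattern, reflect the user's own words back in a canned template, remember nothing between turns.

```python
import random
import re

# A minimal Eliza-style responder: stateless, pattern-in / canned-template-out.
# Illustrative patterns, not Weizenbaum's original DOCTOR script.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),   ["Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I),     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(line):
    # No state: every reply depends only on this single line of input.
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(respond("I feel stuck on this problem"))
print(respond("My job is exhausting"))
```

No memory, no model of the conversation, no knowledge base at all -- which is exactly the difference the underspecified capabilities definition can't express.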
