Turing’s intelligent machines

This will be the first in a series of essays discussing Turing’s view of artificial intelligence. You can find some relevant links for further consideration at the bottom of the post. Questions, comments, and suggestions are appreciated!

1: Turing’s prediction

In his 1950 paper Computing Machinery and Intelligence, Turing gives one of the first systematic philosophical treatments of the question of artificial intelligence. Philosophers as far back as Descartes had worried about whether “automatons” were capable of thinking, but Turing pioneered a new kind of machine capable of performances unlike any machine that had come before. This new machine was the digital computer, and instead of doing physical work like every machine before it, the digital computer was capable of doing logical work. This capacity for abstract symbolic processing, for reasoning, had been taken as the fundamentally unique mark of the human mind since the time of Aristotle, and yet suddenly we were building machines capable of automating those same formal processes. When Turing wrote his essay, computers were still largely the stuff of science fiction; the term “computer” hadn’t really settled into popular use, mostly because people weren’t really using computers. Univac’s role in the census effort and its prediction of the 1952 presidential election were still a few years in the future, and computing played virtually no role in the daily lives of the vast majority of people. In lieu of a better name, the press would describe the new digital computers as “mechanical brains”, and this rhetoric fed into the public’s uncertainty and fear of these unfamiliar machines.

Despite his short life, Turing’s vision was long. His private letters show that he felt some personal stake in the popular acceptance of these “thinking machines”, and his 1950 essay was clearly written to some extent with these popular goals in mind. He offers a series of predictions in the passage that opens the section entitled “Contrary Views on the Main Question”. I’m quoting the passage in its entirety below because the context matters for what follows.

“It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.”

I’m quoting the full passage because Turing gives some context for the predictions, and his attitude towards these questions will be very important for sorting out exactly what Turing’s view is. In this series of Turing essays, I will discuss Turing’s views on artificial intelligence in detail. I believe that Turing’s position on artificial intelligence has been largely misunderstood and overly simplified, especially in academic and philosophical discussions but also by AI enthusiasts and professionals who take themselves to be compatible with, or even executing, Turing’s specific proposals. One clear sign that academics and enthusiasts alike have Turing’s position wrong is that it is almost universally acknowledged that his predictions failed. AI researchers and enthusiasts still hold out for what they call “Strong AI”, and no one thinks we have it yet; since Turing predicted intelligent machines by the turn of the century, the consensus is that he was mistaken. In this essay, I will argue that the received view is wrong, and that on the contrary Turing’s predictions were surprisingly accurate. If Turing were alive today he would be absolutely convinced that our machines far exceed even his unconventional expectations, and he would marvel at the dynamic social relationships we’ve formed with our computers. Appreciating Turing’s insight here will, I hope, motivate a closer look at Turing’s philosophy of technology, and perhaps relieve us of the unfortunate mysticism and uncertainty that still surrounds the discussion of artificial intelligence.

2: Getting clear on the prediction

There are two distinct predictive sentences in the above passage, so let’s consider them in turn. I want to separate out the first prediction into two claims, as indicated below.

1a) I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9,

1b) ... to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

From the grammar of the sentence, I don’t think Turing is actually making a prediction about the storage capacity of computers near the turn of the century. In other words, the prediction clearly concerns the imitation game referenced in (1b), not the storage of computers as such. But we can go ahead and check Turing’s work here anyway.

It takes some technical work to figure out exactly what Turing means by a “storage capacity of 10^9”; he was writing this paper around the same time the von Neumann architecture was being developed, which is how we think about computers today, but he doesn’t describe computers in those terms. For Turing, a computer’s storage capacity is the “logarithm to the base two of the number of states”, which is something slightly different from “storage” on a von Neumann machine. But now we are just being fussy. 10^9 bits is just under 1000 Mb, or about 120 MB. If Turing had meant 10^9 bytes (which he didn’t, but let’s pretend), that’s about 1 GB. At the turn of the century, memory in the hundreds of MB and hard drives in the GB range were available at the high end of consumer electronics, to say nothing of the much larger industrial computing applications. The so-called “Kryder’s law” describing the rate of storage growth shows that it wildly outpaces processor speed and other metrics of computing growth, a fact which surprised (and continues to surprise) just about everyone. Turing was working 15 years before Moore’s law was explicitly stated, and indeed before there was any significant data about the industrial production of computing machinery; the Moore after whom the law is named was still in college when Turing wrote this paper. Nevertheless, Turing’s guess here is conservative, so I think we can safely count it in his favor. Our machines ended up doing far better than even Turing imagined. The fact that Turing’s predictions are surprisingly conservative will be a constant refrain in this essay.
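For readers who want to check the arithmetic, here is a rough sketch in Python. The `turing_capacity_bits` function and the variable names are my own illustration, not anything from Turing’s paper; note that the “about 120 MB” figure corresponds to binary (MiB-style) units, while decimal prefixes give 125 MB.

```python
import math

# Turing's definition: storage capacity is the base-2 logarithm of the
# number of machine states -- i.e. how many bits it takes to label
# every distinct state the machine can be in.
def turing_capacity_bits(num_states):
    return math.log2(num_states)

bits = 10**9  # Turing's "storage capacity of about 10^9"

# Decimal (SI) prefixes: 10^9 bits = 1000 Mb = 125 MB.
megabits_decimal = bits / 10**6
megabytes_decimal = bits / 8 / 10**6

# Binary prefixes: the same 10^9 bits is about 119 MiB -- roughly the
# "120 MB" figure quoted above.
mebibytes = bits / 8 / 2**20

# If Turing had meant 10^9 bytes (he didn't, but let's pretend):
gigabytes = 10**9 / 10**9  # 1 GB

print(megabits_decimal, megabytes_decimal, round(mebibytes, 1), gigabytes)
```

Either way you count, turn-of-the-century consumer hardware comfortably cleared Turing’s figure.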

On to the prediction itself, (1b). The substantive part of the prediction concerns the phrase “playing the imitation game”; the rest of the prediction clarifies what Turing means by playing “so well”. A detailed discussion of the imitation game will be put off until the next post. Calling into question the received interpretation of Turing’s view will require a detailed discussion of what the imitation game is designed to do; right now I just want to get clear on what the prediction actually is. So let’s assume the standard interpretation of the imitation game, in which a machine is made to behave like a human being over the course of a conversation: an interrogator asks questions and the machine answers. Turing’s prediction is that after a five-minute dialogue, the average interrogator will correctly identify the machine as a machine no more than 70% of the time.
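Turing’s bar can be stated as a simple threshold. This is just my own formalization of the sentence above, not anything Turing wrote:

```python
def prediction_met(correct_identifications, trials):
    """Turing's prediction succeeds if the average interrogator makes
    the right identification no more than 70% of the time -- that is,
    if the machine fools the interrogator at least 30% of the time."""
    return correct_identifications / trials <= 0.70

# A machine identified correctly in 65 of 100 five-minute sessions
# clears Turing's bar; one identified correctly 80 times does not.
print(prediction_met(65, 100))  # True
print(prediction_met(80, 100))  # False
```

Stated this way, it is easy to see how modest the bar really is: the interrogator can be right most of the time and the prediction still succeeds.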

The first thing to note is that Turing isn’t predicting perfect indistinguishability; in fact, he’s only predicting a 30% success rate for the machine, quite a low bar to meet. More importantly, he isn’t claiming that we would identify the machine as intelligent even if we failed to distinguish it from a human. It might take us six minutes to make the right identification, and once we do we can immediately judge the machine to lack intelligence; nevertheless, if the machine is capable of making us suspend our disbelief for those first few minutes, his prediction would be successful. It isn’t completely irrelevant, then, that machines routinely dupe us into treating them as intelligent beings. The people who fall for Nigerian Prince scam emails are an easy example; the voice-recognition systems of automated calling centers are perhaps a better one. In fact, machines that fool us into thinking they are human have become so sophisticated that we’ve had to develop complex methods for distinguishing the two; they are called CAPTCHAs, and they have become a routine part of life in the Digital Age. CAPTCHAs have been around since, you guessed it, the year 2000, and for nearly as long we’ve had bots that can beat them with greater than a 30% success rate. I’ll say that one more time just to be clear: the very tools we use to distinguish human beings from machines are routinely fooled by machines more than 30% of the time. This seems to be very much in line with the spirit of Turing’s prediction and the low bar he sets.

No one thinks that such machines are “intelligent”, though, at least not in the sense of so-called “Strong AI”. To pass the Turing test, it is commonly understood that we need an artificial intelligence capable of fully engaging in intelligent conversation and problem solving at a level “comparable to a human brain”. Strong AI might also require conscious experience, and we definitely don’t have anything like that yet. The gold standard for Turing tests is the annual Loebner Prize, which recruits a panel of high-profile philosophers and technologists to test the latest bots. Every year they give out a prize to the most convincing chatterbots; some of them, like Cleverbot, learn from interactions and are available to chat with online. Some of these bots are quick and engaging, and are definitely worth five minutes of entertainment whether or not you think they are intelligent. The Loebner Prize also has two outstanding, one-time-only prizes, one for complete conversational fluency and one for conversational fluency in a multimedia environment. Both prizes remain unclaimed to date; this is more or less taken as the final word on the question of Turing’s prediction, and contributes in no small part to the received view that Turing’s predictions have failed.

As I will argue in these essays, these are standards that Turing himself would have rejected. The very idea of “Strong AI” is antithetical to his approach to the question of artificial intelligence, and he would find the whole discussion surrounding artificial intelligence today to be precisely the sort that is “too meaningless to deserve discussion.” It is commonplace at this point in the argument to dismiss Turing’s views on the mind. After all, he was working at a time when psychology was still in its nascent stages, when Skinnerian behaviorism ruled the labs. This was before the cognitive revolution (that is, computationalism) gave rise to the contemporary understanding of the mind and brain. To give you some sense of just how impoverished psychology was at the time, Turing devotes a few hundred words in his essay to the possibility that human brains have some form of extrasensory perception unavailable to computers, like telepathy and telekinesis. Turing discusses these possibilities at some length because they were actually taken seriously at the time, especially by the military, a community with which Turing had some close ties. So it is standard practice to dismiss Turing’s behaviorism as a product of an outdated psychological theory and method, and to treat his trust in the power of logical computation as a naive presumption we now know to be false.

I think this dismissal is misguided on both charges, and that Turing deserves to be read more charitably. Turing’s test did not stem merely from a belief that the inner workings of the mind and brain didn’t matter; in fact, he understood quite well the implications that computers would have for thinking about the mechanisms of our own minds. Turing offered his test primarily as a way of screening out human biases against the machines, not as a demonstration of anything in particular that a machine can do. Turing believed that humans have an incredibly strong bias against these “mechanical brains”, and would immediately single them out as distinct from the kinds of creatures we are, even when there are no substantive reasons for drawing such distinctions except our own prejudice. Turing’s imitation game was as much a test of human judgment and bias as it was a test of the capabilities of machines; success for Turing’s prediction would be a failure on our part to successfully distinguish the machines from the real human beings. This leads us into the discussion of the imitation game, which will occupy the next post in this series. A proper understanding of Turing’s game will reveal just how accurate his first prediction was.

However, we already have enough material on the table to discharge the second of Turing’s predictions. In his second prediction, Turing says,

2) I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted

Notice that this claim immediately follows the claim that “the original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.” So Turing is quite clearly stating that despite the meaninglessness of the question, general educated opinion would come to a consensus that these machines can think. Turing clearly doesn’t expect his views on the mind to have been widely adopted even among his educated peers, so the fact that contemporary educated opinion on the human mind disagrees with Turing’s views doesn’t count against the predictions themselves.

I want to submit, in closing, that general educated opinion around the turn of the century has definitely come to accept the reality of artificial intelligence; the use of machines for intelligent tasks has not only been widely adopted, but our understanding and use of intelligent machines has transformed the global human population in ways that Turing could have only dreamed of. From anecdotal evidence, the engineers and technologists I’ve talked to over the last few decades have no hesitation speaking about what their computers know or understand (and what they don’t), what they think and what they sense. The vocabulary in which even technical specialists discuss their computers is filled with mentalistic and intentional vocabulary of precisely the sort we use for describing our own brains; not only has this vocabulary become widely adopted in these fields, but it is fundamental to academic research across nearly all disciplines. The few entrenched skeptics that remain would dismiss such vocabulary as mere metaphor, as if other uses of mentalistic vocabulary weren’t metaphorical too. Still, these few minor voices don’t undermine the widespread consensus that artificial intelligence is an important, if not foundational, aspect of contemporary scientific practice.

I could give any number of examples of such vocabulary in standard use, but my favorite comes from a 1999 paper by Dan Dennett, which puts it just inside Turing’s forecast. A tiny bit of philosophy background is necessary, but I’ll let Dennett do most of the talking. Here he is arguing against Leibniz’s Mill, one of the more famous arguments for dualism. Leibniz attempts to prove that your thoughts aren’t in your brain. Leibniz says that I can think of the color blue and perceive it distinctly in my mind, but if you crack open my skull and poke around, you won’t see anything blue in the brain. This isn’t just because the blue things are really small; Leibniz asks you to imagine the brain blown up to the size of a factory, or a mill. Leibniz says that if you walk around that mill, you will still see only the mechanisms of the physical brain, just white and grey matter pulsing around. Nowhere in that mill will you find a single blue thing; thus, your thoughts of blue must lie elsewhere.

Dennett refutes Leibniz as follows:

“In the first half of the century, many scientists and philosophers might have agreed with Leibniz about the mind, simply because the mind seemed to consist of phenomena utterly unlike the phenomena in the rest of biology. The inner lives of mindless plants and simple organisms (and our bodies below the neck) might yield without residue to normal biological science, but nothing remotely mindlike could be accounted for in such mechanical terms. Or so it must have seemed until something came along in midcentury to break the spell of Leibniz’s intuition pump. Computers. Computers are mindlike in ways that no earlier artifacts were: they can control processes that perform tasks that call for discrimination, inference, memory, judgment, anticipation; they are generators of new knowledge, finders of patterns–in poetry, astronomy, and mathematics, for instance–that heretofore only human beings could even hope to find. We now have real world artifacts that dwarf Leibniz’s giant mill both in speed and intricacy. And we have come to appreciate that what is well nigh invisible at the level of the meshing of billions of gears may nevertheless be readily comprehensible at higher levels of analysis–at any of many nested “software” levels, where the patterns of patterns of patterns of organization (of organization of organization) can render salient and explain the marvelous competences of the mill. The sheer existence of computers has provided an existence proof of undeniable influence: there are mechanisms–brute, unmysterious mechanisms operating according to routinely well-understood physical principles–that have many of the competences heretofore assigned only to minds.”

I challenge anyone to argue that the general educated consensus would disagree with any of these claims.

_______

http://www.loebner.net/Prizef/TuringArticle.html
http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence
http://en.wikipedia.org/wiki/Computer
http://en.wikipedia.org/wiki/Von_Neumann_architecture
http://www.scientificamerican.com/article.cfm?id=kryders-law
http://en.wikipedia.org/wiki/Moore%27s_law
http://en.wikipedia.org/wiki/Strong_AI
http://en.wikipedia.org/wiki/Dreyfus%27_critique_of_artificial_intelligence#Vindicated
http://en.wikipedia.org/wiki/Turing_Test
http://en.wikipedia.org/wiki/Loebner_Prize
http://cleverbot.com/
http://en.wikipedia.org/wiki/Captcha
http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Related_arguments:_Leibniz.27_mill.2C_Block.27s_telephone_exchange_and_blockhead
http://ase.tufts.edu/cogstud/papers/zombic.htm

Turing picture taken from here: http://www.guardian.co.uk/uk/the-northerner/2011/dec/05/alan-turing-universityofmanchester

#turing #artificialintelligence
 
Fantastic post! I tried to find the video on YouTube; a while ago I saw a clip of a conversation between two chatterbots that was very interesting because they actually got violent towards each other!
 
YES! I think it's a bit amusing, but the original article I read about this questioned whether the robots' actions showed anything more about humanity as a whole.
 
I don't mind giving away the conclusions of these essays; all the fun is in getting to the conclusion anyway, but it will help to see the path if you know where I'm going.

Our machines are not distinct entities, any more than human beings are distinct individuals. Both humans and machines share existence within a complex and collaborative network, and it is the network itself that allows each of us to identify as intelligent, autonomous agents whose behaviors again give rise to the network. Human societies are capable of self-organization precisely because of the assistance of machines. Turing didn't quite have the vocabulary to frame his point in this way at the time, but he was clearly pointed in this direction; it likewise explains his simultaneous interest in computational biology around the same time as his work on AI, and unfortunately he did not live long enough to see these budding fields mature. But I think the texts that Turing leaves us already contain sufficient clues for putting this picture together, even while it cuts against the predominant interpretation of Turing's views.

There are two basic moves in my argument that will distinguish my interpretation of Turing from most of the standard interpretations. The first is to discuss the issue of what it means for a machine to "play a game", independent of the question of the machine's intelligence. The way Turing sets up the game, the machine is playing it whether or not we ultimately identify it as a machine; so the very act of playing the imitation game assumes participation on the machine's part.

Participation for Turing is a low bar to meet, so calling the machine a participant at this stage doesn't get you much. The second move is to discuss the issue of machine autonomy, and what it would mean for a machine to be an autonomous participant. The question of machine autonomy is the entire point of Turing's extended discussion of Lovelace in the essay, and yet the connection between Lovelace's objection and the issue of autonomous machines is virtually absent from the literature. Not all machines are autonomous, but Turing gives some pretty clear ideas of what he takes machine autonomy to be. Working out Turing's views on autonomy will have significant application to the way we understand artificial intelligence today.

Anyway, that's just a preview of the rest of the essay. I know I'm long-winded, and G+ (and social networking in general) is a little too fast-paced for these essays to quite fit yet. But if anyone actually bothers to read all the way through: Thank you! I'd really love to have feedback in any form, and I sincerely appreciate the attention!
 
There's lots of reasons for rejecting Dualism, of course.

For me, the amazing thing about the denial of existing AI isn't the superstitious alternatives. Rather, it is the sheer pervasiveness of the belief, even among those who would otherwise staunchly reject dualist treatments. For instance, just today I came across an article in SA filled with praise for Turing's approach:

http://blogs.scientificamerican.com/guest-blog/2012/04/26/how-alan-turing-invented-the-computer-age/

The article contains the following passage:

"*Although Turing’s vision of AI has not yet been achieved*, aspects of AI are increasingly entering our daily lives. Car satellite navigation systems and Google search algorithms use AI. Apple’s Siri on the iPhone can understand your voice and intelligently respond. Car manufacturers are developing cars that drive themselves; some U.S. states are drafting legislation that would allow autonomous vehicles on the roads. Turing’s vision of AI will soon be a reality."

This isn't the fault of metaphysical superstitions clouding judgment. This is the result of systematic biases against the very idea of thinking machines, despite the superficial acceptance of the "possibility".