
Searle's Chinese Room

Not surprisingly, not all philosophers are happy with the idea of eliminating our `folk psychology'. In recent decades there has been a strong tradition in English-speaking philosophy of treating the concepts of common sense with the greatest respect. The idea that AI or neurophysiology or any other new-fangled subject could simply come along and supplant our time-honoured concepts of mind is regarded by these philosophers with great suspicion. In his famous paper ``Minds, Brains, and Programs'' (Searle, 1980), the philosopher John Searle has offered a sustained counterblast to the eliminativist's dismissal of commonsense psychological notions. In it (and subsequently in his Reith Lectures -- Searle, 1984) he attempts to show why intentional notions are not so easy to give up, and why human intentionality must necessarily be in a privileged position with respect to machine intentionality. It is Searle's arguments that we shall now consider, after a brief diversion to the Turing Test.

It is now becoming common for people to communicate with each other by electronic mail, and no doubt it will soon be relatively easy for people to write AI programs which can join in on the act in a way that makes their contributions indistinguishable, to a greater or lesser extent, from those of humans. As we have already mentioned, Alan Turing (Turing, 1950) proposed a criterion, or benchmark, for machine intelligence and hence, by extension, for machine intentionality. The ideas that Turing put forward were outlined in chapter 1 of this book, so we need not dwell on them here. Turing's claim was, in essence, that if a program could be written which was flexible enough to participate in an extended electronic mail dialogue, and to do so in such a way that the other participants could not easily tell that they were conversing with a machine, then we could take the machine in question as displaying genuine intelligence. (This is a generalization of the test proposed by Turing in his paper, but one which keeps to the spirit of his proposal.)

It is probable that Turing would have been sympathetic to the eliminativist's claim that our common notions of `folk psychology' are confused and not capable of scientific justification. It was, indeed, partly to bypass what he saw as the hopelessly vague nature of questions such as ``Can machines think?'' that Turing proposed his test in the first place. He also believed that our psychological terms were liable to change their meanings as a result of advances in computing and other sciences, and he predicted that, by the end of the twentieth century, most educated people would agree without question that computers were capable of thinking.

Searle, by contrast, does not think there is anything vague about questions like ``Can machines think?'' Sure, he says, machines can think. We are machines, and we can think. But the key question is, ``Can digital computers think?'' or more precisely, ``Could a machine think merely by virtue of the fact that it was a digital computer programmed in a certain way?'' This is the question which Searle thinks is neither vague nor difficult to answer, and the answer is negative.

In order to show why that question has to be answered negatively, Searle describes an imaginary situation, rather like Turing's Imitation Game, but with some special features added. Searle pictures someone (whom we shall call `the operator') in a room equipped with a large number of pieces of paper on which are written various symbols that are unintelligible to the operator. There are slots in the wall of the room, through which more slips of paper with such symbols can be passed, both into and out of the room. The operator also has an elaborate set of rules giving precise instructions on how to build, compare, and manipulate symbol-structures, using the pieces of paper inside the room, in conjunction with those coming in from the outside. These instructions also tell the operator to send sets of symbols out of the room on occasion. The instructions are all expressed in terms of the formal properties of the symbols which, as we have said, have no significance in themselves to the operator.

In fact, the instructions correspond to a computer program which simulates the linguistic ability and understanding of a native speaker of Chinese. The sets of symbols being passed into and out of the room correspond to sentences in a meaningful dialogue (rather like a Turing Test dialogue, but in Chinese). The operator, however, understands no Chinese; the instructions for manipulating the Chinese symbols are, we assume, written in English or some other language which the operator understands. We are to suppose that the behaviour of the operator inside the room (the `Chinese Room', as it has been dubbed) is identical to the behaviour of an electronic computer running the same program. (In order to get round problems of speed we shall suppose that the operator is special in being able to work prodigiously fast.) The operator is, in effect, one particular implementation (albeit a rather strange one) of a particular program which supposedly `understands' Chinese.
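
To make the purely formal character of the operator's task concrete, the following is a minimal sketch in Python of a rule-follower that matches incoming symbol strings against patterns and passes back whatever symbols the matching rule dictates, without attaching any meaning to them. The rules and symbols here are invented for illustration only; a program capable of passing the Turing Test in Chinese would need vastly more elaborate rules, though they would be formal in just the same way.

    # A toy illustration of purely syntactic symbol manipulation, in the
    # spirit of the Chinese Room.  The "rule book" maps patterns of incoming
    # symbols to outgoing symbols purely by their formal shape; nothing in
    # the program represents what (if anything) the symbols mean.

    import re

    # Each rule pairs a pattern over incoming symbol strings with the string
    # of symbols to pass back out of the room when that pattern matches.
    RULE_BOOK = [
        (re.compile(r"^你好"), "你好！"),        # slip starts with these shapes
        (re.compile(r"吗？$"), "是的。"),        # slip ends with these shapes
        (re.compile(r"."),     "请再说一遍。"),  # fallback for any other shapes
    ]

    def operator_step(incoming: str) -> str:
        """Apply the first rule whose pattern matches the incoming symbols.

        The operator consults only the formal properties of the symbols
        (which marks occur, and in what order); no understanding is involved.
        """
        for pattern, outgoing in RULE_BOOK:
            if pattern.search(incoming):
                return outgoing
        return ""

    # Slips of paper passed into the room, and the slips passed back out:
    for slip in ["你好", "你会说中文吗？"]:
        print(slip, "->", operator_step(slip))

The point of the sketch is simply that nothing in such a rule book, however large, mentions what the symbols are about; that is exactly the position in which Searle places the operator.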

To summarize: Searle's position can best be understood by imagining the following three changes made to Turing's imitation game:

1. The dialogue is conducted in Chinese rather than in English.

2. The computer taking part in the game is replaced by a human operator who, working inside the Chinese Room, follows by hand the instructions of the program the computer would otherwise have run, without understanding any Chinese.

3. We assume that the program we are dealing with can pass the Turing Test with flying colours: so that, for example, many native Chinese speakers fail to tell that the party with whom they are apparently having extended intelligent conversations over the electronic mail system is in fact a person simulating a computer simulating a native Chinese speaker.

Turing's claim was, in essence, this. Providing the program is of a sufficient degree of richness or complexity, the computer playing the imitation game will have, roughly, the mental states which we would have attributed to the human whose dialogue has been imitated. One such mental state is understanding the language in which the dialogue is being conducted. So if the dialogue is going on in English, you would expect the computer (or its human simulator) to understand English. If the dialogue is going on in Chinese, you would expect the computer, or its human simulator, to understand Chinese. But under our assumptions, this surely will not be the case. The human computer-simulator, sitting inside the `Chinese Room', will be operating with various symbols, perhaps written on bits of paper or card. These symbol-manipulation operations will be formally equivalent to those performed by our computer program which passes the Turing Test. However, it is apparently neither a necessary nor a sufficient condition of the human operator's successfully replicating the computer's performance that the operator be able to understand any of the input or output sentences in the dialogue.

As Searle puts it, the person inside the Chinese Room has (or need have) only a syntax, as opposed to a semantics -- that is, a knowledge of various formal properties of the collections of linguistic tokens being manipulated, rather than an understanding of how the symbols relate to a reality lying outside the symbols themselves. Yet it is semantics, rather than syntax alone, which would be necessary in order for the symbol operator to be able to fix any content to the symbols -- that is, in order for there to be any genuine intentionality. But, says Searle, semantics cannot be derived from syntax.

Of course, the program whose operations are being simulated inside the Chinese Room is likely to contain many so-called `semantical' rules. But these rules are not `semantical' in the true sense, says Searle: all they can do is establish interrelationships between various purely formal operations. We, the human users of this formal system, can apply semantic interpretations to the input and output strings which are passing through the computer (or its analogue, the Chinese Room). But the semantics, or intentionality, which is thus generated is on the outside of the system -- imposed upon it rather than contained within it. If you directly ask the person inside the Chinese Room what all these symbols mean, the answer is likely to be: ``Search me -- they're just a bunch of meaningless squiggles to me.'' If, by contrast, you ask the system which the operator is running whether it understands Chinese, the answer will come back, in Chinese: ``Of course I do, what do you think I'm speaking now?''

Here, then, we have a very clear proposal of how to demonstrate the difference between intentionality in humans and intentionality in computers. If Searle is right, the intentionality of computers must be derivative upon the intentionality of the human creators and users of those machines -- more like the derivative intentionality of a book or letter than like the intrinsic intentionality of real thinkers and communicators.

