
Artefacts

At the centre of the debate between followers of Searle and his opponents is the question of the status of artefacts. The Chinese Room argument seems to depend for at least some of its plausibility upon the picture of computers, together with their programs, as objects which originate in us, their human creators, and whose properties must therefore be derivative upon our purposes. There can be no such thing as machine intentionality, so the claim goes, because the states of machines are purely dependent upon our ends: they cannot have their own goals. Everything they do, it is claimed, is done (ultimately, at least) because we created them and designed them to follow our `instructions'. Even if we were to imagine computers which were themselves designed and built by other computers -- and which were, perhaps, the product of very many generations of computer design and manufacture -- they would still be our artefacts originally. Therefore, it is argued, since intentionality and meaning are intimately bound up with goals and purposes, such machines cannot have their own inherent intentionality. The symbols that they operate with have meaning only when given interpretations which map on to our, human, purposes.

Opposed to this is a contrasting picture of machines which can and do have their own independent purposes and goals: machines to which it is, so it is believed, perfectly proper to ascribe a full-blooded `intentionality'. Anyone who has wrestled with even a moderately complex AI program will have experienced the feeling that one is confronting a being with a will of its own, with its own purposes and finality. For Searle this would just be a case of over-enthusiastic anthropomorphism, similar to that of the enraged driver who begins to attribute malign purposes to a misbehaving car.

And yet: it seems conceivable that, at least in principle, we might well be obliged to attribute independent or intrinsic goals or purposes to certain kinds of artefacts. Suppose it became possible one day to create human beings by some process of biochemical synthesis. We can imagine the people so created to be as similar to us as you like in physiological terms (that is, they are not composed of miniaturized electronic circuits, for instance, but of the same kind of DNA-based cell tissue as us). It is quite possible that such beings would possess all the same sorts of mental states that we do: they would have pains, emotions, cognitions, purposes, and desires, just like us, since (unlike computers, as presently conceived) they share exactly our physiology.

So it is not the fact that something is an artefact which prevents it from having intrinsic goals, or intrinsic intentionality. Can we, then, conclude that ordinary digital computers -- at least if they are capable of passing something like the Turing Test -- do, after all, possess intrinsic intentionality (or at least as much of it as we do)? This is doubtful. What makes the suggestion unconvincing is that the sorts of systems which are being considered as falling within the scope of the Turing Test (at least as it is usually conceived) have a very limited mode of operation. All that they are designed to do is to receive, manipulate, and transmit symbols or tokens, and this is a very slender basis on which to assert that they have a cognitive existence equivalent to our own. Of course we might attribute to them a sort of intentionality: something which is more than the completely secondary, or derived, intentionality of the contents of a filing cabinet, while yet not in any way approaching the full-blooded intentionality of real human lives. But because their existence is so purely a matter of the operation of symbols, their intentionality would be of a decidedly attenuated kind. When dealing with purely `intellectual', or symbol-related, activities -- such as playing games or solving problems in logic -- their intentionality more closely approaches ours, since the nature of the purposes in hand is more purely encapsulated in a world of symbols. But insofar as they are simulating our world-directed cognitions, as opposed to our symbol-directed cognitions, their intentionality would be but a poor simulacrum of our own.

Of course we can adorn such symbol-processing mechanisms with various accessories which will enable them to engage, in ever more sophisticated ways, with the real world. We can give them sensors to provide environmental inputs, and motor effectors to act on the environment in various ways. As we do so we will progressively deepen their intentionality. But it will also be true that, in doing so, we progressively weaken the sense in which we are dealing with mere computing systems, as opposed to systems of a rather richer sort. How far could we take such a progression without wandering away from the original perspective and philosophy of the computational approach to mind? How far would it still be true that we were offering a computational account of mentality at all?

And again: what limits might there be to this progressive enrichment of such systems? How far could we reproduce real emotions, consciousness, pleasures, pains? It might well be considered that even the most sophisticated and agile robotic system would not really possess intentionality in its fullest sense if the only world it shared with us were the outer physical, spatial world, rather than the inner world of feelings, hopes, fears, and satisfactions. There does seem to be an intimate relationship between notions such as `purpose', `goal', `intention', and so on, and experiential notions such as `satisfaction', `pleasure', and `grief'. Perhaps the notions of intentionality and semantics cannot be dissociated from consciousness.

Further, it might perhaps be said that there are at least two distinct kinds of purposes or goals: purposes related to output (for want of a better word) and purposes related to satisfaction. For an example of a goal (or a hierarchy of goals) of the first sort, consider an expert system which has asked you a question concerning the condition of a patient for whom you are seeking a diagnosis. If you ask it why it asked the question, it may explain the hypothesis it is currently seeking to confirm or disconfirm; it is, in other words, explaining its goal. You might then ask it why it wants to confirm that hypothesis, and it might reply in terms of some higher-level goal or goals. Eventually such explanations will peter out; in the end, the highest-level goal that it can have (at least if it is the sort of expert system currently in use) will be that of producing the answer that you requested. In this sense all of its goals deal with generating a certain sort of output.
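
This output-directed goal hierarchy can be made concrete with a small sketch. The Python fragment below is purely illustrative: the Goal class, the example goals, and the wording of the explanations are invented for this illustration and are not drawn from any particular expert system. The point it makes is only the one made above: repeatedly asking `why?' climbs a chain of goals until it peters out at the goal of producing the requested answer.

    # A goal in the consultation, optionally justified by a parent goal.
    class Goal:
        def __init__(self, description, parent=None):
            self.description = description
            self.parent = parent

        def why(self):
            # Walk up the hierarchy, as when the user keeps asking "why?".
            chain, goal = [], self
            while goal is not None:
                chain.append(goal.description)
                goal = goal.parent
            return chain

    # A hypothetical goal hierarchy behind a single diagnostic question.
    top = Goal("produce the answer (the diagnosis) that you requested")
    hypothesis = Goal("confirm or disconfirm the current hypothesis", parent=top)
    question = Goal("establish whether the patient has a raised temperature",
                    parent=hypothesis)

    # Asking "why?" repeatedly climbs the hierarchy until it ends
    # at the purely output-directed top-level goal.
    for step in question.why():
        print("because I want to", step)

However elaborate such a hierarchy becomes, every justification terminates in a goal of this output-directed kind; nothing in the structure corresponds to the satisfaction-oriented goals discussed next.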

A doctor, on the other hand, conducting a similar kind of diagnostic dialogue, is likely to have additional sorts of goals, expressible perhaps in terms of job satisfaction, desire to eradicate disease or ease suffering, intellectual fascination with the scientific issues involved, and so on. People will claim, no doubt, that one day expert systems (or their successors) will possess such inner, satisfaction-oriented goals. But it is not clear what it would be for a system really to have such goals (as opposed to giving outward verbal expression of them). And it may be said that unless they were to do so, the sense in which you can ascribe goals or purposes to them at all is rather weak.


