
`Pure' and `Impure' Functionalism

It has become increasingly fashionable, in recent years, for people in AI to embrace the view that the hardware, or computer architecture, has to be borne in mind when considering how computational models relate to human thinking -- particularly in relation to parallel or distributed processing, as opposed to the sequential processing which has characterized most computing systems up to now. We saw in chapter 8 how recent work on connectionist machines has suggested some extremely rich (if, at the moment, rather embryonic) models for cognition. The bulk of the work on parallel distributed processing is still to be done. However, it does suggest an alternative current of philosophical thinking which may recommend itself to people who wish to use AI as a stepping-stone towards understanding and explaining the mind.

In the dawn of the computer era, and of the era of AI, it was supposed that the notion of mind could be explained entirely in terms of the formal, or functional, properties of a computational system. The term `functional', as used here, conveys the idea that, in order to have a mind, a system did not have to have a specific physical makeup; it just had to be so organized that it was capable of realizing a specific sort of abstract computational structure. This computational structure might be implemented on different kinds of physical device: electronic circuits, human brains, or other kinds of more exotic hardware yet to be discovered. This sort of approach -- which might be called `pure' functionalism, or `pure' computationalism -- claimed that the mind could be discussed in terms which were entirely implementation-independent. Pure functionalism offered an exciting new approach to the solution of those age-old problems in the philosophy of mind which we talked about at the beginning of this chapter (Putnam, 1960, 1965).

It was partly in order to show up the inadequacy of `pure functionalism' that Searle produced his `Chinese Room' argument. If the nature of the hardware was irrelevant to the origination of mental states, then a human agent could substitute for the electronic mechanism performing the operations in a given computational system without altering the supposed thinking states of that system. It was the absurdity of that result that Searle was attempting to demonstrate.

While many of Searle's opponents strove to defend `pure functionalism' against his attack, many replies to him in effect conceded that that rather extreme view had to be given up. That is, they conceded that while, in the case of his imaginary Chinese Room symbol operator, no genuine mental states would be present, it might still be the case that, in an electronic realization of the same computational operations, real mentality would result. One typical response along these lines was given by Aaron Sloman, in a paper entitled ``Did Searle Refute Strong Strong or Weak Strong AI?'' (Sloman, 1986), in which he argued that what differentiated the Chinese Room case from a standard case of a computational mechanism was, essentially, the free will of the human operator inside the room. In a conventional computer system the processing device is under the control of the program, whereas, in the Chinese Room case, the operator's actions are merely guided by the rules in the program. This is one of a number of ways in which it could be argued, with some plausibility, that what Searle says may be true as far as his example goes, but is irrelevant to the larger issue of whether a `real' computer -- a VAX, or a Cray, say -- could possess mental states.

As might have been expected, many people have also appealed to the new developments in parallel and neural architectures in order to block the sorts of conclusions Searle wishes to draw. The argument often used is that human cognitive processing appears to possess certain characteristics -- the ability to carry distributed representations, susceptibility to graceful degradation, and so on -- that could be realized in practice only within an architecture which, like that of the human brain, involves a network of very many simple processors linked together. Although, abstractly, the computational characteristics of such connectionist networks could be simulated on a serial machine, in practice the processing would be far too slow to meet the real-time needs of an intelligent agent operating in a real-time environment. Insofar as we interpret connectionism as proposing a criterion for the occurrence of genuine mentality (as opposed merely to a fertile new paradigm for the modelling of cognition), this involves an even more radical departure from `pure functionalism' than Sloman's suggestion considered above. For it would rule out the possibility that today's large single-processor machines (even if their speed and memories were considerably extended) could ever genuinely possess mental states, since their hardware architecture is so different from that of a genuine cognizer (Thagard, 1986). For some excellent philosophical discussions of the implications of connectionism, see Boden (1984), Clark (1987) and Clark (forthcoming).
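To make the point about serial simulation concrete, the following is a minimal sketch (written here in Python with NumPy -- a choice of ours, not the text's) of how the parallel update of a connectionist network can be reproduced on a sequential machine: every unit's new activation is computed, but one arithmetic operation at a time rather than all at once. The network size, connection weights and update rule are purely illustrative.

    # A toy 'connectionist' network simulated serially; illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_units = 1000                                        # many simple processing units
    weights = rng.normal(0.0, 0.05, (n_units, n_units))   # connection strengths between units
    state = rng.random(n_units)                           # current activation of each unit

    def update(state):
        # On genuinely parallel hardware every unit would compute its new
        # activation simultaneously; a serial machine performs the same
        # computation, but one multiply-and-add at a time.
        return np.tanh(weights @ state)

    for _ in range(100):                                  # iterate the network serially
        state = update(state)

As the paragraph above notes, the result of such a simulation is the same abstract computation, but carried out orders of magnitude more slowly than a network of simple processors operating in parallel would manage.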

Clearly, there is room for equivocation here. In practice, many of those people who have defended an AI-flavoured philosophical account of mind have tended to vacillate somewhat between the `pure functionalist' position and the comparatively impure position suggested by some of the new developments in connectionism. Obviously, you could depart quite radically from `pure functionalism' while still holding that certain sorts of electronic computing machines were capable of being the subjects of genuine mental states.


