[Next] [Up] [Previous]
Next: About this document Up: AI and the Philosophy Previous: `Pure' and `Impure' Functionalism

AI as an Account of Mind

So there is a lot of scope for variation on the issue of how much relevance you are going to give to hardware considerations in formulating a computational account of mental states. But this is not the only issue on which there is scope for different views. Another issue concerns the range of mental states which are considered to be subject to some sort of computational explanation. As we saw earlier, there seems to be a radical difference between experiential mental states, such as pains, and cognitive mental states, such as planning moves in a chess game. Mental states of the former kind seem to be much more resistant to computational explanation than do those of the latter. (This distinction between just two kinds of mental states is only intended to be a rough-and-ready preliminary division: there may be lots of other categories which it is important to delineate -- but it is a start.)

Many supporters of a computationalist account of mind would want to hold that all mental states are ultimately explicable in computational terms; others hold that only some are (the cognitive ones). We could distinguish between `full' and `partial' computationalism here. There are obviously difficulties with partial computationalism. The main problem is that it seems rather uncomfortable to hold that `the Mind' encapsulates processes of (at least) two such radically different types: those which are computational and those which are not. We certainly do not seem to experience such a division in our own firsthand experience: when we undergo a complex of states -- such as, for example, anger when some possessions are stolen -- the cognitive and the experiential aspects appear to be interwoven into a single unity.

On the other hand, it does seem to be utterly implausible to suppose that the nature of states of experiential awareness can be explained in computational terms. The considerable degree of credit that AI earns for itself in the context of, for example, sentence parsing or visual recognition, completely evaporates in the context of pleasure or pain. So if any kind of computationalism is to be accepted -- that is, any view which maintains that a given class of mental state is computational in nature -- then maybe one casualty will be the assumption that the mind is a unified field, and that all those processes traditionally classed as `mental' (and distinguished from the `physical') have a single fundamental essential characteristic.

Where does this leave us? How much weight should we give to AI, and to computer science, in deepening our understanding of the mind? One thing is clear: there are a number of alternative philosophical conclusions that might be drawn from AI. Rather more tentatively, we can conclude that pure functionalism is very unlikely to be a correct view. It is very unlikely that mental states can be explained in the purely abstract terms of computational structures, without any reference at all to how those computations are implemented. Another suggestion is that computationalism is unlikely to provide an account of all those phenomena which have traditionally been grouped under the notion of `mind'.

It may be, then, that one casualty in all of this will be our traditional notion of `mind' itself. We have been hanging on to this notion since Descartes, and indeed since long before. Even the most urbane of modern materialists -- who rejects the divinity, the immortality, and the immateriality of the soul, and who rejects the idea of the mind as any kind of independent entity not ultimately explicable in physical terms -- may still be sufficiently imprisoned by the historical vestiges of the metaphysical, religious, and ethical roles once played by that grand old notion as to be incapable of believing that perhaps there is, after all, no such thing as a correct theory of mind as such. Perhaps the truth is that there is no such sphere as `the mental', but rather only a collection of disparate phenomena for which different sorts of explanations are appropriate. For the understanding of some among these phenomena AI is likely to be central; for the understanding of others it may play only a relatively minor role.



Cogsweb Project: luisgh@cogs.susx.ac.uk