This sort of diagram conceals a great deal of complexity. Each of the named sub-processes may have a range of internal structures and sub-processes of its own, some relatively permanent, some very short-lived.
However, even this kind of complexity does not do justice to the kind of intelligence that we find in human beings and many animals. For example, there is a need for internal self-monitoring processes as well as external sensory processes. A richer set of connections may be needed between sub-processes. For example, planning may require reasoning, and perception may need to be influenced by beliefs, current goals, and current motor plans (see figure 1).
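The following is a minimal sketch, in Python, of the kind of richer connectivity just described: perception that is modulated by current beliefs, goals, and motor plans, alongside an internal self-monitoring process. All class and method names (Agent, Beliefs, Goal, MotorPlan, perceive, monitor_self) are illustrative assumptions, not taken from figure 1 or from any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    priority: float = 0.0

@dataclass
class MotorPlan:
    steps: list = field(default_factory=list)

@dataclass
class Beliefs:
    facts: dict = field(default_factory=dict)

class Agent:
    def __init__(self):
        self.beliefs = Beliefs()
        self.goals: list[Goal] = []
        self.current_plan = MotorPlan()

    def perceive(self, raw_input: dict) -> dict:
        """Perception is not a one-way pipeline: what is extracted from the
        raw input is filtered by current beliefs, goals, and motor plans."""
        relevant = {}
        for key, value in raw_input.items():
            expected = self.beliefs.facts.get(key)
            goal_relevant = any(key in g.description for g in self.goals)
            plan_relevant = any(key in str(step) for step in self.current_plan.steps)
            # Surprising, goal-relevant, or plan-relevant input gets through;
            # the rest is ignored (a crude stand-in for attentional filtering).
            if value != expected or goal_relevant or plan_relevant:
                relevant[key] = value
        return relevant

    def monitor_self(self) -> dict:
        """Internal self-monitoring alongside external sensing: report on the
        agent's own state rather than on the environment."""
        return {"n_goals": len(self.goals),
                "plan_length": len(self.current_plan.steps)}

# Example: the same raw input yields different percepts depending on goals.
agent = Agent()
agent.beliefs.facts["door"] = "closed"
agent.beliefs.facts["wall_colour"] = "grey"
agent.goals.append(Goal("open the door", priority=1.0))
print(agent.perceive({"door": "closed", "wall_colour": "grey"}))
# -> {'door': 'closed'}  (goal-relevant; the expected wall colour is filtered out)
```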
It is also necessary to be able to learn from experience, and that requires processes that do some kind of retrospective analysis of past successes and failures. The goals of an autonomous intelligent system are not static, but are generated dynamically in the light of new information and existing policies, preferences, and the like. There will also be conflicts between different sorts of goals that need to be resolved. Thus `goal-generators' and `goal-comparators' will be needed, and mechanisms for improving these in the light of experience.
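To make the idea of goal-generators and goal-comparators concrete, here is a small hedged sketch: a generator that produces goals from new information in the light of standing policies, and a comparator that resolves conflicts between goals and adjusts its preferences after a retrospective look at successes and failures. The names and the simple weighting scheme are illustrative assumptions, not a claim about how such mechanisms must be realised.

```python
class GoalGenerator:
    """Generates candidate goals from new information, in the light of
    standing policies and preferences."""
    def __init__(self, policies):
        self.policies = policies  # e.g. {"low_battery": "recharge"}

    def generate(self, new_information):
        return [goal for condition, goal in self.policies.items()
                if condition in new_information]

class GoalComparator:
    """Resolves conflicts between goals by ranking them with learned weights."""
    def __init__(self):
        self.weights = {}  # goal -> learned importance

    def choose(self, candidate_goals):
        if not candidate_goals:
            return None
        return max(candidate_goals, key=lambda g: self.weights.get(g, 0.0))

    def learn(self, goal, succeeded):
        """Retrospective analysis of a past success or failure: nudge the
        weight of the pursued goal up or down."""
        delta = 0.1 if succeeded else -0.1
        self.weights[goal] = self.weights.get(goal, 0.0) + delta

# Example: goals are generated dynamically as information arrives, and the
# comparator's preferences shift after feedback on outcomes.
generator = GoalGenerator({"low_battery": "recharge", "intruder": "raise_alarm"})
comparator = GoalComparator()

candidates = generator.generate({"low_battery", "intruder"})
chosen = comparator.choose(candidates)
comparator.learn(chosen, succeeded=False)   # that choice worked out badly
print(comparator.choose(candidates))        # next time a rival goal may win
```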
Further complexities arise from the need to be able to deal with new information and new goals by interrupting, modifying, temporarily suspending, or aborting current processes. I believe that these are the kinds of requirements that explain some kinds of emotional states in human beings, and we can expect similar states in intelligent machines.
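As a rough illustration of this requirement, the sketch below shows a controller that can suspend, resume, or set aside current processes when a new goal arrives. The status names and the priority-based interruption policy are assumptions introduced for the example; they are one simple way of realising the requirement, not the design the text commits to.

```python
from enum import Enum, auto

class Status(Enum):
    RUNNING = auto()
    SUSPENDED = auto()
    COMPLETED = auto()

class Process:
    def __init__(self, name, priority, steps):
        self.name = name
        self.priority = priority
        self.steps = list(steps)   # remaining work
        self.status = Status.RUNNING

    def step(self):
        if self.status is Status.RUNNING and self.steps:
            self.steps.pop(0)
            if not self.steps:
                self.status = Status.COMPLETED

class Controller:
    """Decides what happens to the current process when a new goal arrives."""
    def __init__(self):
        self.current = None
        self.suspended = []

    def handle_new_goal(self, new_process):
        if self.current is None or self.current.status is Status.COMPLETED:
            self.current = new_process
        elif new_process.priority > self.current.priority:
            # Urgent new goal: suspend the current process and switch to it.
            self.current.status = Status.SUSPENDED
            self.suspended.append(self.current)
            self.current = new_process
        else:
            self.suspended.append(new_process)  # deal with it later

    def run_step(self):
        if self.current:
            self.current.step()
            if self.current.status is Status.COMPLETED and self.suspended:
                # Resume the highest-priority suspended process.
                self.current = max(self.suspended, key=lambda p: p.priority)
                self.suspended.remove(self.current)
                self.current.status = Status.RUNNING

# Example: a routine task is suspended when a more urgent goal appears.
controller = Controller()
controller.handle_new_goal(Process("tidy_room", priority=1, steps=["a", "b", "c"]))
controller.run_step()
controller.handle_new_goal(Process("answer_phone", priority=5, steps=["pick_up"]))
controller.run_step()                       # finishes answer_phone
print(controller.current.name)              # tidy_room resumes afterwards
```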
Whether or not the design sketched above is accurate, ideas developed in exploring such designs may prove to be essential for developing correct theories about how the mind works. This may be so even if the human mind is embodied in a physical system whose basic mechanisms are very different from a modern digital computer.