In 1936 Alan Turing proposed the idea, related to Gödel's, that any effective procedure that can be accomplished by human problem solvers can be formalized. Since a machine is an embodiment of a formalism, the procedure can then be duplicated by a machine. Hence the idea of the universal machine-- the computer.
Unlike Gödel's theorem, this is held to be only a conjecture, one that works for a large but indefinite class of procedures. It seems to involve some circularity, in that an "effective procedure" is already a formalism, and problem solving already implies a restriction to certain kinds of activities which may be formalized. It may not be true that all human thought and behavior, including cognition, is formalizable and therefore replicable by a machine. The Turing hypothesis is the cornerstone of artificial intelligence. Ultimately it is the notion that the functioning of a human brain can be understood and abstracted in sufficient detail to be replicated by a computer.
But the grand dreams of classical AI have essentially met with failure, and we should be skeptical in principle about the idea that intelligent behavior can be completely formalized, abstracted out of its embodied context. When a "piece" of behavior (of a machine or of another creature) seems to resemble a human action, implicitly each is being compared to a common formalism, which is the essence of that action as the human mind has identified and abstracted it. One simulates the other because they both embody the same formalism. But one does not thereby replicate the other, whose being has not been exhausted in the formalism. In addition, there is the danger of indulging in a pathetic fallacy-- judging the purposiveness or intelligence of the action in terms of human premises. For an action to be considered genuinely intelligent, it would have to be evaluated in terms of the premises of the intelligent system itself (which, of course, must have such premises of its own!).
An organism is a causally closed system-- in the sense that every normal process within the organism has a cause within the organism itself. In contrast, a machine is causally open, in that changes within it are driven by an outside environment. A machine, like an individual organism, makes waste-products which are not part of it, but in Nature-- that is, the biosphere as a whole-- there is no waste. Everything is recycled.
A computer program can indeed simulate some human mental processes and actions, and the optimism of classical AI theorists was founded on their success at simulating obviously intentional systems like logic. But objectified notions of structure, function and information can mislead us to the conclusion that an information processing system processes the same information as a brain, embodies the same structure, performs the same operations. In some cases this assumption may be justified. These seem to be cases that involve aspects of human intelligence that have already been formalized (like logic) or for which effective procedures have been found.
The limits of simulation hinge on just how much of human (or any) activity and structure can be formalized. We could draw a distinction between the organism's activity as a causal system and as an intentional system. It may be that a natural phenomenon can never be formally exhausted, while certainly an intentional system can be. The behavior of an intentional system may be carried on the richer analog substrate-- for instance, of neural activity-- just as a message is carried on a physical signal. If so, then there is yet some hope for reproduction through formalization and traditional approaches to AI, since the elusive wealth of the analog carrier is essentially irrelevant to the intentional system. But this rests, in any case, on the assumptions that behavior can be treated purely as intentional, and that the intentional system can be correctly identified and exhaustively understood by an outside agent. This is far from certain. There are, of course, other reasons for the failure of traditional AI. For instance, it has become apparent that simulations of the brain based on linear processing are unrealistic or false.
Some critics of AI believe that intentionality is an inherently biological phenomenon. Perhaps what they mean is that it is an inherently embodied phenomenon, a product of natural selection. No artificial system has, as of today, its own intentionality-- no doubt because no artificial system has come close to duplicating the conditions of embodiment which determine the intentionality of organisms. Little effort in this direction has been made in traditional AI, work proceeding instead on the simulation of fragments of behavior.
Developments in the new field of Artificial Life (AL) may circumvent the limitations of formalization and linear processing. If artificial systems can be evolved through circumstances equivalent to natural selection, then the conditions of embodiment could be met, resulting in systems possessing their own intentionality, and hence intelligence. However, by definition, this could not be a process entirely within human control. It could not therefore be used for the purpose of exhaustively producing or replicating a specified organism or structure. Human purposes would have the same relationship to artificial evolution as they do to natural evolution. In other words, the process could be partially controlled through applied selective breeding, as it presently is with natural organisms. Evolution could be guided but not strictly determined.
At present, the conditions of embodiment are being simulated in computers, through programs which simulate both the defining properties of life and the process of natural selection. At this point, these artificial organisms are only simulations, principally because embodiment itself is only simulated. These are virtual, not physical, organisms. The selection rules are arbitrary inventions of programmers, not conditions in the real world, nor the result of a competitive environment of other preternatural self-defining entities. We do not yet have the capacity to construct self-replicating physical machines which could be turned loose to compete with each other (and probably with us!) for survival. But this capability is not far off. Developments in microtechnology may soon render it possible to physically embody the virtual simulations of AL, resulting in true artificial organisms, and a new chapter in the history of the biosphere. Observers of this scene already warn of the inevitability of superintelligent machines and a "singularity"-- a big crunch in the exponentially developing future of technology-- a point of no return beyond which technology is no longer within human control. New entities might arise as the principal players in a game with analogous but unknown rules.
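The sense in which such selection rules are the programmer's invention can be seen in even a minimal genetic-algorithm sketch. Everything below-- the target string, the fitness function, the mutation rate, the survival rule-- is an arbitrary choice of the programmer, not a condition imposed by any real environment:

```python
import random

random.seed(0)

TARGET = "artificial life"              # goal chosen by the programmer
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE, MUTATION_RATE = 100, 0.05     # likewise arbitrary parameters

def fitness(candidate):
    # Selection rule invented by the programmer: count matching characters.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

def evolve():
    # Start from a random population of strings the same length as TARGET.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(POP_SIZE)]
    generation = 0
    while TARGET not in population:
        # Keep the fittest half unchanged, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        generation += 1
    return generation
```

The population reliably "evolves" toward the target, but only because the programmer has decreed in advance what counts as fit-- precisely the point at which such a simulation parts company with natural selection, where no such external criterion exists.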
From the beginning of time the fact of mortality has rubbed our self-conscious noses in the vulnerabilities of embodied existence. Certainly this has contributed to the desire to control Nature and even to create life. The mechanistic metaphor has proven a powerful instrument of human control. On the other hand, it has added to the injury of mortality a succession of insults to human centrality in the scheme of things. Copernicus, Newton, Darwin, Einstein and Freud facilitated major downward revisions of our special human status. Perhaps the final nail in the coffin of our superiority will be driven home, ironically, by the advent of superintelligent artificial organisms. Whatever consciousness is, it will then be finally clear that it is not a uniquely human prerogative. It could turn out that we are not the pinnacle of evolution, but merely an expendable steppingstone toward inconceivable new forms of life.
What are the limits to mechanism as a metaphor of biological reality? Can a living cell, for example, be considered a self-replicating machine? If so, we are faced with the possibility of constructing or evolving microscopic self-replicating machines modelled on living cells. If these little "factories" were controllable from the macro level, they might in their ensemble constitute programmable matter that can reassemble itself into any desired shape and function! They might also become a rogue new form of quasi-life, a new plague dangerously out of human control.
The program of instructions by which the cell duplicates itself may indeed be an exhaustible formalism-- since it appears to consist of a finite structure. But can it be assumed that all information governing the process of self-replication is contained strictly in the genetic material within the cell? Even in the case of a computer, by analogy, not all the information is contained in the program. The functioning of a computer is an interaction of software, hardware and human user. Is there not a similar figure-background relationship between DNA and the total environment of the cell, which includes the materials from which the cell constructs a duplicate of itself and chemical instructions in the environment or in the body as a whole? As for multi-celled organisms, the process of cell differentiation remains one of the great mysteries of biology. It may turn out that there is a big difference between an organism-- natural or artificial-- and a machine, however sophisticated, which makes products specified by human intent and under human control. It is misleading to consider a cell a factory, because the principal product of an organism is itself. This is not only because it self-replicates, but because its whole endeavor is to maintain its existence and identity. As its own intentional system, it is only incidentally an instrument of human intention. It is not a tool, except in the way that nature is presently managed for human purposes.
As long as the process of fabrication is understood in the traditional sense, we are on familiar ground. We already know, for instance, that manufactured products are subject to various imperfections, and that industry makes pollution (precisely because natural processes do not conform totally to human intention). Thought is an approximation, and never encompasses the whole of physical process. There can always be an unforeseen and perhaps undesirable by-product, simply because thought is simplistic. But when we assume the perfectibility of thought, our footing is precarious-- as though we believed that Nature owes it to us to conform to our ideas. If we wish to make tiny factories that operate in the traditional way as instruments under human control, then we can expect of nanotechnology more or less what we get from present technology-- multiplied incredibly by speed and capacity. But any technology that truly imitates the wisdom of Nature will necessarily be out of human control.
What about the problems of control between levels of scale? How would nanocomputers, controlling microscopic molecular factories, be controlled on the macroscopic level? It is one thing to imagine nano-factories that are self-contained, self-programming, self-modifying and self-evolved just like microbes. Like organisms, these could only be controlled indirectly. It is quite another thing to imagine tiny factories fully programmable from the macro level. How would communication take place? Through radio transmissions, each microscopic factory on a different frequency? Practical difficulties aside, might there be theoretical limits as well? Perhaps it could be argued that such communication between hierarchical levels already exists-- in the human body, for instance. But this is no one-way directive from an external source of intelligence. The cells of the body are not "operated by" the brain, nor does the person enter into conscious communication with them. Rather, the organism is a continuous cycling of influences, through many channels, from bottom up and from top down.