This paper examines the ideas set forth by Allen Newell and Herbert Simon in their seminal paper. The evidence for their hypotheses will be analyzed and the implications of their argument will be discussed. This paper begins with a concise summary of the physical symbol system and heuristic search hypotheses and then discusses the shortcomings in their argument.
The key point that Newell and Simon argue is that symbols are at the core of intelligence, where intelligence is defined as the ability of an entity to achieve its goals in a difficult, variable environment. This idea is supported by two hypotheses, the physical symbol system hypothesis and the heuristic search hypothesis, which will be examined. Newell and Simon reject the idea that intelligence is due to an "intelligence principle", thus rejecting a hidden homunculus, and suggest instead that intelligence emerges from a composite system which can store and manipulate symbols. From this belief emerges the hypothesis of physical symbol systems.
Before delving into their hypotheses, specific terminology and its ramifications must be covered. Symbols are physical patterns which can be arranged into data structures, called expressions. Within an expression, symbols are related to one another with physical descriptions, e.g., symbol α is next to symbol β. A physical symbol system contains a collection of expressions as well as a collection of processes which operate on such expressions, returning new expressions (which might be empty) after manipulation.
A physical symbol system, then, is a type of system which contains a set of expressions which change over time. Newell and Simon consider both humans and computers examples of physical symbol systems. Expressions are said to designate an object when given the expression, the system can affect the object or act in a manner dependent on that object. Essentially, designation means an object has been accessed by the physical symbol system in some way. Interpretation is a special form of designation where the system can perform a process referred to by an expression.
Certain properties are entailed by designation and interpretation: 1) symbols may designate any process, even those unknown at system start; 2) there is a symbol for each executable process; 3) expressions may be modified arbitrarily by some process; 4) expressions exist until modified; 5) there are essentially no limits on the number of expressions.
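The terminology above can be made concrete with a minimal sketch. The representation below (symbols as atomic strings, expressions as nested tuples, processes as functions from expressions to expressions) is an illustrative assumption, not a construction from the paper itself:

```python
# Hypothetical sketch: symbols are atomic tokens, expressions are
# structures built from symbols, and processes map expressions to
# new expressions held in the system's collection.

def swap(expr):
    """A process: takes the expression (a, b) and returns the new
    expression (b, a). Processes yield new expressions rather than
    mutating old ones, which persist until modified."""
    a, b = expr
    return (b, a)

memory = [("alpha", "beta")]       # the system's collection of expressions
memory.append(swap(memory[0]))     # applying a process yields a new expression

print(memory)  # [('alpha', 'beta'), ('beta', 'alpha')]
```

Even this toy system exhibits the listed properties in miniature: the collection can grow without fixed bound, and any process can be applied to any expression.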
To a reader with a background in computing, it becomes clear that this description outlines a programming language. The terminology just presented further hints at a Lisp-like language for describing symbol systems, and frames the argument for both humans and computers being physical symbol systems. It is with this terminology that Newell and Simon present their hypotheses.
The first hypothesis is the physical symbol system hypothesis, which states that "a physical symbol system has the necessary and sufficient means for general intelligent action." General intelligent action means intelligence similar to that of humans, as recreation of human-level intelligence is the ultimate goal. By necessity, the hypothesis means that any generally intelligent system must be a physical symbol system. By sufficiency, the hypothesis means that a physical symbol system of sufficient complexity can produce general intelligence. It is important to note that Newell and Simon do not claim that any given physical symbol system will give rise to general intelligence; they do not consider intelligence so trivial, and creating a generally intelligent physical symbol system would be an immense systems-building task.
Evidence for the symbol system hypothesis is presented from both computer science and psychology. Programs successful at solving operational research problems, playing games, and theorem proving are used to make the case for their hypothesis. From psychological experiments on humans, evidence comes from verbal introspection during problem solving. According to Newell and Simon, when humans are solving tasks out loud, they are performing symbol manipulation. This is their evidence that general intelligence requires symbol manipulation. Further evidence given is simply the lack of any competing hypotheses.
The key idea of the physical symbol system hypothesis is that any physical symbol system of sufficient power and complexity can be made to exhibit general intelligence. To reiterate, both humans and computers are considered examples of physical symbol systems, so it is argued that it is possible, through creating symbol manipulating programs, to create a computer program with general intelligence. Intelligence requires more than just representation, though, and Simon and Newell postulate a second hypothesis as a mechanism for utilizing this representation.
The second hypothesis is the heuristic search hypothesis which, briefly stated, means that problem solving, and thus intelligence by its previous definition, is heuristic-driven search through a problem space. The measure of intelligence is the measure of the quality of the heuristics guiding the search. Furthermore, because these heuristics are themselves symbols, they can be modified and improved over time, allowing for more efficient search through the problem space.
Heuristics are necessary for search due to the size of the problem space and the limited processing power of any physical symbol system. Furthermore, it is not possible to ever construct a physical symbol system which could perform brute-force search through the problem space with enough speed to give it any appearance of general intelligence. Brute-force search would, for all but the most trivial of problems, force a physical symbol system to wait for the heat death of the universe before it could perform any action—only to restart its search now that a new world state had to be considered.
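The intractability of brute-force search follows from simple arithmetic: with branching factor b and depth d, exhaustive search visits on the order of b^d states. The figures below (a branching factor of 10, a depth of 20, a billion states per second) are illustrative assumptions, but even these modest numbers are damning:

```python
# Why brute-force search fails: the state count grows as b**d.
b, d = 10, 20                    # assumed branching factor and search depth
states = b ** d                  # 10**20 candidate states
seconds = states / 1e9           # at an assumed 1e9 states per second
years = seconds / (60 * 60 * 24 * 365)
print(f"{states:.0e} states, roughly {years:.0f} years of search")
```

At a billion states per second, the search would run for over three thousand years, and any change in the world would invalidate the result.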
There are two components to heuristic search, a testing function and a generator function. The testing function determines if the current expression being examined is a solution, and the generator function creates new candidate solutions from the problem space. In order for heuristic search to be possible, some conditions must hold over the problem space. First, the problem space must have some order. Without this condition, no heuristic could do better than random search, thus precluding the creation of any intelligent system. Second, it must be possible to detect patterns in the problem space. Third, the generator must be able to change its behavior depending on the patterns detected. The final two conditions imply that the search process must be able to guide itself through problem space to find a solution efficiently. For heuristic search to be tractable, the generator must be able to selectively generate candidate solutions from the problem space in order to create a sparse tree. Without this selectivity, the search process would be under threat of combinatorial explosion.
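The generate-and-test scheme can be sketched as a best-first search, where the heuristic orders candidates so that the tree actually explored stays sparse. The toy problem and all function names below are illustrative assumptions, not drawn from the paper:

```python
import heapq

def heuristic_search(start, is_solution, generate, heuristic):
    """Best-first search: always expand the candidate the heuristic
    ranks most promising (lowest score)."""
    frontier = [(heuristic(start), start)]
    seen = {start}
    while frontier:
        _, current = heapq.heappop(frontier)
        if is_solution(current):                # the testing function
            return current
        for candidate in generate(current):     # the generator function
            if candidate not in seen:
                seen.add(candidate)
                heapq.heappush(frontier, (heuristic(candidate), candidate))
    return None

# Toy problem: reach 42 from 1 using the moves +1 and *2.
goal = 42
result = heuristic_search(
    start=1,
    is_solution=lambda n: n == goal,
    generate=lambda n: [n + 1, n * 2] if n <= goal else [],
    heuristic=lambda n: abs(goal - n),         # distance to the goal
)
print(result)  # 42
```

The selectivity the paper demands lives entirely in the heuristic: a well-ordered frontier visits a small fraction of the states a brute-force enumeration would.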
The ideas presented by Newell and Simon offer a view which differed considerably from Behaviorism, another hypothesis popular at the time of the paper's presentation. Taken together, these two hypotheses form an elegant idea: that heuristic guided search over symbol data structures is a required and sufficient mechanism for human-level intelligence. If this idea were true, the general architecture for intelligent action would be a simple search function. Cognitive science would be a field devoted to determining the heuristics used for the variety of intelligent actions. But, flaws in their evidence and in their reasoning make it impossible to accept their claim in its strongest form.
The idea that the architecture for intelligence is located solely within the brain is hard to reconcile with successful work in embodied cognition by scientists like Rolf Pfeifer and Rodney Brooks. Their work has shown the effect of morphology on intelligent action—essentially showing that certain kinds of intelligence can be implicit in morphology, i.e., without an explicit internal representation. The Cornell Ranger illustrates this point. The Ranger possesses little intelligence between the ears, as it were, but is very successful at walking long distances with little energy. It accomplishes this because the intelligence required for walking is implicit in its body, similar to humans, meaning that the Ranger need not calculate how much to modify each joint in its body to achieve the desired foot position. Walking is difficult for robots like the Asimo or Nao which, while far more intelligent, cannot perform as well as the Ranger. Their approach to walking is over-thought because they do not offload walking-related processing into their morphology, which makes their walking slow and energy-inefficient.
The brain as symbol manipulator faces other challenges of efficiency. At the time of this paper, symbol manipulation had been used successfully in constrained tasks which lend themselves to logical reasoning. But when trying to solve the more basic problems of intelligence, symbol manipulation does not offer compelling arguments. One can imagine the difficulty of such low-level tasks as scene recognition for a physical symbol system. It is not obvious at what level symbols should be designated in a visual scene, nor what a search function would be seeking in this representation. Searching over scene space to recognize objects would likely take a long time. Coupled with the rate at which new images enter the brain, it seems impossible to envisage a way to make symbol manipulation work. The space of scenes is essentially unbounded; even the best heuristic would have trouble making search tractable.
Another problem arises from the use of heuristics for search. How does a physical symbol system determine which heuristic is most appropriate for the task at hand? This is an important question, because if left unanswered it might lead one to posit a heuristic homunculus, reducing this hypothesis's explanatory power to nil. There are two ways out of this situation.
One approach would be to use only one heuristic function for the entire physical symbol system. This entails a large, complex heuristic which is capable of evaluating a variety of situations. But this is a heuristic which could have evolved and would have a naturalistic explanation. This single heuristic could be efficient in its execution, since the entire heuristic would be inline, to borrow a term from the computer science world. The main difficulty is the modification of such a complex function. It seems reasonable for it to be modified over the time periods of evolutionary processes, but a generally intelligent system must be able to modify itself within its lifetime. Modifying a large, complex heuristic function to incorporate new tasks and experiences—say, the use of computers—would be a slow and energy-intensive process.
Another approach would be the use of a metaheuristic. Essentially, a physical symbol system would contain a vast number of heuristics, each for a specific task. The goal of the metaheuristic would then be to search over the set of available heuristics, in heuristic space, and select the one which best fits the input patterns—reusing the same general search function. This approach is particularly elegant, but searching over the full set of heuristics may be impossible, and is almost certainly computationally intensive.
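The metaheuristic idea can be sketched as a selection step layered over ordinary heuristic application. The scoring scheme, the two toy heuristics, and every name below are invented for illustration only:

```python
# Hypothetical metaheuristic: pick the task-specific heuristic whose
# recorded fit to the current kind of input is highest, then apply it.

def prefer_small(n):
    return n        # lower value = better candidate

def prefer_large(n):
    return -n       # higher value = better candidate

heuristics = {"small": prefer_small, "large": prefer_large}

def metaheuristic(candidates, fits):
    """Search heuristic space first: choose the best-fitting heuristic,
    then use it to choose among the candidates."""
    name = max(fits, key=fits.get)          # heuristic with the best fit
    h = heuristics[name]
    return name, min(candidates, key=h)     # lowest score wins

name, choice = metaheuristic([3, 7, 11], fits={"small": 0.2, "large": 0.9})
print(name, choice)  # large 11
```

The cost the text warns about is visible even here: every selection pays for a pass over heuristic space before any problem-space search begins.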
The paper has many interesting ideas, which in their less literal form are having success in systems which are mixtures of symbolic and non-symbolic processing. Newell and Simon presented a well-thought-out argument dependent on limited empirical data. The fall of the physical symbol system hypothesis was ushered in by new ideas like embodied cognition and connectionism. And while the strong form of their hypothesis is widely regarded as false, it is likely that their hypothesis will survive in some weaker form when artificial general intelligence is finally created.
A. Newell and H. Simon. Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM, 19(3): 113–126, 1976.
R. Pfeifer and J. Bongard. How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press, 2006.
R. Brooks. Elephants Don't Play Chess. Robotics and Autonomous Systems, 6(1-2):3–15, 1990.
This is similar to IBM's Watson, which runs thousands of machine learning algorithms simultaneously and uses a higher-level mechanism to collect them together and form an answer.
N. Nilsson. The Physical Symbol System Hypothesis: Status and Prospects. Lecture Notes in Computer Science. Springer, 2007.