Until recently, computers have been primarily sequential, or serially operating, machines. As pointed out earlier, the brain operates in parallel and makes up for its slower-operating individual parts in this way. Designers are now working on parallel operation for computers, an improvement that may be even more important than random-access memory.
Bionics
It is obvious that while there are many differences between the brain and the computer, there are also many striking similarities. These similarities have given rise to the computer-age science of “bionics.” A coinage of Major J. E. Steele of the Air Force’s Wright Air Development Center, bionics means applying knowledge of biology and biological techniques to the design of electronic devices and systems. The Air Force and other groups are conducting broad research programs in this field.
As an indication of the scope of bionics, Dr. Steele himself is a flight surgeon, trained primarily as a neurologist and psychiatrist, with graduate work in electronics and mathematics. Those engaged in bionics research include mathematicians, physical scientists, embryologists, philosophers, neurophysiologists, and psychologists, as well as scientists and engineers in the fields more commonly associated with computers, such as electronics and other engineering disciplines.
A recent report from M.I.T. is indicative of the type of work being done: “What the Frog’s Eye Tells the Frog’s Brain.” A more ambitious project is one called simply “Hand,” which is just that. Developed by Dr. Heinrich Ernst, “Hand” is a computer-controlled mechanical hand described as the first artificial device to possess a limited understanding of the outside world. Although it will undoubtedly have industrial and other applications, “Hand” was developed primarily as a study of the cognitive processes of man and animals.
Besides the Air Force’s formal bionics program, there are other research projects of somewhat similar nature. At Harvard, psychologists Bruner and Miller direct a Center for Cognitive Studies, and among the scientists who will contribute are computer experts. Oddly, man knows little of his own cognitive or learning process despite centuries of study of the human mind. It has been said that we know more about Pavlov’s dog and Skinner’s pigeons than we do about ourselves, but now we are trying to find out. Incidentally, some find it logical that man study animals or computers rather than his own mind, since they doubt that an intelligence can ever fully understand itself.
As an example of the importance placed on this new discipline, the University of California at Los Angeles recently originated a course in its medical school entitled “Introduction to the Function and Structure of the Nervous System,” designed to help bridge the gap between engineering and biology. In Russia, M. Livanov of the Soviet Academy Research Institute of Physiology in Higher Nervous Activity has used a computer coupled with an electroencephaloscope in an effort to establish the pattern of cortical connections in the brain.
While many experts argue that we should not necessarily copy the brain in designing computers, since the brain is admittedly a survival device and somewhat inflexible as a result of its conditioning, much benefit already appears to have come from the bionics approach.
The circuitry of early computers comprised what is called “soldered” learning. This means that connections run from certain components to certain other components, so that when switches operate in a given order, built-in results follow. One early teaching device, called the Electric Questionnaire, illustrates this built-in knowledge. A card of questions and answers is slipped over pegs that are actually terminals of interconnected wires. Probes hooked to a battery are touched to a question and the supposed correct answer. If the circuit is completed, a light glows; otherwise the learner tries other answers until successful.
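The point of “soldered” learning is that the device’s knowledge is fixed wiring, not stored data. A minimal sketch of the Electric Questionnaire’s behavior, with invented question and answer labels for illustration:

```python
# "Soldered" learning: the card's knowledge is its hidden wiring.
# Each question peg is soldered to exactly one answer peg; touching
# the probes to a wired pair completes the circuit and lights the lamp.
# (The peg labels below are illustrative, not from any actual card.)

# Hidden wiring: question peg -> the answer peg it is soldered to.
WIRING = {
    "Q1": "A3",
    "Q2": "A1",
    "Q3": "A2",
}

def probe(question_peg: str, answer_peg: str) -> bool:
    """Return True (the lamp glows) if the two pegs are wired together."""
    return WIRING.get(question_peg) == answer_peg

# A learner tries answers for Q1 until the lamp glows.
for answer in ("A1", "A2", "A3"):
    if probe("Q1", answer):
        print(f"Lamp glows: Q1 -> {answer}")
        break
```

Nothing in the device changes as the learner probes; only the learner learns, which is exactly what distinguishes this built-in knowledge from the trainable systems described next.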
More sophisticated systems are those of “forced” learning and free association. Pioneering attempts at teaching a computer to “perceive” were conducted at Cornell University under contract with the Air Force to investigate a random-network theory of learning formulated by Dr. Frank Rosenblatt. Specifically, the Perceptron learns to recognize letters placed in front of its “eyes,” an array of 400 photocells. The human brain accomplishes perception in several steps, though at a high enough rate of operation to be thought of as a continuous, almost instantaneous, act. Stimuli are received by sense organs; impulses travel to neurons and form interconnections resulting in judgment, action if necessary, and memory. The Perceptron machine functions in much the same manner.
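The learning process described above can be sketched as a toy single-unit perceptron. This is a greatly simplified stand-in for Rosenblatt’s machine, not a model of it: a 3×3 “retina” instead of 400 photocells, two invented letter-like patterns, and an illustrative learning rate.

```python
# A toy perceptron in the spirit of Rosenblatt's machine: photocell
# inputs feed one output unit, and each wrong judgment nudges the
# connection weights toward the correct answer. The patterns and
# learning rate are illustrative assumptions, not from the original.

# 3x3 pixel patterns, flattened row by row (1 = lit photocell).
PATTERN_T = [1, 1, 1,  0, 1, 0,  0, 1, 0]   # crude "T", target +1
PATTERN_L = [1, 0, 0,  1, 0, 0,  1, 1, 1]   # crude "L", target -1

def predict(weights, bias, pixels):
    """The unit's judgment: +1 or -1 from the weighted pixel sum."""
    s = bias + sum(w * x for w, x in zip(weights, pixels))
    return 1 if s >= 0 else -1

def train(samples, epochs=20, rate=0.1):
    """Perceptron rule: adjust weights only when the judgment is wrong."""
    weights, bias = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, target in samples:
            error = target - predict(weights, bias, pixels)
            for i, x in enumerate(pixels):
                weights[i] += rate * error * x
            bias += rate * error
    return weights, bias

samples = [(PATTERN_T, 1), (PATTERN_L, -1)]
weights, bias = train(samples)
print(predict(weights, bias, PATTERN_T))   # prints 1
print(predict(weights, bias, PATTERN_L))   # prints -1
```

The parallel to the passage is direct: the photocells are the sense organs, the weighted sum is the interconnection of impulses, and the retained weights are the memory that improves future judgments.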