Simplified version of a mammalian visual system (A) and Perceptron simulating the biological network (B).

The forced-learning technique, in which Perceptron was told both when it correctly identified a letter and when it missed, was used first. Later it was found that “corrective” or reinforced teaching, which notes only the errors, was more effective. After Perceptron had seen each letter fifteen times and received the proper corrections, it could identify all the letters correctly.
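
The corrective teaching described here corresponds to what is now known as the perceptron error-correction rule: nothing changes on a correct answer, and the connection weights are adjusted only on a miss. A minimal sketch in Python, assuming each letter is flattened into a list of 0/1 photocell readings with a ±1 target; the fifteen passes echo the article’s fifteen showings, but the function name and coding are illustrative assumptions, not details of Rosenblatt’s hardware:

    def train_perceptron(samples, n_inputs, passes=15, rate=1.0):
        # Corrective ("error-correction") training: weights change only
        # on a miss, just as the teaching scheme notes only the errors.
        w = [0.0] * n_inputs
        bias = 0.0
        for _ in range(passes):                   # e.g. fifteen looks at each letter
            for x, target in samples:             # x: photocell readings; target: +1 or -1
                activation = sum(wi * xi for wi, xi in zip(w, x)) + bias
                guess = 1 if activation > 0 else -1
                if guess != target:               # a miss: apply the correction
                    w = [wi + rate * target * xi for wi, xi in zip(w, x)]
                    bias += rate * target
        return w, bias

    # Tiny illustration: two 4-pixel "letters" coded as 0/1 readings.
    samples = [([1, 0, 1, 0], 1), ([0, 1, 0, 1], -1)]
    w, bias = train_perceptron(samples, n_inputs=4)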

Announcement of Perceptron triggered many wild headlines and a general misconception in the public mind. Dr. Rosenblatt and the machine’s other developers wisely refuse to comment on its potential, but the number of experiments being conducted indicates wide scientific interest, and perceptron has attained the prestige of an uncapitalized generic term. However, the theory of its random process has been questioned by scientists, among them Theodore Kalin, one of the builders of an early electrical logic machine. Kalin feels that intelligence presupposes a certain minimum of a priori knowledge: the wired-in learning of the computer, or the instincts and inherited qualities of animals. This, of course, echoes the thoughts of Kant, who deplored such a notion as akin to all the books and papers in a library somehow arranging themselves properly on the shelves and in the filing cabinets.

Indeed, the whole idea of finding human intelligence mirrored in the electronic innards of the computer has been flatly denounced at some scientific symposiums. Computers given an intelligence test at the University of Michigan “flunked,” according to researchers. Another charge is that the response of a brain neuron depends on its history and thus cannot be compared with that of a computer element. However, other researchers seem to have anticipated this weakness and are working on electronic or electrochemical neurons that are likewise conditioned by their input. Despite the criticism, bionics work proceeds on a broad front.

More recently a machine called Cybertron has been developed by the Raytheon Company. This more sophisticated machine is being trained to recognize sonar sounds, using the corrective technique. If Cybertron errs, the teacher pushes a “goof” button. When the machine is fully developed, Raytheon feels it will be able to recognize all typical American word sounds, using its 192 learning elements, and to type them out.

Computers generally do “logical” operations. Many human problems do not seem to be logical, and can be solved only by experience; the mathematician Gödel demonstrated some years ago that no system of formal logic can establish every true statement. Since Cybertron solves such “alogical” problems, its builders prefer not to call it a computer, but rather a self-organizing data-processor that adapts to its environment. Among the variety of tasks Cybertron could perform are the grading of produce and the recognition of radar signals. Raytheon foresees wide application for Cybertron as a master learner, with apprentice machines incapable of learning themselves but able to “pick the brains” of Cybertron and thus do similar tasks.
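
The master-and-apprentice arrangement amounts to training one adaptive machine and then copying its learned parameters into fixed units that cannot learn. A hypothetical sketch, continuing the earlier perceptron example; the Apprentice class and single-layer structure are assumptions for illustration, not Raytheon’s design:

    class Apprentice:
        # A non-learning unit: it only evaluates weights handed to it.
        def __init__(self, w, bias):
            self.w, self.bias = list(w), bias

        def classify(self, x):
            s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.bias
            return 1 if s > 0 else -1

    # "Picking the brains" of a trained master: freeze its weights and
    # hand copies to any number of apprentices.
    # master_w, master_bias = train_perceptron(samples, n_inputs=4)
    # apprentices = [Apprentice(master_w, master_bias) for _ in range(10)]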

With the letter C in its field of view, Perceptron’s photocells at top center are activated. Simultaneously, response units in the panel at right identify the letter correctly. (Cornell Aeronautical Laboratory)

The assembly of machines like Perceptron and Cybertron requires elements that simulate the brain’s neuron. One such component that has evolved from bionics research is the Artron, or artificial neuron. Inside the Artron are logic gates and inhibit gates. By means of reward or punishment, the Artron learns to operate a “statistical switch” and send impulses to other Artrons or to a readout. There are two interesting parallels here, besides the operation of a simulated neural net. One is the statistical approach to decisions and learning: the late John von Neumann theorized that the brain’s actions might be statistical, or based on probability. The second is that the designers of the Artron see a similarity between its operation and Darwin’s theory of natural selection.
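
The “statistical switch” suggests an element that passes impulses with a learned probability, raised by reward and lowered by punishment, in the spirit of von Neumann’s probabilistic view of the brain. A hypothetical sketch, assuming a simple additive update; the class name, step size, and rule are illustrative guesses, since the article gives no circuit details:

    import random

    class StatisticalSwitch:
        # Sketch of an Artron-style element: it fires with a learned
        # probability that reward raises and punishment lowers.
        def __init__(self, p_fire=0.5, step=0.05):
            self.p_fire = p_fire
            self.step = step
            self.last = False

        def fire(self):
            # Decide probabilistically whether to send an impulse onward.
            self.last = random.random() < self.p_fire
            return self.last

        def reinforce(self, reward):
            # Reward makes the most recent decision more likely to recur;
            # punishment makes it less likely.
            delta = self.step if reward else -self.step
            if not self.last:
                delta = -delta
            self.p_fire = min(1.0, max(0.0, self.p_fire + delta))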