6: The Electronic Brain

The idea of a man-made “brain” is far from new. Back in 1851, Dr. Alfred Smee of England proposed a machine made up of logic circuits and memory devices which would be able to answer any question it was asked. Dr. Smee was a surgeon, keenly interested in the processes of the mind. Another Britisher, H. G. Wells, wrote a book called World Brain in 1938 which proposed much the same thing: a machine with all knowledge pumped into it, and capable of feeding back answers to all problems.

If it was logical to credit “human” characteristics to the machines man contrived, the next step then was to endow the machine with the worst of these attributes. In works including Butler’s Erewhon, the diabolical aspects of an intelligent machine are discussed. The Lionel Britton play, Brain, produced in 1930, shows the machine gradually becoming the master of the race. A more physical danger from the artificial brain is the natural result of giving it a body as well. We have already mentioned Čapek’s R.U.R. and the Ambrose Bierce story about a chess-playing robot without a built-in sense of humor, who strangles the human being who beats him at a game. With these stories as models, other writers have turned out huge quantities of work involving mechanical brains capable of all sorts of mischief. Most of these authors were not as well-grounded scientifically as the pioneering Dr. Smee who admitted sadly that his “brain” would indeed be a giant, covering an area about the size of London!

The idea of the giant brain was given a new lease on life by the early electronic computers that began appearing in the 1940’s. These vacuum-tube and mechanical-relay machines with their rows of cabinets and countless winking lights were seized on gleefully by contemporary writers, and the “brain” stories multiplied gaudily.

Many of the acts of these fictional machines were monstrous, and most of the stories were calculated to make scientists ill. Many of these gentlemen said the only correct part of the name “giant brain” was the adjective; that actually the machine was an idiot savant, a sort of high-speed moron. This opinion notwithstanding, the name stuck. One scholar says that while it is regrettable that such a vulgar term has become so popular, it is hardly worth while campaigning against its use.

An amusing contemporary fiction story describes an angry crowd storming a laboratory housing a “giant brain,” only to be placated by a calm, sensibly arguing scientist. Once the mob has dispersed, he goes back inside and reports his success to the machine. The “brain” is pleased, and issues him his next order.

“Nonsense!” scoff most computer people. A recent text on operation of the digital computer says, “Where performance comparable with that of the human brain is concerned, man need have little fear that he will ever be replaced by this machine. It cannot think in any way comparable to a human being.” Note the cautious use of “little,” however.

Another authority admits that the logic machines of the monk Ramón Lull were very clever in their proof of God’s existence, but points out that the monk who invented them was far cleverer since no computer has ever invented a monk who could prove anything at all!

The first wave of ridiculous predictions has run its course and been followed by a second wave of loud refutations. Now there is a third period, calmer and more sensible in its approach. A growing proportion of scientists take a middle-of-the-stream attitude, weighing both sides of the case for the computer, yet some of their sober forecasts still read like science fiction.

Cyberneticist Norbert Wiener, more scientist than fictioneer, professes to foresee computerized robots taking over from their masters, much as a Greek slave once did. Mathematician John Williams of the Rand Corporation thinks that computers can, and possibly will, become more intelligent than men.

Equally reputable scientists take the opposite view. Neuro-physiologist Gerhard Werner of Cornell Medical College doubts that computers can ever match the creativity of man. He seems to share the majority view today, though many who agree will add, tongue in cheek, that perhaps we’d better keep one hand on the wall plug just in case.

Thinking Defined

The first step in deciding whether or not the computer thinks is to define thinking. Far from being a simple task, this definition turns out to be a slippery thing. In fact, if the computer has done no more than demand this sort of reappraisal of the human brain’s working, it has justified its existence. Webster lists meanings for “think” under two headings, for the transitive and intransitive forms of the verb. These meanings, respectively, start out with “To form in the mind,” and “To exercise the powers of judgment ... to reflect for the purpose of reaching a conclusion.”

Even a fairly simple computer would seem to qualify as a thinker by these yardsticks. The storing of data in a computer memory may be analogous to forming in the mind, and manipulating numbers to find a square root certainly calls for some sort of judgment. Learning is a part of thinking, and computers are proving that they can learn—or at least be taught. Recall of this learning from the memory to solve problems is also a part of the thinking process, and again the computer demonstrates this capability.

One early psychological approach to the man-versus-machine debate was that of classifying living and nonliving things. In Outline of Psychology, the Englishman William McDougall lists seven attributes of life. Six of these describe “goal-seeking” qualities; the seventh refers to the ability to learn. In general, psychologist McDougall felt that purposive behavior was the key to the living organism. Thus any computer that is purposive—and any commercial model had better be!—is alive, in McDougall’s view. A restating of the division between man and machine is obviously in order.

Dr. W. Ross Ashby, a British scientist now working at the University of Illinois, defines intelligence as “appropriate selection” and goal-seeking as the intelligent process par excellence, whether the selecting is done by a human being or by a machine. Ashby does split off the “non goal-seeking” processes occurring in the human brain as a distinct class: “natural” processes neither good nor bad in themselves and resulting from man’s environment and his evolution.

Intelligence, to Ashby, who long ago demonstrated a mechanical “homeostat” which showed purposive behavior, is the utilization of information by highly efficient processing to achieve a high intensity of appropriate selection. Intelligent is as intelligent does, no distinction being made as to man or machine. Humanoid and artificial would thus be meaningless words for describing a computer. Ashby makes another important point: the intelligence of a brain or a machine cannot exceed what has been put into it, unless we admit the workings of magic. Ashby’s beliefs are echoed in a way by scientist Oliver Selfridge of Lincoln Laboratory. Asked if a machine can think, Selfridge says, “Certainly; although the machine’s intelligence has an elusive, unnatural quality.”

“Think, Hell, COMPUTE!” reads the sign on the wall of a computer laboratory. But much of our thinking, perhaps some of the “natural” processes of our brains, doesn’t seem to fit into computational patterns. That part of our thinking, the part that includes looking at pretty girls, for example, will probably remain peculiar to the human brain.

The Human Brain

Mundy Peale, president of Republic Aviation Corporation, addressing a committee studying the future of manned aircraft, had this to say:

Until someone builds, for $100 or less with unskilled labor, a computer no larger than a grapefruit, requiring only a tenth of a volt of electricity, yet capable of digesting and transmitting incoming data in a fraction of a second and storing 10,000 times as much data as today’s largest computers, the pilots of today have nothing to worry about.

The human brain is obviously a thing of amazing complexity and fantastic ability. Packed into the volume Mr. Peale described are some 10 billion neurons, the nerve cells that seem to be the key to the operation of our minds. Hooked up like some ultra-complicated switchboard, the network of interconnections stores an estimated 200,000,000,000,000,000,000 bits of information during a lifetime! By comparison, today’s most advanced computers do seem pathetically unimpressive.

We have discussed both analog and digital computers in preceding chapters. It is interesting to find that the human brain is basically a digital type, though it does have analog overtones as well. Each of the neurons is actually a switch operated by an electric current on a go/no-go, all-or-nothing basis. Thus a neuron is not partly on or partly off. If the electrical impulse exceeds a certain “threshold” value, the switch operates.

Tied to the neurons are axons, the long “wires” that carry the input and output. The axons bring messages from the body’s sensors to the neurons, and the output to other neurons or to the muscles and other control functions. This grapefruit-size collection of electrochemical components thus stores our memories and effects the operation we call thinking.

Since brain impulses are electrical in nature, we speak of them in electrical terms. The impulses have an associated potential of 50 millivolts, that is, fifty thousandths of a volt. The entire brain dissipates about 10 watts, so that each individual neuron requires only a billionth of a watt of power. This amount is far less than that of analogous computer parts.
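A rough check of the arithmetic, using the figures just given of about 10 watts for the whole brain and some 10 billion neurons:

$$\frac{10\ \text{watts}}{10^{10}\ \text{neurons}} = 10^{-9}\ \text{watt per neuron},$$

or one billionth of a watt apiece.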

A neuron may take a ten-thousandth of a second to respond to a stimulus. This seemingly rapid operation time turns out to be far slower than present-day computer switches, but the brain makes up for this by being a “parallel operation” system. This means that many different connections are being made simultaneously in different branches, rather than being sequential, or a series of separate actions.

Packaging 10 billion parts in a volume the size of a grapefruit is a capability the computer designer admires wistfully. Since the brain has a volume of about 1,000 cubic centimeters, 10 million neurons fit into a space of one cubic centimeter! A trillion would fit in one cubic foot, and man-made machines with even a million components per cubic foot are news today.

Even when we are resting, with our eyes closed, a kind of stand-by current known as the alpha rhythm is measurable in our brains. This current, which has a frequency of about 10 cycles per second, changes when we see or feel something, or when we exercise the power of recall. It disappears when we sleep soundly, and is analogous to the operating current in a computer. Also, there is “power” available locally at the neurons to “amplify” weak signals sufficiently to trigger off following branches of neurons.

Philosophers have proposed two general concepts of the human brain and how it functions. The a priori theory presupposes a certain amount of “wired-in” knowledge: instincts, ideals, and so on. The other theory, that of the tabula rasa, or clean-slate, brain, argues that each of us organizes an essentially random net of nerves into ordered intelligence. Both theories are being investigated with computers, and as a result light is beginning to be shed on the workings of our brains.

The Upjohn Company, Ezra Stoller Associates Photo
“A moment at a concert” is diagrammed by brain model, showing eyes, ears, nerves, and structures analogous to brain. Picture at top represents perception.

There is another division of philosophical thought in the mechanistic versus élan vital argument. In other words, is the entire mind to be found in its constituent parts, or is there an intangible extra something that really breathes life into us? Whatever the correct concept, the brain does record impressions it can later recall. No one yet knows just how this is done, but several theories have been advanced. One of these describes a “chain circuit” set up in a neuron network by messages from the body’s sensors. This circuit, once started, continues to circle through the brain and is on tap whenever that particular experience needs to be recalled. The term “reverberate” is used in connection with this kind of memory, and it seems a good scientific basis for the poetic “echoes of the past.” Reverberation circuits also provide the memory for some computers.

Among other explanations of memory is that of conditioning the neurons to operate more “easily,” so that certain paths are readily traversed by brain impulses. This could be effected by chemical changes locally, and such a technique too is used in computers.

However the brain accomplishes its job, it is certain that it evolved in its present form as a result of the environment its cells have had to function in for billions of years. Its prime purpose has been one of survival, and for this reason some argue that it is not particularly well adapted to abstract reasoning. Although the brain can do a wide variety of things from dreaming to picking out one single voice amid the hubbub of noise at a social gathering—a phenomenon scientists have given the descriptive name of “cocktail party effect”—men like Ashby consider it a very inflexible piece of equipment not well suited to pure logic. As a test of your brain as a logical device, consider the following problem from the Litton Industries “Problematical Recreations.”

If Sara shouldn’t, then Wanda would. It is impossible that the statements: “Sara should” and “Camille couldn’t” can both be true at the same time. If Wanda could, then Sara should and Camille could. Therefore Camille could. Is this conclusion valid?

If your head starts to swim, you are not alone. Very few humans solve such problems easily. Interestingly, those who do make good computer programmers.

The Computer’s Brain

Just as we have an anthropomorphic God, many people have done their best to endow the computer with human characteristics. Not only in fiction but also in real life, the electronic brains have been described as neurotic and frustrated on occasion, and also as being afraid and even having morning sickness! A salesman for a line of computers was asked to explain in understandable terms the difference between two computers whose specifications confused a customer. “Let’s put it this way,” the salesman said. “The 740 thinks the 690 is a moron!”

We can begin to investigate the question of computer intelligence by again looking up a definition. The word “compute” means literally to think, or reckon, with. Early computers such as counting sticks, the abacus, and the adding machine are obviously something man thinks with. Even though we may know the multiplication tables, we find it easier and safer to use a mechanical device to remember and even to perform operations for us.

These homely devices do not possess sufficient “intelligence” to raise any fears in our minds. The abacus, for example, displays only what we might charitably call the property of memory. It has a certain number of rows, each row with a fixed number of beads. While it is not fallible, as is the human who uses it, it is far more limited in scope. All it can ever do is help us to add or subtract, and if we are clever, to multiply, divide, do square roots, and so on. If we are looking for purposive behavior in computing machines, it is only when we get to the adding machine that a glimmer appears. When a problem is set in and the proper button pushed, this device is compelled to go through the gear-whirring or whatever else is required to return it to a state of equilibrium with its problem solved.

So far we might facetiously describe the difference in the goal-seeking characteristics of man and machine by recalling that man seeks lofty goals like climbing mountains simply because they are there, while the computer seeks its goal much like the steel ball in the pinball machine, impelled by gravity and the built-in springs and chutes of the device. When we come to a more advanced computer, however, we begin to have difficulty in assessing characteristics. For the JOHNNIAC, built by Rand and named for John von Neumann, can prove the propositions in the Principia Mathematica of Whitehead and Russell. It can also “learn” to play a mediocre game of chess.

If we investigate the workings of a digital computer, we find much to remind us of the human brain. First is the obvious similarity of on-off, yes-no operation. This implies a power source, usually electrical, and a number of two-position switches. The over-all configuration of the classic computer resembles, in principle if not physical appearance, that of the human brain and its accessories.

As we have learned, the electronic computer has an input section, a control, an arithmetic (or logic) section, a memory, and an output. Looking into the arithmetic and memory sections, we find a number of comparisons with the brain. The computer uses power, too, and far more of it than the brain. A single transistor, which does the work of only part of a neuron, may use a tenth of a watt; the brain is ahead on this score by a factor of millions to one.

Electronic switches have an advantage over the neuron in that they are much faster acting. So fast have they become that engineers have had to coin new terms like nanosecond and picosecond, for a billionth and a trillionth of a second. Thus, the computer’s individual elements are perhaps 100,000 times faster than those of the brain.
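That figure follows directly from the response times already quoted. Taking a ten-thousandth of a second for the neuron and roughly a nanosecond for a fast electronic switch,

$$\frac{10^{-4}\ \text{second}}{10^{-9}\ \text{second}} = 10^{5},$$

a speed advantage of 100,000 to one for the electronic element.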

There is no computer in existence with the equivalent of 10 billion neurons. One ambitious system of computers does use half a million transistors, plus many other parts, but even these relatively few would not fit under a size 7-1/2 hat. One advanced technique, using “2-D” metal-film circuitry immersed in liquid helium for supercooling, is hoped to yield a packaging density of about 3-1/2 million parts per cubic foot, in comparison with the brain’s trillion-part density.

We have mentioned the computer memory that included the “delay line,” reminiscent of the “chain circuit” in the brain. Electrical impulses were converted to acoustic signals in mercury, traversed the mercury, and were reconverted to electrical impulses. Early memory storage systems were “serial” in nature, like data stored on a tape reel. To find one bit of information required searching the whole reel. Now random-access methods are being used, with memory core storage systems so wired that any one bit of information can be reached in about the same amount of time. This magnetic core memory stores information as a magnetic field, again analogous to a memory theory for the human brain except that the neuron is thought to undergo a chemical rather than a magnetic change.
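The difference between the two kinds of access is easy to put in the form of a short sketch. The little routine below, written in a modern programming notation, is only an illustration of the principle and not any particular machine’s circuitry: the “tape” must be scanned from the beginning, while the “core” can be selected directly.

```python
# A minimal illustration of serial versus random access -- not any actual
# machine's design. The "tape" must be scanned from the start; the "core"
# can be addressed directly.

def serial_read(tape, wanted_address):
    """Search the reel from the beginning until the wanted word turns up."""
    steps = 0
    for address, word in enumerate(tape):
        steps += 1
        if address == wanted_address:
            return word, steps              # cost grows with the address
    raise IndexError("address beyond end of tape")

def random_access_read(core, wanted_address):
    """Reach any word in essentially the same time, as in a core memory."""
    return core[wanted_address], 1          # one selection, regardless of address

memory = ["word %d" % i for i in range(1000)]
print(serial_read(memory, 750))             # ('word 750', 751) -- 751 scanning steps
print(random_access_read(memory, 750))      # ('word 750', 1)   -- direct selection
```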

General Electric Co., Computer Dept.
Tiny ferrite cores like these make up the memory of some computers. Each core stores one “bit” of information.

Until recently, computers have been primarily sequential, or serially operating, machines. As pointed out earlier, the brain operates in parallel and makes up for its slower operating individual parts in this way. Designers are now working on parallel operation for computers, an improvement that may be even more important than random-access memory.

Bionics

It is obvious that while there are many differences between the brain and the computer there are also many striking similarities. These similarities have given rise to the computer-age science of “bionics.” A coinage of Major J. E. Steele of the Air Force’s Wright Air Development Center, bionics means applying knowledge of biology and biological techniques to the design of electronic devices and systems. The Air Force and other groups are conducting broad research programs in this field.

As an indication of the scope of bionics, Dr. Steele himself is a flight surgeon, trained primarily as a neurologist and psychiatrist, with graduate work in electronics and mathematics. Those engaged in bionics research include mathematicians, physical scientists, embryologists, philosophers, neurophysiologists, and psychologists, as well as scientists and engineers in the fields more usually associated with computers, such as electronics and the other engineering disciplines.

A recent report from M.I.T. is indicative of the type of work being done: “What the Frog’s Eye Tells the Frog’s Brain.” A more ambitious project is one called simply “Hand,” which is just that. Developed by Dr. Heinrich Ernst, “Hand” is a computer-controlled mechanical hand that is described as the first artificial device to possess a limited understanding of the outside world. Although it will undoubtedly have industrial and other applications, “Hand” was developed primarily as a study of the cognitive processes of man and animals.

Besides the Air Force’s formal bionics program, there are other research projects of somewhat similar nature. At Harvard, psychologists Bruner and Miller direct a Center for Cognitive Studies, and among the scientists who will contribute are computer experts. Oddly, man knows little of his own cognitive or learning process despite the centuries of study of the human mind. It has been said that we know more about Pavlov’s dog and Skinner’s pigeons than we do about ourselves, but now we are trying to find out. Some find it logical that man study the animals or computer rather than his own mind, incidentally, since they doubt that an intelligence can understand itself anyway.

As an example of the importance placed on this new discipline, the University of California at Los Angeles recently originated a course in its medical school entitled “Introduction to the Function and Structure of the Nervous System,” designed to help bridge the gap between engineering and biology. In Russia, M. Livanov of the Soviet Academy Research Institute of Physiology in Higher Nervous Activity has used a computer coupled with an electric encephaloscope in an effort to establish the pattern of cortical connections in the brain.

While many experts argue that we should not necessarily copy the brain in designing computers, since the brain is admittedly a survival device and somewhat inflexible as a result of its conditioning, it looks already as if much benefit has come from the bionics approach.

The circuitry of early computers embodied what is called “soldered” learning. This means that certain components are permanently wired to certain other components, so that when the switches operate in a given order, built-in results follow. One early teaching device, called the Electric Questionnaire, illustrates this built-in knowledge. A card of questions and answers is slipped over pegs that are actually the terminals of interconnected wires. Probes hooked to a battery are touched to a question and the supposed correct answer. If the circuit is completed, a light glows; otherwise the learner tries other answers until successful.

More sophisticated systems are those of “forced” learning and free association. Pioneer attempts at teaching a computer to “perceive” were conducted at Cornell University under contract with the Air Force to investigate a random-network theory of learning formulated by Dr. Frank Rosenblatt. Specifically, the Perceptron learns to recognize letters placed in front of its “eyes,” an array of 400 photocells. The human brain accomplishes perception in several steps, though at a high enough rate of operation to be thought of as a continuous, almost instantaneous, act. Stimuli are received by sense organs; impulses travel to neurons and form interconnections resulting in judgment, action if necessary, and memory. The Perceptron machine functions in much the same manner.

Electronics
Simplified version of a mammalian visual system (A) and Perceptron simulating the biological network (B).

The forced learning technique, in which Perceptron was told when it correctly identified a letter, and when it missed, was used first. Later it was found that “corrective” or reinforced teaching, which notes only errors, was more effective. After Perceptron had seen each letter fifteen times and received proper correction, it could subsequently identify all the letters correctly.
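The error-correction idea can be put in the form of a short program. The sketch below is only a schematic illustration of the principle, not the Mark I’s actual wiring: the two tiny five-by-five “photocell” patterns, the starting weights, and the fifteen rounds of teaching are invented for the example. Connections that contributed to a wrong answer are weakened, those that would have helped are strengthened, and nothing is touched when the machine answers correctly.

```python
# A minimal sketch of "corrective" (error-driven) teaching, in the spirit of
# the Perceptron but not a model of the Mark I itself. The 5x5 photocell
# patterns for the letters T and L and the constants are invented here.

T = ("11111"
     "00100"
     "00100"
     "00100"
     "00100")
L = ("10000"
     "10000"
     "10000"
     "10000"
     "11111")

def cells(pattern):
    """Convert a letter pattern into a list of photocell outputs (0 or 1)."""
    return [int(c) for c in pattern]

training = [(cells(T), 1), (cells(L), 0)]    # desired response: 1 = "T", 0 = "L"

weights = [0.0] * 25
threshold = 0.0

def respond(inputs):
    """Fire (answer 1) if the weighted sum of active cells exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Corrective teaching: the connections are adjusted only when the machine errs.
for _ in range(15):                          # fifteen presentations of each letter
    for inputs, wanted in training:
        error = wanted - respond(inputs)     # 0 when correct, +1 or -1 when wrong
        if error:
            weights = [w + error * x for w, x in zip(weights, inputs)]
            threshold -= error               # ease or stiffen the firing condition

for inputs, wanted in training:
    print(respond(inputs), "expected", wanted)
```

After its allotted lessons the little network, like the Perceptron itself, identifies both of its letters correctly.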

Announcement of Perceptron triggered many wild headlines and a general misconception in the public mind. Dr. Rosenblatt and the other developers wisely refuse to comment on the potential of the machine, but the number of experiments being conducted indicates wide scientific interest, and perceptron has attained the prestige of an uncapitalized generic term. However, the theory of its random process has been questioned by scientists including Theodore Kalin, one of the builders of an early electrical logic machine. Kalin feels that intelligence presupposes a certain minimum of a priori knowledge: the wired-in learning of the computer or the instincts or inherited qualities of animals. This of course echoes the thoughts of Kant, who derided such a notion as being like all the books and papers in a library somehow arranging themselves properly on the shelves and in the filing cabinets.

Indeed, the whole idea of finding human intelligence mirrored in the electronic innards of the computer has been flatly denounced at some scientific symposiums. Computers given an intelligence test at the University of Michigan “flunked,” according to researchers. Another charge is that the reaction of the brain’s neuron depends on its history and thus cannot be compared with the computer. However, other researchers seem to have anticipated this weakness and are working on electronic or electrochemical neurons that also are conditioned by their input. Despite criticism, the bionics work proceeds on a broad front.

More recently a machine called Cybertron has been developed by the Raytheon Company. This more sophisticated machine is being trained to recognize sonar sounds, using the corrective technique. If Cybertron errs, the teacher pushes a “goof” button. When the machine is fully developed, Raytheon feels it will be able to recognize all typical American word sounds, using its 192 learning elements, and to type them out.

Computers generally do “logical” operations. Many human problems do not seem to be logical and can be solved only by experience; the mathematician Gödel demonstrated some years ago that formal logic itself has inherent limits. Since Cybertron solves such “alogical” problems, its builders prefer not to call it a computer, but rather a self-organizing data-processor that adapts to its environment. Among the variety of tasks that Cybertron could perform are the grading of produce and the recognition of radar signals. Raytheon foresees wide application for Cybertron as a master learner, with apprentice machines incapable of learning but able to “pick the brains” of Cybertron and thus do similar tasks.

Cornell Aeronautical Laboratory
With the letter C in its field of view, Perceptron’s photocells at top center are activated. Simultaneously, response units in panel at right identify the letter correctly.

The assembly of machines like Perceptron and Cybertron requires elements that simulate the brain’s neuron. One such component which has evolved from bionics research is the Artron, or artificial neuron. Inside the Artron are logic gates and inhibit gates. By means of reward or punishment, the Artron learns to operate a “statistical switch” and send impulses to other Artrons or to a readout. There are two interesting parallels here besides the operation of a simulated neural net. One is the statistical approach to decisions and learning. The late John von Neumann theorized that the brain’s actions might be statistical, or based on probability. Second, the designers of Artron see a similarity in its operation and Darwin’s theory of natural selection.

Another new component in the bionics approach is the “neuristor.” This semiconductor diode simulates the axon, the nerve fiber that connects with the neuron. Another device is the “memistor,” unique in that it uses electrochemical phenomena to function as a memory unit. A different kind of artificial neuron called MIND is made up of magnetic cores.

There is another plus factor in this duplication of what we think is the system used by the brain. While one neuron may not be as reliable as a vacuum tube or transistor, the complete brain is millions of times more dependable than any of its single parts. This happy end result is just the reverse of what man has come up with in his complex computer systems. For instance, individual parts in the Minuteman missile must have a reliability factor of 99.9993% so that the system will have a fair chance of working properly. Duplication of the brain’s network may well lead to electronic systems that are many times more reliable than any of their individual parts.
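The arithmetic behind such stringent requirements is simple. If a system fails whenever any one of its parts fails, and each of its n parts works with probability p, the whole system works with probability p to the nth power. For an illustrative (not actual) count of 10,000 series parts at the 99.9993 per cent figure quoted,

$$0.999993^{10{,}000} \approx 0.93,$$

so even near-perfect parts leave the assembly with a noticeable chance of failure. The brain’s redundant wiring runs the same calculation in the opposite direction.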

Bionics is apparently a fruitful approach, both for benefiting computer technology and for learning more about the human brain. As an example, consider the fact that work with the Perceptron indicated that punishment was more effective in the learning process than punishment and reward together. This of course does not say that such a method would work best with a human subject, but if separate tests with human beings proved a similar result, it might then be safe to infer some similarity between the human and computer brain.

One of the biggest roadblocks to implementation of a humanlike neural net is economic. Since there are some 10 billion neurons in the brain, and early electronic neurons consisted of several components including transistors which are a bargain at $2 each, building such a computer might double our national debt. Bionics workers have been thinking dreamily in terms of something like one cent per artificial neuron. This is a ridiculously low figure, but even at that a one-tenth brainpower computer with only a billion penny neurons would cost $10 million for those components alone!
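Even at the hoped-for penny apiece, the total is easy to verify:

$$10^{9}\ \text{neurons} \times \$0.01 = \$10{,}000{,}000.$$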

Cornell Aeronautical Laboratory
Random wiring network between the Mark I Perceptron’s 400 photocell sensors and the machine’s association units.... The Mark I has ten sensory output connections to each of its 512 association units.

Not yet whipped, researchers are now thinking in terms of mass-producing lattices of thin metal, in effect many thousands of elements in a microscopic space, and propagating electrochemical waves rather than an electrical current through them.

Raytheon Co.
When Cybertron doesn’t catch on to a new lesson, engineers push the goof button to punish the machine. When it learns correctly it is allowed to continue its studies with no interruption; thus it constantly improves its skill.

Other ideas include getting down to the molecular level for components. If this is achieved it will be a downhill pull, for even the human neuron consists of many molecules. Farfetched as these ideas seem, packaging densities of 100 billion per cubic foot are being talked of as foreseeable in less than ten years. This is only about ten times as bulky as the goal, the human brain, and when it is achieved the computer will be entitled to a big head.

The Computer as a Thinker

About the time Johnny was having all his trouble reading, a computer named JOHNNIAC was given the basic theorems needed, and then asked to prove theorems of the propositional calculus in the Principia Mathematica, a task certainly over the heads of most of us. The computer waded through the job with no particular strain, and even turned in one proof more elegant than human brains had found before. When the same problems were given to an engineer unfamiliar with that branch of mathematics, his verbalized problem-solving technique paralleled that of JOHNNIAC. Asked if he had been thinking, the engineer said he “surely thought so!”

In his interesting department in Scientific American, mathematical gamester Martin Gardner describes a simple set of punched cards for solving the type of logic problem discussed earlier in this chapter. Using these cards and a simple digital type of manipulation, we happily learn that Camille surely could. The problem is a simple, three-premise type in two-valued logic and can be solved by any self-respecting digital computer in a split second. A few demonstrations like this give a rather disconcerting insight into our brain’s limitations and build more respect for the computer’s intelligence.
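Translated into a modern programming notation, the same “simple digital type of manipulation” amounts to nothing more than trying every combination of true and false. The sketch below (the single-letter names for the three statements are only a convenience) checks that in every case where the three premises hold, Camille could.

```python
from itertools import product

# Brute-force check of the three-premise puzzle quoted earlier.
# S = "Sara should", W = "Wanda would/could", C = "Camille could".

def premises_hold(S, W, C):
    p1 = W if not S else True          # If Sara shouldn't, then Wanda would
    p2 = not (S and not C)             # "Sara should" and "Camille couldn't" can't both be true
    p3 = (S and C) if W else True      # If Wanda could, then Sara should and Camille could
    return p1 and p2 and p3

# The conclusion is valid if C is true in every case where all premises hold.
valid = all(C for S, W, C in product([True, False], repeat=3)
            if premises_hold(S, W, C))
print("Camille could:", valid)         # prints: Camille could: True
```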

When we hear of expensive computers apparently frittering away their valuable time playing games we may well wonder how come. But games, it turns out, are an ideal testing ground for problem-solving ability and hence intelligence. Back in 1957, computer experts Simon and Newell predicted that in ten years the chess champion of the world would be a computer. Master players most likely laughed up their sleeves, and thus far the electronic machine has done no better than play a routine game against a human amateur. This, of course, is no mean achievement. Wise heads are supposed to have responded to the prediction with “So what?”

Photo at left from Organization of the Cerebral Cortex, by D. A. Sholl, J. Wiley and Sons. Right, General Electric Research Laboratory
Photo at right shows a “crossed-film cryotron” shift register—an advanced computer element. The separation of active crossovers shown is comparable to the separation of nerve cells in the section of cat brain shown at left.

Alex Bernstein of IBM worked out a program for the 704 computer in which the machine looks ahead four moves before each of its plays. Even this limited look ahead requires 2,800 calculations, and the 704 takes eight minutes deliberating. Occasionally it makes a move the experts rate as masterful.

Chess is a far more complex game even than it appears to those of us on the sidelines. In an average game there are forty moves and each has about thirty possibilities. So far this sounds innocuous, but mathematics shows that there are thus some 10^120 possible ways in which a single game can be played. This number is a 1 followed by 120 zeros, and to underline its size it has been estimated that even if a million games a second were played, the possibilities would not be exhausted in the lifetime of the universe!
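The arithmetic runs as follows: with roughly thirty choices for each player at each of forty turns,

$$(30 \times 30)^{40} = 900^{40} \approx 10^{118},$$

or, in round numbers, on the order of 10^120 ways a game can go.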

Obviously human chess wizards do not investigate all possible moves. Instead they use heuristic reasoning, or hunch playing, to cut corners. Such approaches to chess playing are being investigated on the JOHNNIAC computer, in a movement away from rigorously programmed routines or “algorithms.” Algorithms are fixed, step-by-step procedures or formulas, such as the quadratic formula used in finding roots. If indeed the computer does dethrone the human chess champ by 1967, it will be exceedingly hard to argue that the machine is not thinking.
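For instance, the quadratic formula prescribes once and for all how to find the roots of $ax^{2} + bx + c = 0$:

$$x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.$$

Follow the recipe and the answer must appear; a heuristic, by contrast, offers only a promising direction with no guarantee.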

The word “heuristic” comes from the Greek heuriskein, meaning to discover or invent. An example of what it is and how important it is can be seen in the recent disproving of a famous conjecture made by the mathematician Euler some 180 years ago. Euler was interested in the properties of so-called Graeco-Latin squares, in which two sets of symbols are arranged vertically and horizontally so that each symbol appears exactly once in every row and column and no pairing of symbols is repeated. While it is possible to build such a square with five units on a side, Euler didn’t think the same could be done with a square having six units on a side. He tried it, visualizing thirty-six officers of six ranks and six regiments to be arranged with no rank or regiment repeated in any row or column. Convinced that it would not work, he extended his educated guess to squares having units of ten, fourteen, and other even numbers not divisible by four. He didn’t actually prove his conjecture, because the amount of paperwork makes that practically impossible.

In 1901 a mathematician did try all the possible configurations of the square of six units and found that Euler was indeed correct. It was assumed that ten was impossible too, until 1958, when three American mathematicians spoiled Euler’s conjecture by finding workable Graeco-Latin squares having ten units per side. They did not do this by exhausting all the possibilities, for such a chore would have been humanly impossible. In fact, a computer labored for 100 hours and completed only a tiny fraction of the job. The square-seekers concluded that it would take even the high-speed computer upwards of a century to do the job, so instead they used hunches or inspired guesses, working out a heuristic for the task. The point of importance is that not only man, but the computer as well, despite its fantastic speed, must learn to use heuristic reasoning rather than blindly plowing through all possible solutions. There are just too many numbers!

Computers play other games too, from tick-tack-toe and Nim, which they play flawlessly, to Go and checkers. Dr. Arthur Samuel of IBM has taught the 704 computer to play checkers well enough to beat him regularly, though Dr. Samuel, scientist that he is, admits he is not a great checker-player. He has used two types of learning in the program: “rote” and “generalization.” So far these have been used separately, while human players use both types of learning in a game.

American scientists visiting Russia recently reported that the Russians, like some of us, were amazed to hear that computer time was allotted to the mere playing of games. The real goal in all this game-playing is to learn how to do other, more important things. Gaming is being applied to war strategy and to business management. Corporation executives are playing games with computers that simulate the operation of their firms, both to improve methods and to learn about themselves and their employees. A General Problem-Solver program is being developed too, one which can solve problems like the cannibals and the missionaries and then go on to mathematical equations and other types of thinking. As was pointed out, when the computer’s method of solving a problem is compared with the protocol used by a person (by having him think aloud as he goes through the problem), it is seen that both use pretty much the same tricks and short cuts.
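The cannibals-and-missionaries puzzle shows what such problem solving looks like inside a machine. The sketch below is a generic state-space search written only for illustration, not the General Problem-Solver program itself: it tries every safe ferrying of a two-seat boat until all three missionaries and all three cannibals are across, with the cannibals never outnumbering the missionaries on either bank.

```python
from collections import deque

# A breadth-first search over the states of the missionaries-and-cannibals
# puzzle; an illustration of machine problem solving, not the actual GPS.

def safe(m, c):
    """A bank is safe if it has no missionaries, or at least as many as cannibals."""
    return m == 0 or m >= c

def solve():
    start = (3, 3, 1)                        # (missionaries, cannibals, boat) on the near bank
    goal = (0, 0, 0)
    loads = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # possible boat loads
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (m, c, b), path = queue.popleft()
        if (m, c, b) == goal:
            return path
        sign = -1 if b == 1 else 1           # boat carries people away from its own bank
        for dm, dc in loads:
            nm, nc, nb = m + sign * dm, c + sign * dc, 1 - b
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and safe(3 - nm, 3 - nc):
                state = (nm, nc, nb)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [state]))
    return None

for step in solve():
    print(step)                              # the eleven-crossing plan, state by state
```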

As the computer keeps closing the gap, we can push the goal back by redefining our terms. This is much like dangling a carrot on a stick, and with the computer doggedly taking the part of the donkey, it is a pretty good technological flail. By making the true test of intelligence something like artistic creativity, we can rule out the machine unless it can write poetry, compose music, or paint a picture. So far the computer has done the first two, and the last poses no particular problem, though debugging the machine might be a messy operation. True, the machine’s poetry is only about beatnik level:

Children

Sob suddenly, the bongos are moving.

Or could we find that tall child?

And dividing honestly was like praying badly,

And while the boy is obese, all blast could climb.

First you become oblong,

To weep is unctious, to move is poor.

This masterpiece, produced by a computer in the Librascope Laboratory for Automata Research, is not as obscure as an Eliot or a Nostradamus. Computer music has not yet brought audiences to their feet in Carnegie Hall. The machine’s detractors may well claim that it has produced nothing truly great; nothing worthy of an Einstein or a Keats or a Vermeer. But then, how many of us have?

There is yet another way we can ban the computer from membership in our human society. While human beings occasionally think they are machines, and Dr. Bruno Bettelheim has documented a case history of “Joey” who was so convinced that he was a machine that he had to keep himself plugged in to stay alive, no machine has yet demonstrated that it is consciously aware of itself, as human beings are.

Machines are, hopefully, objective. Consciousness seems to be subjective in the extreme; indeed, some feel that it is a thing one of us cannot hope to convey as intelligence to another and thus has no scientific importance. It is also noted that the thinking and learning processes can be carried out with no need for consciousness of what we are doing. An example given is that of the cyclist who learns, without being “aware” of the fact, that to turn his machine left he must first make a slight swing to the right in order to keep from falling outward during his left turn. This observation in itself is not final proof of the pudding, of course, unless we are aiming only to make a mechanical bike-rider, but many of our other actions are carried out more or less mechanically without calling attention to themselves. Just as certainly, however, the thing called consciousness plays a vital role in human thinking. Perhaps the machine must learn to do this before it can be truly creative.

Although we have described some fairly “exotic” devices, it should be remembered that the computers in use outside of the laboratory today are fairly old-fashioned second-generation models. They have progressed from vacuum tubes or mechanical relays to “solid-state” components. When Artrons and neuristors and memistors and other more sophisticated parts are standard, we can look for a vast increase in the brain power of computers.

The Gilfillan radar ground-controlled-approach system for aircraft that “sees” the plane on the radar scope, computes the proper path for it to follow, and then selects the right voice commands from a stored-tape memory seems to be thinking and acting already. The addition of eyes and ears plus limbs and locomotion to the computer, foreseen now in the photocell eyes of Perceptron, the ears of Cybertron, and dexterity of Mobot and Hand, will move the computer from mere brain to robot.

Some people profess to worry about what will happen when the computer itself realizes that it is thinking, calling to mind the apocryphal story of the machine that was asked if there was a God. After brief cogitation, it said, “Now there is.” To offset such a chilling possibility, it is comforting to recall the post-office electronic brain that mistook the Christmas seals on packages for foreign stamps, and the Army computer that ordered millions of dollars worth of supplies that weren’t needed. Or perhaps it isn’t comforting, at that!

The question of whether or not a computer actually thinks is still a matter of controversy, though not as much so as it was a few years ago. The computer looks and acts as if it is thinking, but the true scientist prefers to reserve judgment, in the spirit of the one shown a black sheep some distance away. “This side is black,” he admitted, “but let’s investigate further.”


For forms of government let fools contest;

Whate’er is best administered is best.

—Pope