3: How Computers Work

In the past decade or so, an amazing and confusing number of computing machines have been developed. To those of us unfamiliar with the beast, many of them do not look at all like what we imagined computers to be; others are even more awesome than the wildest science-fiction writer could dream up. On the more complex machines, lights flash, tape reels spin dizzily, and printers clatter at mile-a-minute speeds. We are aware, or perhaps just take on faith, that the electronic marvel is doing its sums at so many thousand or million per second, cranking out mathematical proofs and processing data at a rate to make mere man seem like the dullest slowpoke. Just how computers do this is pretty much of a mystery unless we are of the breed that works with them. Actually, in spite of all the blurring speed and seeming magic, the basic steps of computer operation are quite simple and generally the same for all types of machines, from the modestly priced electromechanical do-it-yourself model to STRETCH, MUSE, and other ten-million-dollar computers.

It might be well, before we go farther, to learn a few words in the lexicon of the computer, words that are becoming more and more a part of our everyday language. The following glossary is of course neither complete nor technical, but it will be helpful in following through the mechanics of computer operation.

COMPUTER DICTIONARY

Access Time—Time required for computer to locate data and transfer it from one computer element to another.

Adder—Device for forming sums in the computer.

Address—Specific location of information in computer memory.

Analog Computer—A physical or electrical simulator which produces an analogy of the mathematical problem to be solved.

Arithmetic Unit—Unit that performs arithmetical and logical operations.

Binary Code—Representation of numbers or other information using only one and zero, to take advantage of open and closed circuits.

Bit—A binary digit, either one or zero; used to make binary numbers.

Block—Group of words handled as a unit, particularly with reference to input and output.

Buffer—Storage device to compensate for difference in input and operation rate.

Control Unit—Portion of the computer that controls arithmetic and logical operations and transfer of information.

Delay Line—Memory device to store and later reinsert information; uses physical, mechanical, or electrical techniques.

Digital Computer—A computer that uses discrete numbers to represent information.

Flip-Flop—A circuit or device which remains in either of two states until the application of a signal.

Gate—A circuit with more than one input, and an output dependent on these inputs. An AND gate’s output is energized only when all inputs are energized. An OR gate’s output is energized when one or more inputs are energized. There are also NOT-AND gates, EXCLUSIVE-OR gates, etc.

Logical Operation—A nonarithmetical operation, i.e., decision-making, data-sorting, searching, etc.

Magnetic Drum—Rotating cylinder storage device for memory unit; stores data in coded form.

Matrix—Circuitry for transformation of digital codes from one type to another; uses wires, diodes, relays, etc.

Memory Unit—That part of the computer that stores information in machine language, using electrical or magnetic techniques.

Microsecond—One millionth of a second.

Millisecond—One thousandth of a second.

Nanosecond—One billionth of a second.

Parallel Operation—Digital computer operation in which all digits are handled simultaneously.

Programming—Steps to be executed by computer to solve problem.

Random Access—A memory system that permits more nearly equal access time to all memory locations than does a nonrandom system. Magnetic core memory is a random type, compared with a tape reel memory.

Real Time—Computer operation simultaneous with input of information; e.g., control of a guided missile or of an assembly line.

Register—Storage device for small amount of information while, or until, it is needed.

Serial Operation—Digital computer operation in which digits are handled one at a time, in sequence.

Storage—Use of drums, tapes, cards, and so on to store data outside the computer proper.
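Several of the entries above—bit, binary code, and gate in particular—can be tried out directly. The sketch below, written in present-day notation rather than anything a machine of this era would run, models the gate definitions with 1 standing for an energized input and 0 for a de-energized one:

```python
# Gates as defined in the glossary: output depends only on the inputs.

def and_gate(*inputs):
    # Energized only when ALL inputs are energized.
    return 1 if all(inputs) else 0

def or_gate(*inputs):
    # Energized when one or more inputs are energized.
    return 1 if any(inputs) else 0

def exclusive_or_gate(a, b):
    # Energized when exactly one of the two inputs is energized.
    return 1 if a != b else 0

print(and_gate(1, 1, 0))        # 0 -- one input is not energized
print(or_gate(1, 0, 0))         # 1
print(exclusive_or_gate(1, 1))  # 0
```

The NOT-AND gate mentioned in the entry is simply the AND gate with its output inverted.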

The Computer’s Parts

Looking at computers from a distance, we are vaguely aware that they are given problems in the form of coded instructions and that through some electronic metamorphosis this problem turns into an answer that is produced at the readout end of the machine. There is an engineering technique called the “black box” concept, in which we are concerned only with input to this box and its output. We could extend this concept to “black-magic box” and apply it to the computer, but breaking the system down into its components is quite simple and much more informative.

There are five components that make up a computer: input, control, arithmetic (or logic) unit, memory, and output. As machine-intelligence expert Dr. W. Ross Ashby points out, we can get no more out of a brain—mechanical or human—than we put into it. So we must have an input. The kind of input depends largely on the degree of sophistication of the machine we are considering.

With the abacus we set in the problem mechanically, with our fingers. Using a desk calculator we punch buttons: a more refined mechanical input. Punched cards and perforated tapes are much-used input methods. As computers evolve, some can now “read” for themselves, so the input is visual. There are also computers that understand verbal commands.

Input should not be confused with the control portion of the computer’s anatomy. We feed in data, but we must also tell the computer what to do with the information. Shall it count the number of cards that fly through it, or shall it add the numbers shown on the cards, record the maximum and minimum, and print out an average? Control involves programming, a computer term that was among the first to be assimilated into ordinary language.

The arithmetic unit—that part of the computer that the pioneer Babbage called his “mill”—is the nuts-and-bolts end of the business. Here are the gears and shafts, the electromechanical relays, or the vacuum tubes, transistors, and magnetic cores that do the addition, multiplication, and other mathematical operations. Sometimes this is called the “logic” unit, since often it manipulates the ANDs, ORs, NORs, and other conjunctives in the logical algebra of Boole and his followers.

The memory unit is just that: a place where numbers, words, or other data are stored, ready to be called into use whenever needed. There are two broad types of memory, internal and external, and they parallel the kind of memory we use ourselves. While our brain can store many, many facts, it does have a practical limit. This is why we have phone books, logarithm tables, strings around fingers, and so on. The computer likewise has its external memory, which may hold thousands of times the capacity of its internal memory. Babbage’s machine could remember a thousand fifty-digit numbers; today’s large computers call on millions of bits of data.

Conversion of problem to machine program.

After we have dumped in the data and told the computer what to do with them, and the arithmetic and memory units have collaborated, it remains only for the computer to display the result. This is the output of the computer, and it can take many forms. If we are using a simple analog computer such as a slide rule, the answer is found under the hairline on the slide. An electronic computer in a bank prints out the results of the day’s transactions in neat type at hundreds of lines a minute. The SAGE defense computer system displays an invading bomber and plots the correct course for interceptors on a scope; a computer in a playful mood might type out its next move—King to Q7 and checkmate.

With this sketchy over-all description to get us started, let us study each unit in a little more detail. It is interesting to compare these operations with those of our human computer, our brain, as we go along.

Remington Rand UNIVAC
A large computer, showing the different parts required.

Input

An early and still very popular method of getting data into the computer is the punched card. Jacquard’s clever way of weaving a pattern got into the computer business through Hollerith’s census counting machines. Today the ubiquitous IBM card can do these tasks of nose counting and weaving, and just about everything else in between. Jacquard used the punched holes to permit certain pins to slide through. Hollerith substituted the mercury electrical contact for the loom’s flying needles. Today there are many other ways of “reading” the cards. Metal base plate and springs, star wheels, even photoelectric cells are used to detect the presence or absence of the coded holes. A human who knows the code can visually extract the information; a blind man could do it by the touch system. So with the computer, there are many ways of transferring data.

Remington Rand UNIVAC
The Computer’s Basic Parts.

An obvious requirement of the punched card is that someone has to punch the holes in the first place. This is done with manually operated punches, power punches, and even automatic machines that handle more than a hundred cards a minute. Punched cards, which fall into the category called computer “software,” are cheap, flexible, and compatible with many types of equipment.

Particularly with mathematical computations and scientific research, another type of input has become popular, that of paper tape. This in effect strings many cards together and puts them on an easily handled roll. Thus a long series of data can be punched without changing cards, and is conveniently stored for repeated use. Remember the old player-piano rolls of music? These actually formed the input for one kind of computer, a musical machine that converted coded holes to musical sounds by means of pneumatic techniques. Later in this chapter we will discuss some modern pneumatic computers.

More efficient than paper is magnetic tape, the same kind we use in our home recording instruments. Anyone familiar with a tape recorder knows how easy it is to edit or change something on a tape reel. This is a big advantage over punched cards or paper tapes which are physically altered by the data stored on them and cannot be corrected. Besides this, magnetic tape can hold many more “bits” of information than paper and also lends itself to very rapid movement through the reading head of the computer. For example, standard computer tape holds seven tracks, each with hundreds of bits of information per inch. Since there are thousands of feet on a ten-inch reel, it is theoretically possible to pack 40 million bits on this handful of tape!
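The 40-million-bit figure follows directly from the numbers given. As a rough check, in present-day notation and with assumed but period-typical values of 200 bits per inch per track and a 2,400-foot reel:

```python
# Rough capacity check for a reel of seven-track computer tape.
# The 200 bits/inch and 2,400 feet are assumed illustrative values
# standing in for the text's "hundreds of bits" and "thousands of feet".

tracks = 7
bits_per_inch_per_track = 200
reel_feet = 2400
inches_per_foot = 12

total_bits = tracks * bits_per_inch_per_track * reel_feet * inches_per_foot
print(total_bits)  # 40320000 -- roughly the 40 million bits claimed
```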

Since the computer usually can operate at a much higher rate of speed than we can put information onto tape, it is often the practice to have a “buffer” in the input section. This receiving station collects and stores information until it is full, then feeds it to the computer which gobbles it up with lightning speed. Keeping a fast computer continuously busy may require many different inputs.

Never satisfied, computer designers pondered the problem of all the lost time entailed in laboriously preparing cards or tapes for the ravenous electronic machine. The results of this brain-searching are interesting, and they are evident in computers that actually read man-talk. Computers used in the post office and elsewhere can optically read addresses as well as stamps; banks have computers that electrically read the coded magnetic ink numbers on our checks and process thousands of times as many as human workers once did. This optical reading input is not without its problems, of course. Many computers require a special type face to be used, and the post office found that its stamp recognizer was mistaking Christmas seals for foreign stamps. Improved read heads now can read hand-printed material and will one day master our widely differing human scrawls. This is of course a boon to the “programmer” of lengthy equations who now has to translate the whole problem into machine talk before the machine can accept it.

If a machine can read, why can’t it understand verbal input as well? Lazy computer engineers have pushed this idea, and the simplest input system of all is well on the way to success. Computers today can recognize numbers and a few words, and the Japanese have a typewriter that prints out the words spoken to it! These linguistic advances that electronic computers are making are great for everyone, except perhaps the glamorized programmer, a new breed of mathematical logician whose services have been demanded in the last few years.

Magnetic Tape - Paper Tape - IBM Card - Magnetic Ink Characters

Control

Before we feed the problem into the machine, or before we give it some “raw” data to process, we had better tell our computer what we want it to do. All the fantastic speed of our electrons will result in a meaningless merry-go-round, or perhaps a glorious machine-stalling short circuit unless the proper switches open and close at the right time. This is the job of the control unit of the computer, a unit that understands commands like “start,” “add,” “subtract,” “find the square root,” “file in Bin B,” “stop,” and so on. The key to all the computer’s parts working together in electronic harmony is its “clock.” This timekeeper in effect snaps its fingers in perfect cadence, and the switches jump at its bidding. Since the finger-snapping takes place at rates of millions of snaps a second, the programmer must be sure he has instructed the computer properly.

The ideal programmer is a rare type with a peculiarly keen brain that sometimes takes seemingly illogical steps to be logical. Programmers are likely to be men—or women, for there is no sex barrier in this new profession—who revel in symbolic logic and heuristic or “hunch” reasoning. Without a program, the computer is an impressively elaborate and frighteningly expensive contraption which cannot tell one number from another. The day may come when the mathematician can say to the machine, “Prove Fermat’s last theorem for me, please,” or the engineer simply wish aloud for a ceramic material that melts at 15,000° C. and weighs slightly less than Styrofoam. Even then the human programmer will not start drawing unemployment insurance, of course. If he is not receiving his Social Security pension by then he will simply shift to more creative work such as thinking up more problems for the machine to solve.

Just as there are many jobs for the computer, so there are many kinds of programs. On a very simple, special-purpose computer, the program may be “wired-in,” or fixed, so that the computer can do that particular job and no other. On a more flexible machine, the program may still be quite simple, perhaps no more than a card entered in a desk unit by an airline ticket agent to let the computer arrange a reservation for three tourist seats on an American Airlines jet flight from Phoenix to Chicago at 8:20 a.m. four days from now. On a general-purpose machine, capable of handling many problems, the program may be unique, a one-of-a-kind highly complex set of instructions that will make the computer tax its huge memory and do all sorts of mental “nip-ups” before it reaches a solution.

A computer that understands about sixty commands has been compared to a Siamese elephant used for teak logging; the animal has about that many words in its vocabulary. Vocabulary is an indication of computer as well as human sophistication. The trend is constantly toward less-than-elephant size, and more-than-elephant vocabulary.

The programmer’s work can be divided into four basic phases: analysis of the problem; application or matching problem requirements with the capabilities of the right computer; flow charting the various operations using symbolic diagrams; and finally, coding or translating the flow chart into language the computer knows.

The flow chart to some extent parallels the way our own brains solve logic problems, or at least the way they ought to solve them. For example, a computer might be instructed to select the smallest of three keys. It would compare A and B, discard the larger, and then compare with C, finally selecting the proper one. This is of course such a ridiculously simple problem that few of us would bother to use the computer since it would take much longer to plot the flow chart than to select the key by simple visual inspection. But the logical principle is the same, even when the computer is to be told to analyze all the business transactions conducted by a large corporation during the year and advise a program for the next year which will show the most profit. From the symbolic flow chart, the programmer makes an operational flow chart, a detailed block diagram, and finally the program itself. Suitably coded in computer language, this program is ready for the computer’s control unit.
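The key-selection flow chart can be put in present-day notation as a short sketch (the function name is ours, not the book’s; the comparisons follow the flow chart exactly):

```python
def smallest_key(a, b, c):
    """Select the smallest of three keys, flow-chart style."""
    # Compare A and B, discard the larger...
    survivor = a if a <= b else b
    # ...then compare the survivor with C and keep the smaller.
    return survivor if survivor <= c else c

print(smallest_key(7, 3, 5))  # 3
```

Two comparisons suffice, no matter which key is smallest—the same economy of steps the programmer charts before coding.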

With a problem of complex nature, such as one involving the firing of a space vehicle, programmers soon learned they were spending hours, or even days, on a problem which the computer proceeded to zip through in minutes or seconds. It was something like working all year building an elaborate Fourth of July fireworks display, touching the match, and seeing the whole thing go up in spectacular smoke for a brief moment. Of course the end justifies the means in either case, and as soon as the computer has quit whirring, or the skyrockets faded out, the programmer gets back to work. But some short cuts were learned.

Even a program for a unique problem is likely to contain many “subroutines” just like those in other problems. These are used and re-used; some computers now have libraries of programs they can draw on much as we call on things learned last week or last year.

With his work completed, the programmer’s only worry is that an error might exist in it, an error that could raise havoc if not discovered. One false bit of logic in a business problem, or a slight mathematical boner in the design of a manned missile, could be catastrophic, since our technology is so complicated that the mistake might be learned only when disaster struck. So the programmer checks and rechecks his work until he is positive he has not erred.

How about the computer? It checks itself too; so thoroughly that there is no danger of it making a mistake. Computer designers have been very clever in this respect. One advanced technique is “majority rule” checking. Not long ago when the abacus was used even in banking, the Japanese were aware that a single accountant might make a false move and botch up the day’s tally. But if two operators worked the same problem and got the same answer, the laws of probability rule that the answer can be accepted. If the sums do not agree, though, which man is right? To check further, and save the time needed to go through the whole problem again, three abacuses, or abaci, are put through their paces. Now if two answers agree, chances are they are the right solution. If all three are different, the bank had better hire new clerks!
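The three-abacus scheme is exactly “majority rule” checking, and it can be sketched in present-day notation (the function name and the sample figures are ours, for illustration only):

```python
def majority_check(answers):
    """Accept an answer when at least two of three computations agree.

    Returns the agreed answer, or None if all three differ
    (in which case the bank had better hire new clerks).
    """
    for candidate in answers:
        if answers.count(candidate) >= 2:
            return candidate
    return None

print(majority_check([97695, 97695, 97659]))  # 97695 -- two agree, one erred
print(majority_check([1, 2, 3]))              # None -- all three differ
```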

Remington Rand UNIVAC
A word picture “flow chart” of the logical operation of selecting the proper key.

Arithmetic or Logic

Now that our computer has the two necessary ingredients of input and control, the arithmetic or logic unit can get busy. Babbage called this the “mill,” and with all the whirring gears and clanking arms his engine boasted, the term must have been accurate. Today’s computer is much quieter since in electronic switches the only moving parts are the electrons themselves and these don’t make much of a racket. Such switches have another big advantage in that they open and close at a great rate, practically the speed of light. The fastest computers use switches that act in nanoseconds, or billionths of a second. In one nanosecond light itself travels only a foot.

The computer may be likened to someone counting on two of his fingers. Instead of the decimal or ten-base system, most computers use binary arithmetic, which has a base of two. But fingers that can be counted in billionth parts of a second can handle figures pretty fast, and the computer has learned some clever tricks that further speed things up. It can only add, but by adroit juggling it subtracts by using the complement of the desired number, a technique known to those familiar with an ordinary adding machine. There are also some tricks to multiplying that allow the computer again to simply add and come up with the answer.
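The complement trick can be shown numerically. To subtract with nothing but an adder, the machine adds the complement of the subtrahend and throws away the carry out of the top digit. The sketch below works in decimal for readability (a binary machine uses two’s complements the same way), and it assumes the minuend is at least as large as the subtrahend:

```python
def subtract_by_adding(a, b, digits=3):
    """Compute a - b using only addition, via the ten's complement.

    Assumes a >= b and both fit in the given number of digits.
    """
    # Ten's complement of b: what must be added to b to reach 10**digits.
    complement = 10**digits - b
    # Add, then discard the carry out of the top digit.
    return (a + complement) % 10**digits

print(subtract_by_adding(835, 117))  # 718, the same as 835 - 117
```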

With pencil and paper we can multiply 117 times 835 easily. Remember, though, that the computer can only add, and that it was once called a speedy imbecile. The most imbecilic computer might solve the problem by adding 117 to itself 835 times. A smarter model will reverse the procedure and handle only 117 numbers. The moron type of computer is a bit more clever and sets up the problem this way:

  835
  835
  835
  835
  835
  835
  835
 8350
83500
—————
97695

A moment’s reflection will show that this is the same as adding 7 times 835, 10 times 835, and 100 times 835. And of course the computer arrives at the answer in about the time it takes us to start drawing the line under our multiplier.
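The column above can be sketched as a routine that, like the computer, is allowed only to add. Each decimal digit of the multiplier tells it how many times to add the (shifted) multiplicand; appending a zero is a digit shift, not a true multiplication. The function name is ours:

```python
def multiply_by_adding(multiplicand, multiplier):
    """Multiply using only addition, one multiplier digit at a time."""
    total = 0
    shifted = multiplicand           # 835, then 8350, then 83500, ...
    while multiplier > 0:
        for _ in range(multiplier % 10):
            total += shifted         # the only arithmetic here is addition
        multiplier //= 10
        shifted *= 10                # a digit shift: append a zero
    return total

print(multiply_by_adding(835, 117))  # 97695
```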

The Bendix Corp., Computer Division
Assembly of printed-circuit component “packages” into computer.

Perhaps smarting under the unkind remarks about its mental ability, the computer has lately been trying some new approaches to the handling of complex arithmetical problems. Instead of adding long strings of numbers, it will take a guess at the result, do some smart checking, adjust its figures, and shortly arrive at the right solution. For nonarithmetical problems, the computer substitutes yes and no for 1 and 0 and blithely solves problems in logic at the same high rate of speed.

Memory

When we demonstrated our superiority earlier in multiplying instead of adding the numbers in the problem, we were drawing on our memory: recalling multiplication tables committed to memory when we were quite young. Babbage’s “store” in his difference engine, you will recall, could memorize a thousand fifty-digit numbers, a feat that would tax most of us. The grandchildren of the Babbage machine can call on as many as a billion bits of information stored on tape. As you watch the reels of tape spinning, halting abruptly, and spinning again so purposefully, remember that the computer is remembering. In addition to its large memory, incidentally, a computer may also have a smaller “scratch-pad” memory to save time.

Early machines used electromechanical relays or perhaps vacuum-tube “flip-flops” for memory. Punched-card files store data too. To speed up the access to information, designers tried the delay-line circuit, a device that kept information circulating in a mercury or other type of delay line. Magnetic drums and discs are also used. Magnetic tape on reels is used more than any other memory system for many practical reasons. There is one serious handicap with the tape system, however. Information on it, as on the drum, disc, file card, or delay line, is serial, that is, it is arranged in sequence. To reach a certain needed bit of data might require running through an entire reel of tape. Even though the tape moves at very high speed, time is lost while the computer’s arithmetic unit waits. For this reason the designers of the most advanced computers have gone to “random access” instead of sequential memory for part of the machine.

Tiny cores of ferrite material which has the desired magnetic properties are threaded on wires. These become memory elements, as many as a hundred of them in an area the size of a postage stamp. Each core is at the intersection of two wires, one horizontal and one vertical. Each core thus has a unique “address” and because of the arrangement of the core matrix, any address can be reached in about the same amount of time as any other. Thus, instead of spinning the tape several hundred feet to reach address number 6,564, the computer simply closes the circuit of vertical row 65 and horizontal row 64, and there is the desired bit of information in the form of a magnetic field in the selected core.
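The addressing scheme can be sketched as a toy core plane in present-day notation. The two-digit split of the address into a vertical and a horizontal row mirrors the 6,564-into-65-and-64 example; the class and method names are ours:

```python
class CorePlane:
    """A toy 100-by-100 core plane: one bit per core, one core per address."""

    def __init__(self, size=100):
        # Each core holds a magnetic field standing for one bit.
        self.cores = [[0] * size for _ in range(size)]

    def write(self, address, bit):
        v, h = divmod(address, 100)  # 6564 -> vertical 65, horizontal 64
        self.cores[v][h] = bit

    def read(self, address):
        v, h = divmod(address, 100)
        return self.cores[v][h]

plane = CorePlane()
plane.write(6564, 1)
print(plane.read(6564))  # 1 -- no tape to spin; every address is one step away
```

The point of the matrix is that reading address 6,564 costs no more time than reading address 1: that is what “random access” means.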

Hot on the heels of the development of random-access core memories came that of thin metallic film devices and so-called cryogenic or supercold magnetic components that do the same job as the ferrite cores but take only a fraction of the space. Some of these advanced devices also lend themselves to volume production and thus pave the way for memories with more and more information-storage capability.

International Business Machines Corp.
Magnetic core plane, the computer’s memory.

In the realm of “blue-sky” devices, sometimes known as “journalistors,” are molecular block memories. These chunks of material will contain millions of bits of information in cubic inches of volume, and some way of three-dimensional scanning of the entire block will be developed. With such a high-volume memory, the computer of tomorrow will fit on a desk top instead of requiring rows and rows of tape-filled machines.

Today, tape offers the cheapest “per bit” storage, and so it is the natural choice where the external or peripheral type of information storage must be used. This is not much of a problem except for the matter of space. Since most computers are electronic, all that is required to tie the memory units to the arithmetic unit is wire connections. Douglas Aircraft ties computers in its California and North Carolina plants with 2,400 miles of telephone hookup. Sometimes even wires are not necessary. In the Los Angeles area, North American Aviation has a number of plants separated by as many as forty miles. Each plant is quite capable of using the computers in the other locations, with a stream of digits beamed by microwave radio from one to the other. Information can be transferred in this manner at rates up to 65,000 bits per second.

Output

Once the computer has taken the input of information, been instructed what to do, and used its arithmetic and memory, it has done the bulk of the work on the problem. But it must now reverse the procedure that took place when information flowed into it and was translated into electrical impulses and magnetic currents. It could convey the answer to another machine that spoke its language, but man would find such information unintelligible. So the computer has an output section that translates back into earth language.

Babbage’s computer was to have printed out its answers on metal plates, and many computers today furnish punched cards or tape as an output. Others print the answers on sheets of paper, so rapidly that a page of this book would take little more than a second to produce! One of the greatest challenges of recent years is that of producing printing devices fast enough to exploit fully the terrific speeds of electronic computing machines. There would be little advantage in a computer that could add all the digits in all the phone books in the world in less than a minute if it took three weeks to print out the answer.

Impact printers, those that actually strike keys against paper, have been improved to the point where they print more than a thousand lines of type, each with 120 characters in it, per minute. But even this is not rapid enough in some instances, and completely new kinds of printers have been developed. One is the Charactron tube, a device combining a cathode-ray tube, something like the TV picture tube, with an interposed 64-character matrix about half an inch in diameter. Electrical impulses deflect the electron beam in the tube so that it passes through the proper matrix character and forms that image on the face of the tube. This image then is printed electrostatically on the treated paper rather than with a metal type face. With no moving parts except the paper, and of course the electrons themselves, the Charactron printer operates close to the speed of the computer itself, and produces 100,000 words a minute. This entire book could be printed out in about forty-five seconds in this manner.

Minneapolis-Honeywell,
Electronic Data Processing Division

A high-speed printer is the output of this computer. It prints 900 lines a minute.

There are many other kinds of outputs. Some are in the form of payroll checks, rushing from the printer at the rate of 10,000 an hour. Some are simply illuminated numbers and letters on the face of the computer. As mentioned earlier, the SAGE air defense computer displays the tracks of aircraft and missiles on large screens, each accurately tagged for speed, altitude, and classification. The computer may even speak its answer to us audibly.

General Electric engineers have programmed computers to play music, and come up with a clever giveaway record titled “Christmas Carols in 210 Time,” à la pipe-organ solo. Some more serious musical work is now being done in taking a musical input fed to a computer, programming it for special effects including the reverberant effect of a concert hall, and having that played as the output.

A more direct vocal output is the spoken word. Some computers have this capability now, with a modest vocabulary of their own and an extensive tape library to draw from. As an example, Gilfillan Radio has produced a computerized ground-control-approach system that studies the radar return of the aircraft being guided, and “tells” the pilot how to fly the landing. All the human operator does is monitor the show.

The system uses the relatively simple method of selecting the correct words from a previously tape-recorded human voice. More sophisticated systems will be capable of translating code from the computer directly into an audible output. One very obvious advantage of such an automatic landing system is that the computer is never subject to a bad day, nerves, or fright. It will talk the aircraft down calmly and dispassionately, albeit somewhat mechanically.

These then are the five basic parts of a computer or computer system: input, control, arithmetic-logic, memory, and output. Remember that this applies equally to simple and complex machines, and also to computers other than the more generally encountered electronic types. For while the electronic computer is regarded as the most advanced, it is not necessarily the final result of computer development. Let us consider some of the deviants, throwbacks, and mutations of the computer species.

Kearfott Division, General Precision, Inc.
The tiny black box is capable of the same functions as the larger plastic laboratory model pneumatic digital computer.
Packaging densities of more than 2,000 elements per cubic inch are expected.

Another Kind of Computer

We have discussed mechanical, electromechanical, electrical, and electronic computers. There are also those which make use of quite different media for their operation: hydraulics, air pressure, and even hot gases. The pneumatic is simplest to explain, and also has its precedent in the old player-piano mentioned earlier.

Just as an electric or electronic switch can be open or closed, so can a pneumatic valve. The analogy carries much further. Some of the basic electronic components used in computers are diodes, capacitors, inductors, and “flip-flop” circuits which we have talked of. Each of these, it turns out, can be approximated by pneumatic devices.

The pneumatic diode is the simplest component, being merely an orifice or opening through which gas is flowing at or above the speed of sound. Under these conditions, any disturbance in pressure “upstream” of the orifice will move “downstream” through the orifice, but any such happening downstream cannot move upstream. This is analogous to the way an electronic diode works in the computer, a one-way valve effect.

The electrical capacitor with its stored voltage charge plays an important part in computer circuitry. A plenum chamber, or box holding a volume of air, serves as a pneumatic capacitor. Similarly, the effect of an inductor, or coil, is achieved with a long pipe filled with moving air.

The only complicated element in our pneumatic computer building blocks is the flip-flop, or bistable element. A system of tubes, orifices, and balls makes a device that assumes one position upon the application of pneumatic force, and the other upon a successive application, similar to the electronic flip-flop. Pneumatic engineers use terms like “pressure drop” and “pneumatic buffering,” comparable to voltage drop and electrical buffering.

A good question at this point is just why computer designers are even considering pneumatic methods when electronic computers are doing such a fine job. There are several reasons that prompt groups like the Kearfott Division of General Precision Inc., AiResearch, IBM’s Swiss Laboratory, and the Army’s Diamond Ordnance Fuze Laboratory to develop the air-powered computers. One of these is radiation susceptibility. Diodes and transistors have an Achilles heel in that they cannot take much radiation. Thus in military applications, and in space work, electronic computers may be incapable of proper operation under exposure to fallout or cosmic rays. A pneumatic computer does not have this handicap.

High temperature is another bugaboo of the electronic computer. For operation above 100° C., for instance, it is necessary to use expensive silicon semiconductor elements. The cryogenic devices we talked of require extremely low temperatures and are thus also ruled out in hot environments. The pneumatic computer, on the other hand, can actually operate on the exhaust gases of a rocket with temperatures up to 2000° F. There may be something humanlike in this ability to operate on hot air, but there are more practical reasons like simplicity, light weight, and low cost.

The pneumatic computer, of course, has limitations of its own. The most serious is that of speed, and its top limit seems to be about 100 kilocycles a second. Although this sounds fast—a kilocycle being a thousand cycles, remember—it is tortoise-slow compared with the 50-megacycle speed of present electronic machines. But within its limitations the pneumatic machine can do an excellent job. Kearfott plans to shrink 3,000 pneumatic flip-flops, their power supply, and all their circuitry into a one-inch cube, and to pack a medium-size general-purpose digital computer, complete with memory, into a case 5-1/2 inches square and an inch thick. Such a squeezing of components surely indicates compressed air as a logical power supply!

Going beyond the use of air as a medium, Army researchers have worked with “fluid” flip-flops capable of functioning at temperatures ranging from minus 100° to plus 7,000° F.! The limit is dictated only by the material used to contain the fluid, and would surely meet requirements for the most rigorous environment foreseeable.

The fluid flip-flop operates on a different principle from its pneumatic cousin, drawing on fluid dynamics to shift from one state to the other. Fluid dynamics permits the building of switches and amplifiers that simulate electronic counterparts adequately, and the Army’s Diamond Ordnance Fuze Laboratory has built such oscillators, shift registers, and full adders, the flesh and bones of the computer. Researchers believe components can be built cheaply and that ultimately a complete fluid computer can be assembled.

The X-15 is cited as an example of a good application for fluid-type computing devices. The hypersonic aircraft flies so fast it glows, and a big part of its problem is the cooling of a large amount of electronic equipment that generates additional heat to compound the difficulty. Missiles and space vehicles have similar requirements.

Tomorrow’s computer may use liquid helium or a white-hot plasma jet instead of electronics or gas as a medium. It may use a medium nobody has dreamed of yet, or one tried earlier and discarded. Regardless of what it uses, it will probably work on the same basic theory and principles we’ve outlined here. And try as we may, we will get no more out of it than we put in.

By Herbert Goldberg © 1961 Saturday Review
“Is this your trouble?”


It is the machines that make life complicated, at the same time that they impose on it a high tempo.

—Carl Lotus Becker

4: Computer Cousins—Analog and Digital

There are many thousands of computers in operation today—in enough different outward varieties to present a hopeless classification task to the confused onlooker. Actually there are only two basic types of computing machines, the analog and the digital. There is also a third computer, an analog-digital hybrid that makes use of the better features of each to do certain jobs more effectively.

The distinction between basic types is clear-cut and may be explained in very simple terms. Again we go to the dictionary for a starting point. Webster says: “Analogue.—That which is analogous to some other thing.” Even without the terminal ue, the analog computer is based on the principle of analogy. It is actually a model of the problem we wish to solve. A tape measure is an analog device; so is a slide rule or the speedometer in your car. These of course are very simple analogs, but the principle of the more complex ones is the same. The analog computer, then, simulates a physical problem and deals in quantities which it can measure.

Some writers feel that the analog machine is not a computer at all in the strict sense of the word, but actually a laboratory model of a physical system which may be studied and measured to learn certain implicit facts.

Minneapolis-Honeywell Computer Center
A multimillion dollar aerospace computer facility. On left is an array of 16 analog computers; at right is a large digital data-processing system.
The facility can perform scientific and business tasks simultaneously.

The dictionary also gives us a good clue to the digital computer: “Digital.—Of the fingers or digits.” A digital machine deals in digits, or discrete units, in its calculations. For instance, if we ask it to multiply 2 times 2, it answers that the product is exactly 4. A slide rule, which we have said is an analog device, might yield an answer of 3.98 or 4.02, depending on the quality of its workmanship and our eyesight.
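For the modern reader, the slide rule's approximate way with 2 times 2 can be sketched in a few lines of Python. The rule multiplies by adding logarithms; the reading error used below is an assumed, illustrative figure, not a property of any real instrument:

```python
import math

def slide_rule_multiply(a, b, reading_error=0.002):
    """Multiply by adding logarithms, as a slide rule does.

    The fixed reading_error on the log scale stands in for the limits
    of the rule's workmanship and our eyesight (an assumed figure).
    """
    return 10 ** (math.log10(a) + math.log10(b) + reading_error)

print(slide_rule_multiply(2, 2))          # a shade above 4
print(slide_rule_multiply(2, 2, -0.002))  # a shade below 4
```

The digital machine adds nothing spurious to the count and so answers exactly 4 every time.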

The term “discrete” describes the units used by the digital machine; an analog machine deals with “continuous” quantities. When you watch the pointer on your speedometer you see that it moves continuously from zero to as fast as you can or dare drive. The gas gauge is a graphic presentation of the amount of fuel in the tank, just as the speedometer is a picture of your car’s speed. For convenience we interpolate the numbers 10, 20, 30, 1/4, 1/2, and so on. What we do, then, is to convert from a continuous analog presentation to a digital answer with our eyes and brain. This analog-to-digital conversion is not without complications leading to speeding tickets and the inconvenience of running out of gas far from a source of supply.

A little thought will reveal that even prior to computers there were two distinct types of calculating: measuring (analog) and counting (digital). Unless we are statisticians, we encounter 2-1/2 men or 3-1/2 women about as frequently as we are positive that there is exactly 10 gallons of fuel in the gas tank. In fact, we generally use the singular verb with such a figure since the 10 gallons is actually an arbitrary measurement we have superimposed on a quantity of liquid. Counting and measuring, then, are different things.

Because of the basic differences in the analog and digital computers, each has its relative advantages and disadvantages with respect to certain kinds of problems. Let us consider each in more detail and learn which is better suited to particular tasks. Using alphabetical protocol, we take the analog first.

The Analog Measuring Stick

We have mentioned the slide rule, the speedometer, and other popular examples of analog computers. There are of course many more. One beautiful example occurs in nature, if we can accept a bit of folklore. The caterpillar is thought by some to predict the severity of the winter ahead by the width of the dark band about its body. Even if we do not believe this charming relationship exists, the principle is a fine illustration of simulation, or the modeling of a system. Certainly there are reverse examples in nature not subject to any speculation at all. The rings in the trunk of a tree are accurate pictures of the weather conditions that caused them.

These analogies in nature are particularly fitting, since the analog computer is at its best in representing a physical system. While we do not generally recognize such homely examples as computers, automatic record-changers, washing machines, electric watt-hour meters, and similar devices are true analogs. So of course is the clock, one of the earliest computers made use of by man.

While Babbage was working with his difference engine, another Englishman, Lord Kelvin, conceived a brilliant method of predicting the height of tides in various ports. He described his system of solving differential equations in the Proceedings of the Royal Society in 1876. A working model of this “differential analyzer,” which put calculus on an automated basis, was built by Kelvin’s brother, James Thomson. Thomson used mechanical principles in producing this analog computer, whose parts were discs, balls, and cylinders.

Science Materials Center
A simple analog computer designed to be assembled and used by teen-agers. Calculo performs multiplication and division within 5 per cent accuracy, and is a useful demonstration device.

Early electrical analogs of circuits built around 1920 in this country have been discussed briefly in the chapter on the computer’s past. The thing that sparked their development was an engineer’s question, “Why don’t we build a little model of these circuits?” Solving problems in circuitry was almost like playing with toys, using the circuit analyzers, although the toys grew to sizable proportions with hundreds of components. Some of the direct-current analog type are still operating in Schenectady, New York, and at Purdue University.

A simple battery-powered electric analog gives us an excellent example of the principle of all analog machines. Using potentiometers, which vary the resistance of the circuit, we set in the problem. The answer is read out on a voltmeter. Quite simply, a known input passing through known resistances will result in a proportional voltage. All that remains is assigning values to the swing of the voltmeter needle, a process called “scaling.” For instance, we might let one volt represent 100 miles, or 50 pounds, or 90 degrees. Obviously, as soon as we have set in the problem, the answer is available on the voltmeter. It is this factor that gives the analog computer its great speed.
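The proportionality at the heart of this battery-and-voltmeter analog can be put down as simple arithmetic. The resistance figures and the miles-per-volt scale below are arbitrary choices for illustration:

```python
def voltmeter_reading(battery_volts, r_set, r_total):
    """A potentiometer set to r_set ohms out of r_total acts as a
    voltage divider, so the meter sees a proportional voltage."""
    return battery_volts * r_set / r_total

MILES_PER_VOLT = 100  # "scaling": we let one volt represent 100 miles

volts = voltmeter_reading(battery_volts=10, r_set=250, r_total=1000)
print(volts, "volts, or", volts * MILES_PER_VOLT, "miles")  # 2.5 volts, 250 miles
```

The answer is available the instant the problem is set in; no sequence of steps intervenes, which is the source of the analog's speed.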

General Electric and Westinghouse were among those building the direct-current analyzer, and the later alternating-current network type which came along in the 1930’s. The mechanical analogs were by no means forgotten, even with the success of the new electrical machines. Dr. Vannevar Bush, famous for many other things as well, started work on his analog mechanical differential analyzer in 1927 at the Massachusetts Institute of Technology. Bush drew on the pioneering work of Kelvin and other Englishmen, improving the design so that he could do tenth-order calculations.

Following Bush’s lead, engineers at General Electric developed further refinements to the “Kelvin wheels,” using electrical torque amplifiers for greater accuracy. The complexity of these computers is indicated in the size of one built in the early 1940’s for the University of California. It was a giant, a hundred feet long and filled with thousands of parts. Not merely huge, it represented a significant stride ahead in that it could perform the operation of integration with respect to functions other than just time. Instead of being a “direct” analog, the new machine was an “indirect” analog, a model not of a physical thing but of the mathematics expressing it. Engineers realized that the mechanical beast, as they called it, represented something of a dinosaur in computer evolution and could not survive. Because of its size, it cost thousands of dollars merely to prepare a place for its installation. Besides, it was limited in the scope of its work.

During World War II, however, it was all we had, and beast or not, it worked around the clock solving engineering problems, ballistics equations, and the like. England did work in this field, and Meccano—counterpart of the Gilbert Erector Set firm in the United States—marketed a do-it-yourself differential analyzer. The Russians too built mechanical differential analyzers as early as 1940.

Electronics came to the rescue of the outsized mechanical analog computers during and after the war. Paced by firms like Reeves Instrument and Goodyear Aircraft, the electronic analog superseded the older mechanical type. There was of course a transitional period, and an example of this stage is the General Electric fire-control computer installed in the B-29. It embraced mechanical, electrical, and electronic parts to do just the sort of job ideally suited to the analog type of device: that of tracking a path through space and predicting the future position of a target so that the gunsight aims at the correct point in space for a hit.

Another military analog computer was the Q-5, used by the Signal Corps to locate enemy gun installations. From the track of a projectile on a radar screen, the Q-5 did some complicated mathematics to figure backwards and pinpoint the troublesome gun. There were industrial applications as well for the analog machine. In the 1950’s, General Electric built computers to solve simultaneous linear equations for the petroleum industry. To us ultimate users, gasoline poses only one big mathematical problem—paying for a tankful. Actually, the control operations involved in processing petroleum are terribly involved, and the special analog computer had to handle twelve equations with twelve unknown quantities simultaneously. This is the sort of problem that eats up man-years of human mathematical time; even a modern digital computer has tough and expensive going, but the analog does this work rapidly and economically.

Another interesting analog machine was called the Psychological Matrix Rotation Computer. This implemented an advanced technique called multiple-factor analysis, developed by Thurstone of the University of Chicago for use in certain psychological work. Multiple-factor analysis is employed in making up the attribute tests used by industry and the military services for putting the right man in the right job. An excellent method, it was too time-consuming for anything but rough approximations until the analog computer was built for it. In effect, the computer worked in twelve dimensions, correlating traits and aptitudes. It was delivered to the Adjutant General’s Office and is still being used, so Army men who wonder how their background as baker qualifies them for the typing pool may have the Psychological Matrix Rotation Computer to thank.

In the early 1950’s, world tension prompted the building of another advanced analog computer, this one a jet engine simulator. Prior to its use, it took about four years to design, build, and test a new jet engine. Using the simulator, the time was pared to half that amount. It was a big computer, even though it was electronic. More than 6,000 vacuum tubes, 1,700 indicator lights, and 2,750 dials were hooked up with more than 25 miles of wire, using about 400,000 interconnections. All of this required quite a bit of electrical power, about what it would take to operate fifty kitchen ranges. But it performed in “real” time, and could keep tabs on an individual molecule of gas from the time it entered the jet intake until it was ejected out the afterburner!

Other analog computers were developed for utility companies to control the dispatching of power to various consumers in the most efficient manner. Again the principle was simply to build a model or analog of an actual physical system and use it to predict the outcome of operation of that system.

From our brief skim of the history of the analog computer we can recognize several things about this type of machine. Since the analog is a simulator in most cases, we would naturally expect it to be a special-purpose machine. In other words, if we had a hundred different kinds of problems, and had to build a model of each, we would end up with a hundred special-purpose computers. It follows too that the analog computer will often be a part of the system it serves, rather than a separate piece of equipment.

The Boeing Co.
Analog machine used as flight simulator for jet airliner; a means of testing before building.

There are general-purpose analog computers, of course, designed for solving a broad class of problems. They are usually separate units, instead of part of the system. We can further break down the general-purpose analog computer into two types: direct and indirect. A direct analog is exemplified in the tank gauge consisting of a float with a scale attached. An indirect analog, such as the General Electric monster built for the University of California mentioned earlier, can use one dependent variable, such as voltage, to represent all the variables of the prototype. Such an analog machine is useful in automatic control and automation processes.

Finally, we may subdivide our direct analog computer one further step into “discrete” analogs or “continuous” analogs. The term “discrete” is the quality we have ascribed to the digital computer, and a discrete analog is indicative of the overlap that occurs between the two types. Another example of this overlap is the representation of “continuous” quantities by the “step-function” method in a digital device. As we shall see when we discuss hybrid or analog-digital computers, such overlap is as beneficial as it is necessary.

General Motors Corp.
Large analog computer in rear controls car, subjecting driver to realistic bumps, pitches, and rolls, for working out suspension problems of car.

We are familiar now with mechanical, electromechanical, and fully electronic analogs. Early machines used rods of certain lengths, cams, gears, and levers. Fully electronic devices substitute resistors, capacitors, and inductances for these mechanical components, adding voltages instead of revolutions of shafts, and counting turns of wire in a potentiometer instead of teeth on a gear. Engineers and technicians use terms like “mixer,” “integrator,” and “rate component,” but we may consider the analog computer as composed of passive networks plus amplifiers where necessary to boost a faint signal.

Some consideration of what we have been discussing will give us an indication of the advantages of the analog computer over the digital type. First and most obvious, perhaps, is that of simplicity. A digital device for recording temperature could be built; but it would hardly improve on the simplicity of the ordinary thermometer. Speed is another desirable attribute of most analog computers. Since operation is parallel, with all parts of the problem being worked on at once, the answer is reached quickly. This is of particular importance in “on-line” application where the computer is being used to control, let us say, an automatic machining operation in a factory. Even in a high-speed electronic digital computer there is a finite lag due to the speed of electrons. This “slack” is not present in a direct analog and thus there is no loss of precious time that could mean the difference between a rejected and a perfect part from the lathe.

It follows from these very advantages that there are drawbacks too. The analog computer that automatically profiles a propeller blade in a metalworking machine cannot mix paint to specifications or control the speed of a subway train unless it is a very special kind of general-purpose analog that would most likely be the size of Grand Central Station and sell for a good part of the national debt. Most analogs have one particular job they are designed for; they are specialists with all the limitations that the word implies.

There is one other major disadvantage that our analog suffers by its very nature. We can tolerate the approximate answer 3.98 instead of 4, because most of us recognize the correct product of 2 times 2. But few production managers would want to use 398 rivets if it took 400 to do the job safely—neither would they want to use 402 and waste material. Put bluntly, the analog computer is less accurate than its digital cousin. It delivers answers not in discrete units, but approximations, depending on the accuracy of its own parts and its design. Calculo, an electrical-analog computer produced for science students, has an advertised accuracy of 5 per cent at a cost of about $20. The makers frankly call it an “estimator.” This is excellent for illustrating the principles of analog machines to interested youngsters, but the students could have mathematical accuracy of 100 per cent from a digital computer called the abacus at a cost of less than a dollar.

Greater accuracy in the analog computer is bought at the expense of costlier components. Up to accuracies of about 1 per cent error it is usually cheaper to build an analog device than a digital, assuming such a degree of accuracy is sufficient, of course. Analog accuracies ten times better than the 1 per cent figure are feasible, but beyond that point costs rise very sharply and the digital machine becomes increasingly attractive from a dollars and cents standpoint. Designers feel that accuracies within 0.01 per cent are pushing the barriers of practicality, and 0.001 per cent probably represents the ultimate achievable. Thus the digital computer has the decided edge in accuracy, if we make some realistic allowances. For example, the best digital machine when asked to divide 10 by 3 can never give an exact answer, but is bound to keep printing 3’s after the decimal point!
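The 10-divided-by-3 allowance is easy to demonstrate with decimal arithmetic carried to a fixed ten places, as a digital machine would carry it. Here Python's decimal module stands in for the machine:

```python
from decimal import Decimal, getcontext

getcontext().prec = 10            # a machine carrying ten places

quotient = Decimal(10) / Decimal(3)
print(quotient)                   # 3.333333333 -- the 3's would run on forever
print(quotient * 3)               # 9.999999999 -- never quite 10
```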

There are other differences between our two types of computers, among them being the less obvious fact that it is harder to make a self-checking analog computer than it is to build the same feature into the digital. However, the most important differences are those of accuracy and flexibility.

For these reasons, the digital computer today is in the ascendant, although the analog continues to have its place and many are in operation in a variety of chores. We have mentioned fire control and the B-29 gunsight computer in particular. This was a pioneer airborne computer, and proved that an analog could be built light enough for such applications. However, most fire-control computers are earthbound because of their size and complexity. A good example is the ballistic computer necessary for the guns on a battleship. In addition to the normal problem of figuring azimuth and elevation to place a shell on target, the gun aboard ship has the additional factors of pitch, roll, and yaw to contend with. These inputs happen to be ideal for analog insertion, and a properly designed computer makes corrections instantaneously as they are fed into it.

A fertile field for the analog computer from the start was that of industrial process control. Chemical plants, petroleum refineries, power generating stations, and some manufacturing processes lend themselves to control by analog computers. The simplicity and economy of the “modeling” principle, plus the instantaneous operation of the analog, made it suitable for “on-line” or “on-stream” applications.

The analog computer has been described as useful in the design of engines; it also helps design the aircraft in which these engines are used, and even simulates their flight. A logical extension of this use is the training of pilots in such flight simulators. One interesting analog simulator built by Goodyear Aircraft Corporation studied the reactions of a pilot to certain flight conditions and then was able to make these reactions itself so faithfully that the pilot was unaware that the computer and not his own brain was accomplishing the task.

The disciplines of geometry, calculus, differential equations, and other similar mathematics profit from the analog computer which is able to make a model of their curves and configurations and thus greatly speed calculations. Since the analog is so closely tied to the physical rather than the mental world, it cannot cope with discrete numbers, and formal logic is not its cup of tea.

Surely, progress has been made and improvements continue to be designed into modern analog computers. Repetitive operations can now be done automatically at high speed, and the computer even has a memory. High-speed analog storage permits the machine to make sequential calculations, a job once reserved for the digital computer. But even these advances cannot offset the basic limitations the analog computer is heir to.

Fewer analog machines are being built now, and many in existence do not enjoy the busy schedule of the digital machines. As the mountains of data pile up, created incidentally by computers in the first place, more computers are needed to handle and make sense of them. It is easier to interpret, store, and transmit digital information than analog; the digital computer therefore takes over this important task.

Even in control systems the digital machine is gaining popularity; its tremendous speed offsets its inherent cumbersomeness and its accuracy tips the scales more in its favor. These advantages will be more apparent as we discuss the digital machine on the next pages and explain the trend toward the hybrid machine, ever becoming more useful in the computer market place. Of course, there will always be a place for the pure analog—just as there has always been for any specialist, no matter what his field.

The Digital Counter

The digital computer was first on the scene and it appears now that it will outnumber and perhaps outlive its analog relative. A simple computer of this type is as old as man, though it is doubtful that it has been in use that long. Proof of its claim to pioneering lies in the words digit and calculi, for finger and pebbles, respectively. We counted “how many” before we measured “how large,” and the old Romans tallied on fingers until they ran out and then supplemented with pebbles.

Perhaps the first computations more complex than simple counting of wives or flocks came about when some wag found that he could ascertain the number of sheep by counting legs and dividing by four. When it was learned that the thing worked both ways and that the number of pickled pigs feet was four times the number of pigs processed, arithmetic was born. The important difference between analog and digital, of course, is that the latter is a means of counting, a dealing with discrete numbers rather than measuring.

This kind of computation was sorely taxed when such things as fractions and relationships like pi came along, but even then man managed to continue dealing with numbers themselves rather than quantity. Just as the slide rule is a handy symbol for the analog computer, the abacus serves us nicely to illustrate the digital type, and some schools make a practice of teaching simple arithmetic to youngsters in this manner.

Our chapter on the history of the computer touched on early efforts in the digital field, though no stress was laid on the distinction between types. We might review a bit, and pick out which of the mechanical calculating devices were actually digital. The first obviously was the abacus. It was also the only one for a long time. Having discovered the principle of analogy, man leaned in that direction for many centuries, and clocks, celestial simulators, and other devices were analog in nature. Purists point out that even the counting machines of Pascal and Leibnitz were analog computers, since they dealt with the turning of shafts and gears rather than the manipulation of digits. The same reasoning has caused some debate about Babbage’s great machines in the 1800’s, although they are generally considered a digital approach to problem-solving. Perhaps logicians had as much as anyone to do with the increasing popularity of the digital trend when they pointed out the advantages of a binary or two-valued system.

With the completion in 1946 by Eckert and Mauchly of the electronic marvel they dubbed ENIAC, the modern digital computer had arrived and the floodgates were opened for the thousands of descendants that have followed. For every analog computer now being built there are dozens or perhaps hundreds of digital types. Such popularity must be deserved, so let us examine the creature in an attempt to find the reason.

Courtesy of the National Science Foundation
The computer family tree. Its remarkable growth began with government-supported research, continued in the universities, and the current generation was developed primarily in private industry.

We said that by its nature the analog device tended to be a special-purpose computer. The digital computer, perhaps because its basic operation is so childishly simple, is best suited for general-purpose work. It is simple, consisting essentially of switches that are either on or off. Yet Leibnitz found beauty in that simplicity, and even the explanation of the universe. Proper interconnection of sufficient on-off switches makes possible the most flexible of all computers—man’s brain. By the same token, man-made computers of the digital type can do a wider variety of jobs than can the analog which seemingly is more sophisticated.
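The “proper interconnection of on-off switches” is no figure of speech; two such switch operations already suffice to add a pair of binary digits. A sketch in Python's boolean arithmetic:

```python
def half_adder(a, b):
    """Add two binary digits using nothing but on/off logic:
    exclusive-or gives the sum bit, "and" gives the carry."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(a, "+", b, "->", "carry", carry, "sum", s)
```

Chain enough of these together and the machine adds numbers of any length, which is the whole of its arithmetic.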

A second great virtue of the digital machine is its accuracy. Even a trial machine of Babbage had a 5-place accuracy. This is an error of only one part in a hundred thousand, achievable in the analog only at great expense. This was of course only a preliminary model, and the English inventor planned 20-place accuracy in his dream computer. Present electronic digital computers offer 10-place accuracy as commonplace, a precision impossible of achievement in the analog.

We pointed out in the discussion of analog computers that the complexity and expense of increased accuracy was in direct proportion to the degree of accuracy desired. Happily for the digital machine, the reverse is true in its case. Increasing accuracy from five to six figures requires a premium of one-fifth, or 20 per cent. But jumping from 10-place to 11-place precision costs us only 10 per cent, and from 20-place to 21-place drops to just 5 per cent.

Actually, such a high degree of accuracy is not necessary in most practical applications. For example, the multiplication of 10-digit numbers may yield a 20-digit answer. If we desired, we could increase the capability of our digital computer to twenty digits and give an accuracy of one part in a hundred million trillion! However, we simply “round off” the last ten digits and leave the answer in ten figures, an accuracy no analog computer can match. The significant point is that the analog can never hope to compete with digital types for accuracy.
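The rounding-off just described can be sketched directly. The round_off helper below is an illustrative name of our own, not any particular machine's instruction:

```python
def round_off(n, places=10):
    """Keep only the leading `places` significant figures of an
    integer, rounding away the rest."""
    drop = len(str(abs(n))) - places
    if drop <= 0:
        return n
    return round(n, -drop)  # a negative second argument rounds off low digits

a = 9999999999                 # a 10-digit number...
product = a * a                # ...squared gives a 20-digit product, held exactly
print(product)                 # 99999999980000000001
print(round_off(product))      # 99999999980000000000 -- ten figures kept
```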

A third, if less important, advantage of the digital machine is its compactness. We are speaking now of later computers, and not the pioneer electromechanical giants, of course. The transistor and other small semiconductor devices supplanted the larger tubes, and magnetic cores took the place of cruder storage components. Now even more exotic devices are quietly ousting these, as magnetic films and cryotrons begin to be used in computers.

Science Materials Center
BRAINIAC, another do-it-yourself computer. This digital machine is here being programmed to solve a logic problem involving a will.

This drastic shrinking of size by thinking small on the part of computer designers increases the capacity of the digital computer at no sacrifice in accuracy or reliability. The analog, unfortunately, cannot make use of many of these solid-state devices. Again, the bugaboo of accuracy is the reason; let’s look further into the problem.

The most accurate and reliable analog computers are mechanical in nature. We can cut gears and turn shafts and wheels to great accuracy and operate them in controlled temperature and humidity. Paradoxically, this is because mechanical components are nearer to digital presentations than are electrical switches, magnets, and electronic components. A gear can have a finite number of teeth; when we deal with electrons flowing through a wire we leave the discrete and enter the continuous world. A tiny change in voltage or current, or magnetic flux, compounded several hundred times in a complex computer, can change the final result appreciably if the errors are cumulative, that is, if they are allowed to pile up. This is what happens in the analog computer using electrical and electronic components instead of precisely machined cams and gears.

The digital device, on the other hand, is not so penalized. Though it uses electronic switches, these can be so set that even an appreciable variation in current or voltage or resistance will not affect the proper operation of the switch. We can design a transistor switch, for example, to close when the current applied exceeds a certain threshold. We do not have to concern ourselves if this excess current is large or small; the switch will be on, no more and no less. Or it will be completely off. Just as there is no such thing as being a little bit dead, there is no such thing as a partly off digital switch. So our digital computer can make use of the more advanced electronic components to become more complex, or smaller, or both. The analog must sacrifice its already marginal accuracy if it uses more electronics. The argument here is simplified, of course; there are electronic analog machines in operation. However, the problem of the “drift” of electronic devices is inherent and a limiting factor on the performance of the analog.
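The on-or-off behavior just described can be sketched as a simple model (the threshold value and currents here are illustrative, not taken from any actual circuit):

```python
# Illustrative model of a digital threshold switch: any current over
# the threshold reads as fully "on" (1); anything at or below reads
# fully "off" (0). Component drift that leaves the signal on the same
# side of the threshold does not change the result at all.
THRESHOLD = 1.0  # arbitrary units; an assumed value for illustration

def switch_state(current):
    return 1 if current > THRESHOLD else 0

# A 10 per cent drift in the "on" current makes no difference:
print(switch_state(1.5), switch_state(1.65), switch_state(0.2))
# 1 1 0
```

This is why the digital machine tolerates imperfect components: only the side of the threshold matters, never the exact value.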

These, then, are some of the advantages the digital computer has over its analog relative. It is more flexible in general—though there are some digital machines that are more specialized than some analog types; it is more accurate and apparently will remain so; and it is more amenable to miniaturization and further complexity because its designer can use less than perfect parts and produce a perfect result.

In the disadvantage department the digital machine’s only drawback seems to be its childish way of solving problems. About all it knows how to do is to add 1 and 1 and come up with 2. To multiply, it performs repetitive additions, and solving a difficult equation becomes a fantastically complex problem when compared with the instantaneous solution possible in the analog machine. The digital computer redeems itself by performing its multitudinous additions at fabulous speeds.
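The “childish” method of multiplying by repetitive addition can be written out in a few lines:

```python
# Multiplication by repeated addition, as the text describes:
# add the multiplicand to itself once per unit of the multiplier.
def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply(7, 6))  # 42, reached by six separate additions
```

A human would call this tedious; the machine redeems itself, as the text says, by doing the additions at fabulous speed.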

Because it must be fed digits in its input, the digital machine is not economically feasible in many applications that will probably be reserved for the analog. A digital clock or thermometer for household use would be an interesting gimmick, but hardly worth the extra trouble and expense necessary to produce. Even here, though, first glances may be wrong and in some cases it may prove worth while to convert analog inputs to digital with the reverse conversion at the output end. One example of this is the airborne digital computer which has taken over many jobs earlier done by analog devices.

There is another reason for the digital machine’s ubiquitousness, a reason it does not seem proper to list as merely a relative advantage over the analog. We have described the analog computer used as an aid to psychological testing procedures, and its ability to handle a multiplicity of problems at once. This perhaps tends to obscure the fact that the digital machine by its very on-off, yes-no nature is ideally suited to the solving of problems in logic. If it achieves superiority in mathematics in spite of its seemingly moronic handling of numbers, it succeeds in logic because of this very feature.

While it might seem more appropriate that music be composed by analogy, or that a chess-playing machine would likely be an analog computer, we find the digital machine in these roles. The reason may be explained by our own brains, composed of billions of neurons, each capable only of being on or off. While many philosophers build a strong case for the yes-no-maybe approach with its large areas of gray, the discipline of formal logic admits only two states, those that can so conveniently be represented in the digital computer’s flip-flops or magnetic cores.

The digital computer, then, is not merely a counting machine, but a decision-maker as well. It can decide whether something should be added, subtracted, or ignored. Its logical manipulations can by clever circuitry be extended from AND to OR, NOT, and NOR. It thus can solve not only arithmetic, but also the problems of logic concerning foxes, goats, and cabbages, or cannibals and missionaries that give us human beings so much trouble when we encounter them.
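The logical operations named above can be built directly from the machine’s on-off (1/0) values; a minimal sketch:

```python
# The logical operations AND, OR, NOT, and NOR, built from the
# on/off (1/0) values a digital computer's circuitry provides.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a
def NOR(a, b): return NOT(OR(a, b))  # NOR is OR followed by NOT

# Truth table for all four input combinations:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NOR(a, b))
```

From combinations of just these operations the machine can work through the foxes-goats-and-cabbages class of puzzle step by logical step.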

The fact that the digital computer is just such a rigorously logical and unbending machine poses problems for it in certain of its dealings with its human masters. Language ideally should be logical in its structure. In general it probably is, but man is so perverse that he has warped and twisted his communications to the point that a computer sticking strictly to book logic will hit snags almost as soon as it starts to translate human talk into other human talk, or into a logical machine command or answer.

For instance, we have many words with multiple meanings which give rise to confusion unless we are schooled in subtleties. There are stories, some of them apocryphal but nonetheless pointing up the problem, of terms like “water goat” cropping up in an English-to-Russian translation. Investigation proved that the more meaningful term would have been “hydraulic ram.” In another interesting experiment, the expression, “the spirit is willing but the flesh is weak” was machine translated into Russian, and then that result in turn re-translated back into English much in the manner of the party game of “Telephone” in which an original message is whispered from one person to another and finally back to the originator. In this instance, the final version was, “The vodka is strong, but the meat is rotten.”

It is a fine distinction here as to who is wrong, the computer or man and his irrational languages. Chances are that in the long run true logic will prevail, and instead of us confusing the computer it will manage instead to organize our grammar into the more efficient tool it should be. With proper programming, the computer may even be able to retain sufficient humor and nuance to make talk interesting and colorful as well as utilitarian.

We can see that the digital machine with its flexibility, accuracy, and powerful logical capability is the fair-haired one of the computer family. Starting with a for abacus, digital computer applications run through practically the entire alphabet. Its take-over in the banking field was practically overnight; it excels as a tool for design and engineering, including the design and engineering of other computers. Aviation relies heavily on digital computers already, from the sale of tickets to the control of air traffic.

Gaming theory is important not only to the Saturday night poker-player and the Las Vegas casino operator, but to military men and industrialists as well. Manufacturing plants rely more and more on digital techniques for controls. Language translation, mentioned lightly above, is a prime need at least until we all begin speaking Esperanto, Ido, or Computerese. Taxation, always with us, may at least be more smoothly handled when the computers take over. Insurance, the arrangement of music, spaceflight guidance, and education are random fields already dependent more or less on the digital computer. We will not take the time here to go thoroughly into all the jobs for which the computer has applied for work and been hired; that will be taken up in later chapters. But from even a quick glance the scope of the digital machine already should be obvious. This is why it is usually a safe assumption that the word computer today refers to the digital type.

Hybrid Computers

We have talked of the analog and the digital; there remains a further classification that should be covered. It is the result of a marriage of our two basic types, a result naturally hybrid. The analog-digital computer is third in order of importance, but important nonetheless.

Minneapolis-Honeywell
Nerve center of Philadelphia Electric Company’s digital computer-directed automatic economic dispatch system is this console from which power directors operate and supervise loading of generating units at minimum incremental cost.

Necessity, as always, mothered the invention of the analog-digital machine. We have talked of the relative merits of the two types; the analog is much faster on a complex problem such as solving simultaneous equations. The digital machine is far more accurate. As an example, the Psychological Matrix Rotator described earlier could solve its twelve equations practically instantaneously. A digital machine might take seconds—a terribly long time by computer standards. If we want an accurate high-speed differential analyzer, we must combine an analog with a digital computer.

Because the two are hardly of the same species, this breeding is not an easy thing. But by careful study, designers effected the desired mating. The hybrid is not actually a new type of computer, but two different types tied together and made compatible by suitable converters.

The composite consists of a high-speed general-purpose digital computer, an electronic analog computer, an analog-to-digital converter, a digital-to-analog converter, and a suitable control for these two converters. The converters are called “transducers” and are able to change the continuous analog signal into discrete pulses of energy, or vice versa.
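What such a converter pair does can be sketched in miniature (the step size and voltage here are assumed values, chosen only for illustration): the continuous signal is mapped to the nearest discrete level, and the reverse conversion maps the level back to a voltage.

```python
# Illustrative analog-to-digital conversion: a continuous voltage is
# quantized to the nearest discrete level; the reverse conversion
# turns the level count back into a voltage. Step size is assumed.
STEP = 0.25  # volts per level (illustrative)

def to_digital(voltage):
    """Quantize a continuous voltage to an integer level count."""
    return round(voltage / STEP)

def to_analog(level):
    """Reverse conversion: level count back to a voltage."""
    return level * STEP

v = 1.37
level = to_digital(v)          # 1.37 / 0.25 = 5.48, rounds to 5
print(level, to_analog(level))  # 5 1.25
```

The small discrepancy between 1.37 and 1.25 is the price of quantization; a finer step size shrinks it, which is the converter designer’s trade-off.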

Sometimes called digital differential analyzers, the hybrid computers feature the ease of programming of the analog, plus its speed, and the accuracy and much broader range of the digital machine. Bendix, among others, produced such machines several years ago. The National Bureau of Standards recently began development of what it calls an analog-digital differential analyzer, which it expects to be from ten to a hundred times more accurate than earlier hybrid computers. The NBS analyzer will be useful in missile and aircraft design work.

Despite its apparent usefulness as a compromise and happy medium between the two types, the hybrid would seem to have as limited a future as any hybrid does. Pure digital techniques may be developed that will be more efficient than the stopgap combination, and the analog-digital will fall by the wayside along the computer trail.

Summary

Historically, the digital computer was first on the scene. The analog came along, and for a time was the more popular for a variety of reasons. One of these was the naïve, cumbersome mode of operation the digital computer is bound to; another its early lack of speed. Both these drawbacks have been largely eliminated by advances in electronics, and apparently this is only the beginning. In a few years the technology has progressed from standard-size vacuum tubes through miniature tubes and the shrinking of other components, to semiconductors and other tinier devices, and now we have something called integrated circuitry, with molecular electronics on the horizon. These new methods promise computer elements approaching the size of the neurons in our own brains, yet with far faster speed of operation.

Such advances help the digital computer more than the analog, barring some unexpected breakthrough in the accuracy problem of the latter. Digital building blocks become ever smaller, faster, cheaper, and more reliable. Computers that fit in the palm of the hand are on the market, and are already bulky by comparison with those in the laboratory. The analog-digital hybrid most likely will not be new life for the analog, but an assimilating of its better qualities by the digital.


“What’s one and one and one and one and one and one and one and one and one and one?”

“I don’t know,” said Alice. “I lost count.”

“She can’t do Addition,” the Red Queen interrupted.

—Lewis Carroll