5: The Binary Boolean Bit
In this world full of “bigness,” in which astronomical numbers apply not only to the speed of light and the distance to stars but to our national debt as well, it is refreshing to recall that some lucky tribes have a mathematical system that goes, “One—two—plenty!” Such an uncluttered life at times seems highly desirable, and we can only envy those who lump all numbers from three to billions as simply “plenty.”
Instead we are faced today with about as many different number systems as there are numbers, having come a long way from the dawn of counting when an even simpler method than “one—two—plenty” prevailed. Man being basically self-centered, he first thought in terms of “me,” or one. Two was not a concept, but two “ones”; likewise, three “ones” and so on. Pebbles were handy, and to represent the ten animals slain during the winter, a cave man could make ten scratches on the wall or string out that many stones.
It is said that the ancient cabbies in Rome had a taximeter that dropped pebbles one by one onto a plate as the wheels turned the requisite number of revolutions. This plate of stones was presented to the passenger at the end of his ride—perhaps where we get the word “fare”! Prices have risen so much that it would take quite a bag of pebbles in the taximeter today.
Using units in this manner to express a sum is called the unitary system. It is the concept that gives rise to the “if all the dollars spent in this country since such and such a time were laid end to end—” analogies. Put to practice, this might indeed have a salutary effect, but long ago man learned that it was not practical to stick to a one-for-one representation.
How long it was before we stumbled onto the fact that we had a “handy” counting system attached to our wrists is not positively known, but we eventually adopted the decimal system. In some places the jump from one to ten was not made completely. The Pueblo Indians, for instance, double up one fist each time a sum of five is reached. Thus the doubled fist and two fingers on the other hand signify seven. In the mathematician’s language, this is a modulo-5 system. The decimal system is modulo-10; in other words we start over each time after reaching 10.
Besides the word digit, which ties fingers and numbers together in our vocabulary, the Roman numerals V and X are graphic representations of one hand with thumb widespread, and two hands crossed, respectively. A point worth remembering is that the decimal system was chosen arbitrarily because we happen to have ten digits. There is no divine arithmetical significance in the number 10; in fact mathematicians would prefer 12, since it can be divided more ways.
The ancient Mayans felt that if 10 was ten times as good as 1, then surely 20 would be twice the improvement of the decimal system. So they pulled off their boots and added toes to fingers for a modulo-20 number system. Their word for 20, then, was the same as that for “the whole man,” for very good reason. Other races adopted even larger base systems, the base of 60 being an example.
If we look to natural reasons for the development of number systems, we might decide that the binary, or two-valued system, did not attain much prominence in naïve civilizations because there are so few one-legged, two-toed animals! Only when man built himself a machine uniquely suited to two-valued mathematics did the binary system come into its own.
Numbers are merely conventions, rigorous conventions, to be sure, with no semantic vagueness. God did not ordain that we use the decimal system, as evidenced by the large number of other systems that work just fine. Some abacuses use the biquinary system, and there are septal, octal, and sexagesimal systems. We can even express numbers in an ABC or XYZ notation. So a broad choice was available for the computer designer when he began to look about for the most efficient system for his new machine.
Considering only the question of a radix, or base, which will permit the fewest elements to represent the desired numbers, mathematicians can show us that the most efficient base is not 10, or 12, or any other whole number, but the curious value 2.71828. The ideal model is not found in man, then, since man does not seem to have 2.71828 of anything. However, the strange-looking number does happen to be the base of the system of natural logarithms.
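For readers who like to see such a claim checked, the economy of a base can be estimated with a few lines of Python. This sketch is our own rough illustration, not anything from the mathematicians’ proof: the cost measure (the base times the number of digit positions needed) and the function name are assumptions of the usual textbook argument. Base 3, the whole number nearest 2.71828, comes out slightly ahead.

```python
def representation_cost(base, limit=1_000_000):
    """Rough hardware cost of counting to `limit`: each digit position must
    distinguish `base` different values, so cost grows as base * positions."""
    positions = 0
    n = limit
    while n > 0:                 # count the digit positions `limit` needs
        n //= base
        positions += 1
    return base * positions

for base in (2, 3, 4, 10, 12, 60):
    print(base, representation_cost(base))
# Base 3, nearest the ideal 2.71828, scores lowest; bases 2 and 4 tie just behind.
```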
Now a system of mathematics based on 2.71828 might make the most efficient use of the components of the computer, but it would play hob with other factors, including the men who must work with such a weird set of numbers. As is often done, a compromise was made between ideal and practical choices. Since the computer with the most potential seems to be the electronic computer, and since its operation hinges on the opening and closing of simple or sophisticated switches, a two-valued mathematical system, the binary system, was chosen. It wasn’t far from the ideal 2.71828, and there was another even more powerful reason for the choice. Logic is based on a yes-no, true-false system. Here, then, was the best of all possible number systems: the lowly, apparently far-from-sophisticated binary notation. As one writer exclaimed sadly, a concept which had been hailed as a monument to monotheism ended up in the bowels of a robot!
The Binary System
It is believed from ancient writings that the Chinese were aware of the binary or two-valued system of numbers as early as 3000 B.C. However, this fact was lost as the years progressed, and Leibniz thought that he had discovered binary himself almost 5,000 years later. In an odd twist, Leibniz apprised his friend Grimaldi, the Jesuit president of the Tribunal of Mathematics in China, of the religious significance of binary 1 and 0 as an argument against Buddhism!
A legend in India also contains indications of the power of the binary system. The inventor of the game of chess was promised any award he wanted for this service to the king. The inventor asked simply that the king place a grain of wheat on the first square of the board, two on the second, and then four, eight, and so on in ascending powers of two until the sixty-four squares of the board were covered. Although the king thought his subject a fool, this amount of wheat would have covered the entire earth to a depth of about an inch!
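The legend’s arithmetic is easily checked. A two-line Python aside (ours, not the legend’s):

```python
# One grain on the first square, doubling across all sixty-four squares.
total = sum(2**square for square in range(64))
print(total)   # 18446744073709551615 grains, that is, 2**64 - 1
```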
We are perhaps more familiar with the binary system than we realize. Morse code, with its dots and dashes, for example, is a two-valued system. And the power of a system with a base of two is evident when we realize that given a one-pound weight and further weights each twice as heavy as the last—two pounds, four, eight, and so on—we can weigh out any whole-numbered amount with just one weight of each size.
At first glance, however, binary numbers seem a hopeless conglomeration of ones and zeros. This is so only because we have become conditioned to the decimal system, which was even more hopeless to us as youngsters. We may have forgotten, with the contempt of familiarity, that our number system is built on the idea of powers. In grade school we learned that starting at the right we had units, tens, hundreds, thousands, and so on. In the decimal number 111, for example, we mean 1 times 10², plus 1 times 10¹, plus 1. We have handled so many numbers so many times we have usually forgotten just what we are doing, and how.
The binary system uses only two numbers: 1 and 0. So it is five times as simple as the decimal system. It uses powers of two rather than ten, again far simpler. Let’s take the binary number 111 and break it down just as we do a decimal number. Starting at the left, we have 1 times 2², plus 1 times 2¹, plus 1. This adds up to 7, and there is our answer.
The decimal system is positional; this is what made it so much more effective in the simple expression of large numbers than the Roman numeral system. Binary is positional too, and for larger numbers we continue moving toward the left, increasing our power of two each time. Thus 1111 is 2³ plus 2² plus 2¹ plus 1.
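The positional rule is mechanical enough to put into a few lines of Python. A minimal sketch of our own, reading a binary numeral just as described, with the power of two rising toward the left:

```python
def binary_to_decimal(bits: str) -> int:
    """Evaluate each position: rightmost digit counts 1, the next 2, then 4..."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * 2**position
    return total

print(binary_to_decimal("111"))    # 7, as worked out above
print(binary_to_decimal("1111"))   # 15: 8 + 4 + 2 + 1
```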
[Illustration: A computer teaching machine answering a question about the binary system. (System Development Corp.)]
We are familiar with decimal numbers like 101. This means 1 hundred, no tens, and 1 unit. Likewise in binary notation 101 means one 4, no 2’s, and one 1. For all its seeming complexity, then, the binary system is actually simpler than the “easy” decimal one we are more familiar with. But despite its simplicity, the binary system is far from being inferior to the decimal system. You can prove this by doing some counting on your fingers.
Normally we count, or tally, by bending down a finger for each new unit we want to record. With both hands, then, we can add up only ten units, a quite limited range. We can add a bit of sophistication, and assign a different number to each finger; thus 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Now, believe it or not, we can tally up to 55 with our hands! As each unit is counted, we raise and lower the correct finger in turn. On reaching 10, we leave that finger—thumb, actually—depressed, and start over with 1. On reaching 9, we leave it depressed, and so on. We have increased the capacity of our counting machine by 5-1/2 times without even taking off our shoes. The mathematician, by the way, would say we have a capability of not 55 but 56 numbers, since all fingers up would signify 0, which can be called a number. Thus our two hands represent to the mathematician a modulo-56 counter.
This would seem to vanquish the lowly binary system for good, but let’s do a bit more counting. This time we will assign each finger a number corresponding to the powers of 2 we use in reading our binary numbers. Thus we assign the numbers 1, 2, 4, 8, 16, 32, 64, 128, 256, and 512. How many units can we count now? Not 10, or 55, but a good bit better than that. Using binary notation, our ten digits can now record a total of 1,023 units. True, it will take a bit of dexterity, but by bending and straightening fingers to make the proper sums, when you finally have all fingers down you will have counted 1,023, or 1,024 if you are a mathematical purist.
Once we are convinced that the binary method does have its merits, it may be a little easier to pursue a mastery of representing numbers in binary notation, difficult as it may seem at the outset. The usual way to convert is to remember, or list, the powers of 2, and start at the left side with the largest power that can be divided into the decimal number we want to convert. Suppose we want to change the number 500 into binary. First we make a chart of the positions:
| Power of 2 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
| Decimal Number | 256 | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |
| Binary Number | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
Since 256 is the largest power of 2 that will go into 500, we start there, knowing that there will be nine binary digits, or “bits,” in our answer. We place a 1 in that space to indicate that there is indeed an eighth power of 2 included in 500. Since 128 will go into the remainder, we put a 1 in that space also. Continuing in this manner, we find that we need 1’s until we reach the “8” space, which we must skip since our remainder does not contain an 8. We mark a 1 in the 4 space, but skip the 2 and the 1. Our answer, then, in binary notation is 111110100. This number is called “pure binary.” It can also lead to pure torture for human programmers whose eyes begin to bug with this “bit chasing,” as it has come to be called. Everything is of course relative, and the ancient Roman might gladly have changed DCCCLXXXVIII to binary 1101111000, which is two digits shorter.
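The chart method translates directly into a short routine. In this sketch (the names are ours) we walk down the powers of two, writing a 1 whenever a power fits into what is left of the number:

```python
def decimal_to_binary(n: int) -> str:
    """Convert by finding the largest power of 2 first, as in the chart."""
    power = 1
    while power * 2 <= n:         # largest power of 2 that fits into n
        power *= 2
    bits = ""
    while power >= 1:
        if n >= power:
            bits += "1"           # this power of 2 is present in n
            n -= power
        else:
            bits += "0"
        power //= 2
    return bits

print(decimal_to_binary(500))     # 111110100
```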
There is a simpler way of converting that might be interesting to try out. We’ll start with our same 500. Since it is an even number, we put a 0 beneath it. Moving to the left, we divide by two and get 250. This also is an even number, so we mark down a 0 in our binary equivalent. The next division gives 125, an odd number, so we put down a 1. We continue to divide successively, marking a 0 for each even quotient and a 1 for each odd one. Although it may not be obvious right away, we are merely arriving at powers of two by a process called mediation, or halving.
| Decimal | 1 | 3 | 7 | 15 | 31 | 62 | 125 | 250 | 500 |
| Binary | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
Obviously we can reverse this procedure to convert binary numbers to their decimal equivalents.
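The halving method is even easier to mechanize, since no table of powers is needed at all. A minimal sketch:

```python
def decimal_to_binary_by_halving(n: int) -> str:
    """Mediation: write 0 for even, 1 for odd, halving until 1 is reached."""
    bits = ""
    while n >= 1:
        bits = ("1" if n % 2 else "0") + bits   # each new digit goes on the left
        n //= 2
    return bits

print(decimal_to_binary_by_halving(500))        # 111110100
```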
There is an interesting extension of this process called duplication by which multiplication can be done quite simply. Let us multiply 95 times 36. We will halve our 95 as we did in the earlier example, while doubling the 36. This time when we have an even number in the left column, we will simply cancel out the corresponding number in the right column.
| 95 | 36 |
| 47 | 72 |
| 23 | 144 |
| 11 | 288 |
| 5 | 576 |
| 2 | 1152 (canceled) |
| 1 | 2304 |
| | ——— |
| | 3420 |
This clever bit of mathematics is called Russian peasant multiplication, although it was also known to early Egyptians and many others. It permits unschooled people, with only the ability to halve, double, and add, to do fairly complex multiplication problems. Actually it is based on our old stand-by, the binary system. What we have done is to represent the 95 “dyadically,” or by twos, and to multiply 36 successively by each of these powers as applicable. We will not digress further, but leave this as an example of the tricks possible with the seemingly simple binary system.
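The whole procedure fits in a few lines. In this sketch (our own), rows whose halving-column entry is even are skipped, exactly as they were canceled in the table:

```python
def peasant_multiply(a: int, b: int) -> int:
    """Halve a and double b; add b to the total whenever a is odd."""
    total = 0
    while a > 0:
        if a % 2 == 1:      # odd row: keep the doubled value
            total += b
        a //= 2             # halve, discarding any remainder
        b *= 2              # double
    return total

print(peasant_multiply(95, 36))   # 3420
```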
Even after we have learned to convert from the decimal numbers we are familiar with into binary notation almost by inspection, the results are admittedly unwieldy for human handling. An employee who is used to getting $105 a week would be understandably confused if the computer printed out a check for him reading $1101001. For this reason the computer programmer has reached a compromise with the machine. He speaks decimal, it speaks binary; they meet each other halfway with something called binary-coded decimal. Here’s the way it works.
A little thought will show that the decimal numbers from 0 through 9 can be represented in binary using four bits. Thus:
| Decimal | Binary |
| 0 | 0 |
| 1 | 1 |
| 2 | 10 |
| 3 | 11 |
| 4 | 100 |
| 5 | 101 |
| 6 | 110 |
| 7 | 111 |
| 8 | 1000 |
| 9 | 1001 |
In the interest of uniformity we fill in the blanks with 0’s, so that each decimal number is represented by a four-digit block, or word, of binary code. Now when the computer programmer wants to feed the number 560 into the computer in binary he breaks it into separate words of 5, 6, and 0; or 0101, 0110, and 0000. In effect, we have changed $5 words into four-bit words! The computer couldn’t care less, since it handles binary digits at the rate of millions a second; and the human is better able to keep his marbles while he works with the computer. Of course, there are some computers that are classed as pure binary machines. These work on mathematical problems, with none of the restrictions imposed by human frailty. For the computer the pure binary system is more efficient than the binary decimal compromise.
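A minimal sketch of the compromise (the function name is ours): each decimal digit is encoded as its own four-bit block.

```python
def to_binary_coded_decimal(number: str) -> str:
    """Encode each decimal digit separately as a four-bit binary block."""
    return " ".join(format(int(digit), "04b") for digit in number)

print(to_binary_coded_decimal("560"))   # 0101 0110 0000, as in the text
```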
The four-digit words can be made to represent not only numbers, but letters as well. When this is done it is called an alpha-numeric or alphameric code. Incidentally, it is conceivable that language could be made up of only 1’s and 0’s, or perhaps a’s and b’s would be better. All it would take would be the stringing together of enough letters to cover all the words there are. The result would be rather dull, with words like aabbababaabbaaba, bbaabbaabababaaabab, and aaaaaaaaabaaa; it is doubtful that the computer will make much headway with a binary alphabet for its human masters.
In the early days of binary computer work, the direct conversion to binary code we have discussed was satisfactory, but soon the designers of newer machines and calculating methods began to juggle the digits around for various reasons. For one thing, a decimal 0 was represented by four binary 0’s. Electrically, this represents no signal at all in the computer’s inner workings. If trouble happened, say a loose connection, or a power failure for a split second, the word 0000 might be printed out and accepted as a valid zero when it actually meant a malfunction. So the designers got busy trying other codes than the basic binary.
One clever result is the “excess-3” code. In this variation 3 is added to each decimal number before conversion. A decimal 0 is then represented by the word 0011 instead of 0000. There is, in fact, no such computer word as 0000 in excess-3 code. This eliminates the possibility of an error being taken for a 0. Excess-3 does something else too. If each digit is changed, that is, if 1’s become 0’s and 0’s become 1’s, the new word is the “9’s complement” of the original. For example, the binary code for 4 in excess-3 is 0111. Changing all the digits, we get 1000, which is decimal 5. This is not just an interesting curiosity, but the 9’s complement of 4 (9 minus 4). Anyone familiar with an adding machine is used to performing subtraction by using complements of numbers. The computer cannot do anything but add; by using the excess-3 code it can subtract by adding. Thus, while the computer cannot subtract 0110 from 1000, it can quite handily add 1001 to 1000 to get the same result.
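The complement trick can be verified in a few lines. This sketch (ours) encodes each digit in excess-3, inverts all four bits, and shows that the result is always the excess-3 code for 9 minus the digit:

```python
def excess_3(digit: int) -> int:
    return digit + 3                          # excess-3: add 3 before encoding

for d in range(10):
    inverted = 0b1111 ^ excess_3(d)           # flip all four bits
    print(d, format(excess_3(d), "04b"), "->", format(inverted, "04b"),
          "which is excess-3 for", inverted - 3)   # always 9 - d
```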
There are many other reasons for codes, among them being the important one of checking for errors. “Casting out nines” is a well-known technique of the bookkeeper for locating mistakes in work. Certain binary codes, containing what is called a “parity bit,” have the property of self-checking, in a manner similar to casting out nines. A story is told of some pioneer computer designers who hit on the idea of another means of error checking not as effective as the code method.
The idea was clever enough, it being that identical computers would do each problem and compare answers, much like the pairs of abacus-wielders in Japan’s banks. In case both computers did not come up with the same answer, a correction would be made. With high hopes, the designers fed a problem into the machines and sat back to watch. Soon enough a warning light blinked on one machine as it caught an error. But simultaneously a light blinked on the other. After that, chaos reigned until the power plugs were finally pulled. Although made of metal and wires, the computers demonstrated a remarkably human trait; each thought the other was wrong and was doing its best to change its partner’s answer! The solution, of course, was to add a third computer.
Binary decimal, as we have pointed out, is a wasteful code. The decimal number 100 in binary decimal coding is 0001 0000 0000, or 12 digits. Pure binary is 1100100, or only 7 digits. By going to a binary-octal code, using eight numbers instead of ten, the words can be 3-bit instead of 4-bit. This is called an “economy” code, and finds some application. There are also “Gray” codes, reflected binary codes, and many more, each serving a particular purpose. Fortunately for the designer, he can be prodigal with his use of codes. With 4-bit words, 29 billion codes are available, so a number of them are still unused.
Having translated our decimal numbers into code intelligible to our computer, we still have the mathematical operations to perform on it. With a little practice we can add, subtract, multiply, and divide our binary numbers quite easily, as in the examples that follow.
| Addition: | 1100 | (12) |
| 0111 | ( 7) | |
| —— | —— | |
| 10011 | (19) | |
| Subtraction: | 1010 | (10) |
| - 0010 | ( 2) | |
| ——— | —— | |
| 1000 | (8) |
| Multiplication: | 0110 | (6) |
| × 0011 | (3) | |
| ——— | ——— |
| 0110 | ||
| 0110 | ||
| 0000 | ||
| 0000 | ||
| ——— | ||
| 10010 | (18) | |
| Division: | 1010 ÷ 10 = 0101 | (10 ÷ 2 = 5) |
The rules should be obvious from these examples. Just as we add 5 and 5 to get 0 with 1 to carry, we add 1 and 1 and get 0 with 1 to carry in binary. Adding 1 and 0 gives 1, 0 and 0 gives 0. Multiplying 1 times 1 gives 1, 1 times 0 gives 0, and 0 times 0 gives 0. One divides into 1 once, and into 0 no times. Thus we can manipulate in just the manner we are accustomed to.
The computer does not even need to know this much. All it is concerned with is addition: 1 plus 1 gives 0 and 1 to carry; 1 plus 0 gives 1; and 0 plus 0 gives 0. This is all it knows, and all it needs to know. We have described how it subtracts by adding complements. It can multiply by repetitive additions, or more simply, by shifting the binary number to the left. Thus, 0001 becomes 0010 in one shift, and 0100 in two shifts, doubling each time. This is of course just the way we do it in the decimal system. Shifting to the right divides by two in the binary system.
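Python’s shift operators make the doubling visible. A quick illustration:

```python
n = 0b0001                     # decimal 1
print(format(n << 1, "04b"))   # 0010: one shift left doubles it to 2
print(format(n << 2, "04b"))   # 0100: two shifts multiply by 4
print(0b1010 >> 1)             # 5: one shift right halves decimal 10
```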
The simplest computer circuitry performs additions in a serial manner, that is, one operation at a time. This is obviously a slow way to do business, and by adding components so that there are enough to handle the digits in each row simultaneously the arithmetic operation is greatly speeded. This is called parallel addition. Both operations are done by parts understandably called adders, which are further broken down into half-adders.
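In code, a half-adder is just two logical operations on a pair of bits. This sketch (the names are ours, and Python’s bitwise operators stand in for physical gates) also wires two half-adders and an OR into a full adder that accepts an incoming carry:

```python
def half_adder(a: int, b: int):
    """Sum bit is 1 when the inputs differ; carry bit is 1 when both are 1."""
    return a ^ b, a & b                    # (sum, carry)

def full_adder(a: int, b: int, carry_in: int):
    """Two half-adders plus an OR gate handle the incoming carry."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2                     # (sum, carry out)

print(full_adder(1, 1, 1))                 # (1, 1): 1 + 1 + 1 is binary 11
```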
There are refinements to basic binary computation, of course. By using a decimal point, or perhaps a binary point, fractions can be expressed in binary code. If the position to the left of the point is taken as 2 to the zero power, then the position just to the right of the point is logically 2 to the minus one, which if you remember your mathematics you’ll recognize as one-half. Two to the minus two is then one-fourth, and so on. While we are on the subject of the decimal point, sophisticated computers do what is called “floating-point arithmetic,” in which the point can be moved back and forth at will for much more rapid arithmetical operations.
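The places to the right of the binary point follow the same positional rule, as a one-line check shows:

```python
# 0.101 in binary: 1*(1/2) + 0*(1/4) + 1*(1/8)
bits = "101"
print(sum(int(bit) * 2**-(i + 1) for i, bit in enumerate(bits)))   # 0.625
```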
No matter how many adders we put together and how big the computer eventually gets, it is still operating in what seems an awkward fashion. It is counting its fingers, of which it has two. The trick is in the speed of this counting, so fast that one million additions a second is now a commonplace. Try that for size in your own decimally trained head and you will appreciate the computer a little more.
The Logical Algebra
We come now to another most important reason for the effectiveness of the digital computer: the reason that makes it the “logical” choice for not only mathematics but thinking as well. For the digital computer and logic go hand in hand.
Logic, says Webster, is “the science that deals with canons and criteria of validity in thought and demonstration.” He admits to the ironic perversion of this basic definition; for example, “artillery has been called the ‘logic of kings,’” a kind of logic to make “argument useless.” Omar Khayyám had a similar thought in mind when he wrote in The Rubáiyát,
The grape that can with logic absolute,
The Two-and-Seventy Sects confute.
Other poets and writers have had much to say on the subject of logic through the years, words of tribute and words of warning. Some, like Lord Dunsany, counsel moderation even in our logic. “Logic, like whiskey,” he says, “loses its beneficial effect when taken in too large quantities.” And Oliver Wendell Holmes asks,
Have you heard of the wonderful one-hoss shay
That was built in such a logical way
It ran a hundred years to the day?
The words logic and logical are much used and abused in our language, and there are all sorts of logic, including that of women, which seems to be a special case. For our purposes here it is best to stick to the primary definition in the dictionary, that of validity in thought and demonstration.
Symbolic logic, a term that still has an esoteric and almost mystical connotation, is perhaps mysterious because of the strange symbology used. We are used to reasoning in words and phrases, and the notion that truth can be spelled out in algebraic or other notation is hard to accept unless we are mathematicians to begin with.
We must go far back in history for the beginnings of logic. Aristotelian logic is well known and of importance even though the old syllogisms have been found to be less powerful than their inventors thought. Modern logicians have reduced the 256 possible permutations to 15 valid forms, and even these are not as useful as the newer kind of logic that has since come into being.
Leibniz is conceded to be the father of modern symbolic logic, though he probably neither recognized what he had done nor used it effectively. He did come up with the idea of two-valued logic, and the cosmological notion of 1 and 0, or substance and nothingness. In his Characteristica Universalis he was groping for a universal language for science; a second work, Calculus Ratiocinator, was an attempt to implement this language. Incidentally, Leibniz was not yet twenty years old when he formulated his logic system.
Unfortunately, two centuries passed before the importance of his findings was recognized and an explanation of their potential begun. In England, Sir William Hamilton began to refine the old syllogisms, and is known for his “quantification of the predicate.” Augustus De Morgan, also an Englishman, moved on from the quantification of the predicate to formulate the thirty-two rules, or propositions, that result from it. The stage was now set for the man who has come to be known as the father of symbolic logic. His name was George Boole, inventor of Boolean algebra.
In 1854, Boole published “An Investigation of the Laws of Thought on which are Founded the Mathematical Theories of Logic and Probabilities.” In an earlier pamphlet, Boole had said, “The few who think that there is that in analysis which renders it deserving of attention for its own sake, may find it worth while to study it under a form in which every equation can be solved and every solution interpreted.” He was a mild, quiet man, though nonconformist religiously and socially, and his “Investigation” might as well have been dropped down a well for all the immediate splash it made in the scientific world. It was considered only academically interesting, and copies of it gathered dust for more than fifty years.
Not until 1910 was Boole’s logical calculus, or “algebra” as it came to be known, given its true importance. Then Alfred North Whitehead and Bertrand Russell made the belated acknowledgment in their Principia Mathematica, and Russell has said, “Pure mathematics was discovered by Boole, in a work he called ‘The Laws of Thought.’” While his praise is undoubtedly exaggerated, it is interesting to note the way in which mathematics and thought are considered inseparable. In 1928 the first text on the new algebra, the work of Hilbert and Ackermann, Mathematical Logic, was published, first in German and then in English.
What was the nature of this new tool for better thinking that Boole had created? Its purpose was to make possible not merely precise, but exact analytical thought. Historically we think in words, and these words have become fraught with semantic ditches, walls, and traps. Boole was thinking of thought and not mathematics or science principally when he developed his logic algebra, and it is indicative that symbolic logic today is often taught by the philosophy department in the university.
Russell had hinted at the direction in which symbolic logic would go, and it was not long before the scientist as well as the mathematician and logician did begin to make use of the new tool. One pioneer was Shannon, mentioned in the chapter on history. In 1938, Claude Shannon was a student at M.I.T. He would later make scientific history with his treatise on and establishment of a new field called information theory; his early work was titled “A Symbolic Analysis of Relay and Switching Circuits.” In it he showed that electrical and electronic circuitry could best be described by means of Boolean logic. Shannon’s work led to great strides in improving telephone switching circuits and it also was of much importance to the designer of digital computers. To see why this is so, we must now look into Boolean algebra itself. As we might guess, it is based on a two-valued logic, a true-false system that exactly parallels the on-off computer switches we are familiar with.
The Biblical promise “Ye shall know the truth, and the truth shall make you free” applies to our present situation. The best way to get our feet wet in the Boolean stream is to learn its so-called “truth tables.”
Conjunctive Boolean Operation
A and B equals C (A · B = C)

| A | B | C |
| 0 | 0 | 0 |
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 1 | 1 |
Disjunctive Boolean Operation
A or B equals C (A ∨ B = C)

| A | B | C |
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 1 | 1 |
In the truth tables, 1 symbolizes true, 0 is false. In the conjunctive AND operation, we see that only if both A and B are true is C true. In the disjunctive OR operation, if either A or B is true, then C is also true. From this seemingly naïve and obvious base, the entire Boolean system is built, and digital computers can perform not only complex mathematical operations, but logical ones as well, including the making of decisions on a purely logical basis.
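Generating the two tables mechanically is essentially what the computer’s gate circuits do. A brief sketch using Python’s bitwise operators:

```python
print("A B | A AND B | A OR B")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", a & b, "      |", a | b)   # conjunction, disjunction
```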
Before going on to the few additional conditions and combinations that complete the algebra, let’s study some analogies that will make clear the AND/OR principles of operation. We can think of AND as two bridges in sequence over two rivers. We can reach our destination only if both bridges are working. However, suppose there are two parallel bridges and only one river. We can then cross if either or both of the bridges is working. A closer example is that of electrical switches. Current will flow through our AND circuit if—and only if—both switches are closed. When the switches are in parallel—an OR circuit—current will flow if either, or both, are closed.
The truth tables resemble the bridge or switch arrangements. We can proceed across the line of 1’s and 0’s in the first table only if both switches are closed. The symbol 1 means that the switch is closed, so we can cross only the bottom line. In the second table, we are told we can proceed across the line if either switch is closed. Thus we can cross lines 2, 3, and 4. We can use many symbols in our two-valued system.
Symbols for the two values:

| Bridge | No Bridge |
| Power | No Power |
| 1 | 0 |
| True | False |
A little imagination suggests a logic computer of sorts with one switch, a battery, and a light bulb. Suppose we turn on the switch when we drive into our garage. A light in the hallway then indicates that the car is available. By using two switches we can indicate that a second car is also in the garage; or that either of them is, simply by choosing between AND logic and OR logic. Childish as this seems, it is the principle of even our most complex thinking processes. You will remember that the brain is considered a digital computer, since neurons can only be on or off. All it takes is 10 billion neuron switches!
[Illustration: AND and OR gates in series. Switches 1 and 2, plus 3 or 4, are needed to light the bulb. (Remington Rand UNIVAC)]
In addition to the connectives AND and OR, Boolean algebra makes use of the principle of negation. This is graphically illustrated thus:
| Original | Negation |
| A | Ā |
| 1 | 0 |
| 0 | 1 |
The negation device used in computer circuitry is called an inverter, since it changes its input from a 1 to a 0, or vice versa. The usefulness of such an element is obvious when we remember the computer trick of subtracting by adding complements. The inverter circuit used with a code like the excess-3 readily forms these complements.
Further sophistication of the basic Boolean forms leads to units other than the AND and OR gates. NOT, NOR, and exclusive-OR forms are also possible. In the last of these, there is an output if one and only one input is present. The NOR circuit is interesting in that it was made possible with the introduction of the transistor; the vacuum tube does not permit this configuration.
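These further forms are simple combinations of the basic ones, as a short sketch (our own constructions) shows:

```python
def NOT(a):                                   # the inverter
    return 1 - a

def NOR(a, b):                                # OR followed by an inverter
    return NOT(a | b)

def XOR(a, b):                                # output only if exactly one input
    return (a | b) & NOT(a & b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "| NOR:", NOR(a, b), " exclusive-OR:", XOR(a, b))
```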
[Illustration: The functions of two binary variables. (Computer Control Co.)]
Present-day symbolic logic is not the pure Boolean as presented back in 1854. Boole’s OR was the exclusive, one and only one, type. Today the logician generally assumes the inclusive, either-or-both, connotation. The logic has also been amplified, using the commutative, associative, and distributive laws much like those of conventional algebra. We are indebted to De Morgan for most of this work, showing that A and B equals B and A; that A and (B and C) equals (A and B) and C; and so on. While these seem intuitively true, the implications are nonetheless of great importance both in pure logic and its practical use in circuitry.
Boolean equations thus metamorphose from symbols on paper into actual hardware. The implication of importance is that logic applies equally well whether we are making a qualifying statement such as “A man must have strength and courage to win a barehanded fight with a lion,” or wiring a defensive missile so that it will fire only if a target is within range and is unfriendly.
In the early period of computer design the engineer was faced with the problem of building his own switches and gates. Today many companies offer complete “packaged” components—AND gates, OR gates, and the other configurations. This is the modular approach to building a computer and the advantages are obvious. The designer can treat the components simply as “black boxes” that will respond in a prescribed way to certain input conditions. If he wants, the engineer can go a step further and buy a ready-built logic panel consisting of many components of different types. All he need do to form various logic circuits is to interconnect the proper components with plug-in leads. This brings us to the point of learning what we can do with these clever gates and switches now that we have them available and know something about the way they work.
We talked about the computer adder circuit earlier in this chapter. It is made up of two half-adders, remember, with perhaps an additional OR gate, flip-flop, etc. Each half-adder is composed of two AND gates and an OR gate. So we have put together several basically simple parts and the result is a piece of equipment that will perform addition at a rate to make our heads swim.
There are other things we can do with Boolean logic besides arithmetic. A few gates will actuate a warning signal in a factory in case either of two ventilators is closed and the temperature goes up beyond a safe point; or in case both vents are closed at the same time. We can build a logic computer that will tell us when three of four assembly lines are shut down at the same time, and also which three they are.
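The factory alarm boils down to a single Boolean expression. A sketch, with signal names of our own choosing:

```python
def warning(vent1_closed, vent2_closed, temperature_high):
    """Alarm if either vent is closed while the temperature is up,
    or if both vents are closed at the same time."""
    return (((vent1_closed or vent2_closed) and temperature_high)
            or (vent1_closed and vent2_closed))

print(warning(True, False, True))    # True: one vent shut and it is hot
print(warning(True, True, False))    # True: both vents shut
print(warning(False, False, True))   # False: heat alone, with vents open
```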
[Illustration: Electronic computers are built up of many “building blocks” like this one. (General Electric Co., Computer Dept.)]
Logic problems abound in puzzle books, and many of us spend sleepless nights trying to solve them in our heads. An example is the “Farnsworth Car Pool” problem. Rita Farnsworth asks her husband if someone in his car pool can drive for him tomorrow so that she may use the car. Joe Farnsworth replies, “Well, when I asked Pete if he would take my turn he said he was flying to Kansas City today, but he’d be glad to drive tomorrow if he didn’t have to stay over and that his wife has been staying home lately and he will drive her car if she doesn’t go to work. Oscar said that since his own car is due back from the garage tomorrow he can drive it even if his wife does use hers, provided the garage gets his back to him. But if this cold of mine gets any worse I’m going to stay home even if those fellows have to walk to work, so you can certainly have the car if I don’t go to work.” This dialogue of Joe’s confuses Rita and most of us are in the same state.
[Illustration: Testing an assembled digital computer. (Autonetics Division, North American Aviation, Inc.)]
The instruction manual for BRAINIAC, a do-it-yourself computer that sells for a few dollars, gives a simple wiring diagram for solving Rita’s dilemma. Electrically the problem breaks down into three OR gates and one AND gate. All Mrs. Farnsworth has to do is set in the conditions and watch the indicator light. If it glows, she gets the car!
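The BRAINIAC wiring itself is not reproduced here, but one plausible reading of Joe’s speech (our own interpretation, with invented variable names; the actual diagram may group the conditions differently) shows how a handful of gates settles the matter:

```python
def rita_gets_car(pete_stays_over, petes_wife_works,
                  oscars_car_back_from_garage, joes_cold_worse):
    """One plausible encoding of Joe's rambling answer (variable names ours)."""
    pete_can_drive = (not pete_stays_over) and (not petes_wife_works)
    oscar_can_drive = oscars_car_back_from_garage
    joe_stays_home = joes_cold_worse
    return pete_can_drive or oscar_can_drive or joe_stays_home

print(rita_gets_car(True, True, False, False))   # False: no one can spare a car
print(rita_gets_car(True, True, True, False))    # True: Oscar's car is back
```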
These are of course simple tasks; it might pay simply to hire a man to operate the vents, and to ride to work on the bus when the car pool gets complicated. But even with relatively few variables, decision-making can quickly become a task requiring a digital computer operating with Boolean logic principles.
[Illustration: Problem in logic reduced to electrical circuits. (Science Materials Center)]
The Smith-Jones-Robinson type of problem, in which we must find who does what and lives where, is tougher than the car pool—tough enough that it is sometimes used in aptitude tests. Lewis Carroll carried this form of logical puzzler to complicated extremes involving not just three variables but a dozen. To show how difficult such a problem is, an IBM 704 required four minutes to solve a Carroll puzzle as to whether any magistrates indulge in snuff-taking. The computer did it the easy way, without printing out a complete “truth table” for the problem—the method a man would have to use to investigate all the combinations of variables. That job would have taken 13 hours! While the question of the use of snuff is perhaps important only to tobacconists and puzzle-makers, our technical world today does encounter similar problems that cannot practically be solved without a high-speed computer. A recent hypothetical case discussed in an electronics journal illustrates this well.
A missile system engineer has the problem of modifying a Nike-Ajax launching site so that it can be used by the new Nike-Hercules missile. He must put in switching equipment so that a remote control center can choose either an Ajax system, or one of six Hercules systems. To complicate things, the newer Hercules can be equipped with any of three different warheads and fly either of two different missions. When someone at the control center pushes a button, the computer must know immediately which if any of the missiles are in acceptable condition to be fired.
This doesn’t sound like too big a problem. However, since there are twelve on-off signals to be considered, and since each has two possible states, there are 4,096 possible missile combinations. Not all these are probable, of course, but there is still sufficient variation to make it humanly impossible to check all of them and close a firing switch in the split second the control center can allow.
The answer lies in putting Boolean algebra on the job, with a system of gates and inverters capable of juggling the multiplicity of combinations. Then when the word comes requesting a missile launch, the computer handles the job in microseconds without straining itself unduly.
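The scale of the job is easy to confirm: twelve two-state signals give 2¹² combinations, and the readiness decision is one more Boolean expression over those signals. A toy sketch under assumptions of our own (the real firing logic is of course far richer than this):

```python
print(2 ** 12)    # 4096 possible combinations of twelve on-off signals

def hercules_ready(power_on, warhead_armed, target_in_range, target_unfriendly):
    """Toy readiness check over a few of the twelve signals (names ours)."""
    return power_on and warhead_armed and target_in_range and target_unfriendly

print(hercules_ready(True, True, True, False))   # False: never fire on a friend
```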
Just as Shannon pointed out twenty-five years ago, switching philosophy can be explained best by Boolean logic, and the method can be used not only to implement a particular circuit, but also to actually design the circuit in the first place. A simple example of this can be shown with the easy-to-understand AND and OR gates. A technician experimenting with an AND gate finds that if he simply reverses the direction of current, he changes the gate into an OR gate. This might come as a surprise to him if he is unfamiliar with Boolean logic, but a logician with no understanding of electrical circuits could predict the result simply by studying the truth tables for AND and OR.
Reversing the polarity is equivalent to changing a 1 to a 0 and vice versa. If we do this in the AND gate table, we should not be surprised to find that the result looks exactly like the OR table! It acts like it too, as the technician found out.
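The technician’s discovery can be checked without any wiring at all. Flipping every 1 and 0 in the AND table, inputs and output alike, is De Morgan’s duality at work, and a brief sketch confirms that the OR table appears:

```python
for a in (0, 1):
    for b in (0, 1):
        flipped = 1 - ((1 - a) & (1 - b))   # complement inputs and output of AND
        assert flipped == a | b             # ...and the OR table appears
print("AND with every polarity reversed behaves exactly like OR")
```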
Boolean logic techniques can be applied to existing circuits to improve and/or simplify them. Problems as simple as wiring a light so that it can be turned on and off from two or more locations, and those as complex as automating a factory, yield readily to the simple rules George Boole laid down more than a hundred years ago.
Watching a high-speed electronic digital computer solve mathematical problems, or operate an industrial control system with speed and accuracy impossible for human monitors, it is difficult to believe that the whole thing hinges on something as simple as switches that must be either open or closed. If Leibniz were alive, he could well take this as proof of his contention that there was cosmological significance in the concept of 1 and 0. Maybe there is, after all!
[Illustration: “Luckily I brought along a ‘loaner’ for you to use while I repair your computer.” (Industrial Electronic Engineering & Maintenance)]
“Whatever that be which thinks, understands, wills, and acts, it is something celestial and divine.”
—Cicero