Addition:          1100 (12)
                 + 0111 ( 7)
                 -----------
                  10011 (19)

Subtraction:       1010 (10)
                 - 0010 ( 2)
                 -----------
                   1000 ( 8)

Multiplication:    0110 (6)
                 × 0011 (3)
                 -----------
                   0110
                  0110
                 0000
                0000
                ------------
                  10010 (18)

Division:    1010 ÷ 10 = 0101   (10 ÷ 2 = 5)
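The four worked examples above can be checked in a few lines of Python, using the built-in base-2 conversion `int(s, 2)`:

```python
def binval(s):
    """Convert a binary string such as '1100' to an integer."""
    return int(s, 2)

# Addition: 1100 + 0111 = 10011  (12 + 7 = 19)
assert binval("1100") + binval("0111") == binval("10011") == 19

# Subtraction: 1010 - 0010 = 1000  (10 - 2 = 8)
assert binval("1010") - binval("0010") == binval("1000") == 8

# Multiplication: 0110 x 0011 = 10010  (6 x 3 = 18)
assert binval("0110") * binval("0011") == binval("10010") == 18

# Division: 1010 / 10 = 0101  (10 / 2 = 5)
assert binval("1010") // binval("10") == binval("0101") == 5

print("all four examples check out")
```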

The rules should be obvious from these examples. Just as we add 5 and 5 in decimal to get 0 with 1 to carry, we add 1 and 1 in binary and get 0 with 1 to carry. Adding 1 and 0 gives 1; 0 and 0 gives 0. Multiplying 1 by 1 gives 1, 1 by 0 gives 0, and 0 by 0 gives 0. One divides into 1 once, and into 0 no times. Thus we can manipulate binary numbers in just the manner we are accustomed to with decimal ones.

The computer does not even need to know this much. All it is concerned with is addition: 1 plus 1 gives 0 and 1 to carry; 1 plus 0 gives 1; and 0 plus 0 gives 0. This is all it knows, and all it needs to know. We have described how it subtracts by adding complements. It can multiply by repetitive additions, or more simply, by shifting the binary number to the left. Thus, 0001 becomes 0010 in one shift, and 0100 in two shifts, doubling each time. This is of course just the way we do it in the decimal system. Shifting to the right divides by two in the binary system.
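Both tricks in this paragraph, shifting and subtraction by complements, can be tried directly, since Python's `<<` and `>>` operators shift binary numbers just as described (a small illustration, with a four-bit register assumed for the complement step):

```python
# Shifting left doubles a binary number; shifting right halves it.
x = 0b0001           # 1
print(bin(x << 1))   # one shift:  0010 (2)
print(bin(x << 2))   # two shifts: 0100 (4)

y = 0b1010           # 10
print(bin(y >> 1))   # one right shift: 0101 (5)

# Subtraction by adding the complement, in a 4-bit register:
# 10 - 2 becomes 10 + (16 - 2) = 24; dropping the carry out of
# the fourth bit (masking to 4 bits) leaves 8.
print((0b1010 + (0b10000 - 0b0010)) & 0b1111)  # 8
```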

The simplest computer circuitry performs additions in a serial manner, that is, one digit position at a time. This is obviously a slow way to do business, and by adding components so that there are enough to handle all the digits in a row simultaneously, the arithmetic operation is greatly speeded. This is called parallel addition. Both kinds of addition are done by circuits understandably called adders, which can be further broken down into half-adders.
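The half-adders mentioned above can be sketched with Boolean operations: a half-adder produces a sum bit (exclusive-or) and a carry bit (and), two half-adders plus an OR make a full adder, and chaining full adders gives a serial-style adder that works one digit position at a time. A minimal sketch in Python (the function names are ours, not standard terms beyond "half adder" and "full adder"):

```python
def half_adder(a, b):
    """Sum bit is XOR of the inputs; carry bit is AND."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Two half-adders plus an OR combine three input bits."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def serial_add(x_bits, y_bits):
    """Add two equal-length bit lists (least significant bit first),
    one position at a time, carrying as it goes."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # the final carry becomes the top bit
    return out

# 1100 (12) + 0111 (7), written least significant bit first:
print(serial_add([0, 0, 1, 1], [1, 1, 1, 0]))  # [1, 1, 0, 0, 1], i.e. 10011 = 19
```

A parallel adder uses the same full adders but provides one for every digit position at once, rather than reusing a single one.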

There are refinements to basic binary computation, of course. By using a decimal point, or perhaps a binary point, fractions can be expressed in binary code. If the position to the left of the point is taken as 2 to the zero power, then the position just to the right of the point is logically 2 to the minus one, which if you remember your mathematics you’ll recognize as one-half. Two to the minus two is then one-fourth, and so on. While we are on the subject of the decimal point, sophisticated computers do what is called “floating-point arithmetic,” in which the point can be moved back and forth at will for much more rapid arithmetical operations.
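The binary-point rule above is easy to verify: each position to the right of the point is worth half the one before it. A short illustration, with the floating-point pair shown only as a sketch of the idea of a movable point:

```python
# 0.101 in binary: one half, no fourths, one eighth.
value = 1 * 2**-1 + 0 * 2**-2 + 1 * 2**-3
print(value)  # 0.625

# The same number kept as a "floating point" pair: the digits 101
# with the point understood to sit three places to the left.
digits, point_shift = 0b101, -3
print(digits * 2**point_shift)  # 0.625
```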

No matter how many adders we put together and how big the computer eventually gets, it is still operating in what seems an awkward fashion. It is counting its fingers, of which it has two. The trick is in the speed of this counting, so fast that one million additions a second is now a commonplace. Try that for size in your own decimally trained head and you will appreciate the computer a little more.

The Logical Algebra

We come now to another most important reason for the effectiveness of the digital computer: the reason that makes it the “logical” choice not only for mathematics but for thinking as well. For the digital computer and logic go hand in hand.

Logic, says Webster, is “the science that deals with canons and criteria of validity in thought and demonstration.” He admits to the ironic perversion of this basic definition; for example, “artillery has been called the ‘logic of kings,’” a kind of logic to make “argument useless.” Omar Khayyám had a similar thought in mind when he wrote in The Rubáiyát,