We are well accustomed to decimal notation in which we use 10 decimal digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and write them in combinations to designate decimal numbers. In binary notation we use two binary digits 0, 1 and write them in combinations to designate binary numbers. For example, the first 17 numbers, from 0 to 16 in the decimal notation, correspond with the following numbers in binary notation:

Decimal   Binary        Decimal   Binary
   0          0            9       1001
   1          1           10       1010
   2         10           11       1011
   3         11           12       1100
   4        100           13       1101
   5        101           14       1110
   6        110           15       1111
   7        111           16      10000
   8       1000
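
As an illustration (not part of the original supplement), the following short Python sketch reproduces this table by repeated division by 2; the remainders, read in reverse order, are the binary digits. The function name to_binary is our own.

    def to_binary(n):
        """Write n in binary notation by collecting remainders on division by 2."""
        if n == 0:
            return "0"
        digits = ""
        while n > 0:
            digits = str(n % 2) + digits   # each remainder is the next binary digit, right to left
            n //= 2
        return digits

    for n in range(17):                    # the first 17 numbers, 0 through 16
        print(n, to_binary(n))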

In decimal notation, 101 means one times a hundred, no tens, and one. In binary notation, 101 means one times four, no twos, and one. The successive digits in a decimal number from right to left count 1, 10, 100, 1000, 10000, ...—successive powers of 10 (for this term, see the end of this supplement). The successive digits in a binary number from right to left count 1, 2, 4, 8, 16, ...—powers of 2.
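
To make the place values concrete, here is a small Python sketch (our illustration, not the supplement's) that evaluates the written digits 1, 0, 1 first against powers of 10 and then against powers of 2:

    digits = [1, 0, 1]                      # the written digits, left to right

    # Decimal reading: 1*100 + 0*10 + 1*1 = 101
    decimal_value = sum(d * 10**i for i, d in enumerate(reversed(digits)))

    # Binary reading: 1*4 + 0*2 + 1*1 = 5
    binary_value = sum(d * 2**i for i, d in enumerate(reversed(digits)))

    print(decimal_value, binary_value)      # prints 101 5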

The decimal notation is convenient when equipment for computing has ten positions, like the fingers of a man, or the positions of a counter wheel. The binary notation is convenient when equipment for computing has just two positions, like “yes” or “no,” or current flowing or no current flowing.

Addition, subtraction, multiplication, and division can all be carried out unusually simply in binary notation. The addition table is simple and consists only of four entries.

+     0     1
0     0     1
1     1    10

The multiplication table is also simple and contains only four entries.

×     0     1
0     0     0
1     0     1
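
These two four-entry tables are all that need be stored to combine single binary digits. For illustration (ours, in Python), here they are written out as lookup tables and printed entry by entry; the dictionary names are our own.

    # The addition and multiplication tables above, as Python dictionaries.
    ADD_TABLE = {(0, 0): "0", (0, 1): "1", (1, 0): "1", (1, 1): "10"}
    MUL_TABLE = {(0, 0): "0", (0, 1): "0", (1, 0): "0", (1, 1): "1"}

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", ADD_TABLE[(a, b)], "   ", a, "×", b, "=", MUL_TABLE[(a, b)])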

Suppose that we add in binary notation 101 and 1001:

Binary Addition    Check
        101          5
     + 1001          9
     ------        ----
       1110         14
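
To check this by machine (our illustration, not the supplement's), here is a column-by-column adder in Python driven by the four-entry addition table above, carrying into the next column as needed; the names add_binary and ADD are our own.

    # The addition table above, written as (digit, digit) -> (sum digit, carry to the next column).
    ADD = {(0, 0): (0, 0), (0, 1): (1, 0), (1, 0): (1, 0), (1, 1): (0, 1)}

    def add_binary(a, b):
        """Add two binary numbers given as digit strings, working right to left."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        result, carry = "", 0
        for da, db in zip(reversed(a), reversed(b)):
            s, c1 = ADD[(int(da), int(db))]   # look up the two digits in the table
            s, c2 = ADD[(s, carry)]           # add in the carry from the column before
            result = str(s) + result
            carry = c1 or c2                  # at most one of these carries can be 1
        return ("1" + result) if carry else result

    print(add_binary("101", "1001"))          # prints 1110, that is, 5 + 9 = 14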