CODE: The Hidden Language of Computer Hardware and Software
Charles Petzold

What do flashlights, the British invasion, black cats, and seesaws have to do with computers? In CODE, they show us the ingenious ways we manipulate language and invent new means of communicating with each other. And through CODE, we see how this ingenuity and our very human compulsion to communicate have driven the technological innovations of the past two centuries.

Using everyday objects and familiar language systems such as Braille and Morse code, author Charles Petzold weaves an illuminating narrative for anyone who's ever wondered about the secret inner life of computers and other smart machines.
Chapter Six
The relay is a remarkable device. It’s a switch, surely, but a switch that’s turned on and off not by human hands but by a current. You could do amazing things with such devices. You could actually assemble much of a computer with them.
Yes, this relay thing is much too sweet an invention to leave sitting around the telegraphy museum. Let’s grab one and stash it inside our jacket and walk quickly past the guards. This relay will come in very handy. But before we can use it, we’re going to have to learn to count.

Chapter Seven
Our Ten Digits
The idea that language is merely a code seems readily acceptable. Many of us at least attempted to learn a foreign language in high school, so we're willing to acknowledge that the animal we call a cat in English can also be a gato, chat, Katze, or кошка.
Numbers, however, seem less culturally malleable. Regardless of the language we speak and the way we pronounce the numbers, just about everybody we're likely to come in contact with on this planet writes them the same way:

1 2 3 4 5 6 7 8 9 10
Isn’t mathematics called “the universal language” for a reason?
Numbers are certainly the most abstract codes we deal with on a regular basis. When we see the number

3

we don't immediately need to relate it to anything. We might visualize 3 apples or 3 of something else, but we'd be just as comfortable learning from context that the number refers to a child's birthday, a television channel, a hockey score, or the number of cups of flour in a cake recipe. Because our numbers are so abstract to begin with, it's more difficult for us to understand that this number of apples
[illustration: three apples]

doesn’t necessarily have to be denoted by the symbol
3
Much of this chapter and the next will be devoted to persuading ourselves that this many apples
can also be indicated by writing
11
Let's first dispense with the idea that there's something inherently special about the number ten. That most civilizations have based their number systems around ten (or sometimes five) isn't surprising. From the very beginning, people have used their fingers to count. Had our species developed possessing eight or twelve fingers, our ways of counting would be a little different. It's no coincidence that the word digit can refer to fingers or toes as well as numbers or that the words five and fist have similar roots.
So in that sense, using a base-ten, or decimal (from the Latin for ten), number system is completely arbitrary. Yet we endow numbers based on ten with an almost magical significance and give them special names. Ten years is a decade; ten decades is a century; ten centuries is a millennium. A thousand thousands is a million; a thousand millions is a billion. These numbers are all powers of ten:
10^1 = 10
10^2 = 100
10^3 = 1000 (thousand)
10^4 = 10,000
10^5 = 100,000
10^6 = 1,000,000 (million)
10^7 = 10,000,000
10^8 = 100,000,000
10^9 = 1,000,000,000 (billion)
Most historians believe that numbers were originally invented to count things, such as people, possessions, and transactions in commerce. For example, if someone owned four ducks, that might be recorded with drawings of four ducks:

[drawings of four ducks]

Eventually the person whose job it was to draw the ducks thought, "Why do I have to draw four ducks? Why can't I draw one duck and indicate that there are four of them with, I don't know, a scratch mark or something?"

[one duck] ////

And then there came the day when someone had 27 ducks, and the scratch marks got ridiculous:

[one duck] ///////////////////////////

Someone said, "There's got to be a better way," and a number system was born.
Of all the early number systems, only Roman numerals are still in common use. You find them on the faces of clocks and watches, used for dates on monuments and statues, for some page numbering in books, for some items in an outline, and, most annoyingly, for the copyright notice in movies. (The question "What year was this picture made?" can often be answered only if one is quick enough to decipher MCMLIII as the tail end of the credits goes by.)
Twenty-seven ducks in Roman numerals is
[one duck] XXVII
The concept here is easy enough: The X stands for 10 scratch marks and the V stands for 5 scratch marks.

The symbols of Roman numerals that survive today are

I V X L C D M
The I is a one. This could be derived from a scratch mark or a single raised finger. The V, which is probably a symbol for a hand, stands for five. Two V's make an X, which stands for ten. The L is a fifty. The letter C comes from the word centum, which is Latin for a hundred. D is five hundred. Finally, M comes from the Latin word mille, or a thousand.
Although we might not agree, for a long time Roman numerals were considered easy to add and subtract, and that's why they survived so long in Europe for bookkeeping. Indeed, when adding two Roman numerals, you simply combine all the symbols from both numbers and then simplify the result using just a few rules: Five I's make a V, two V's make an X, five X's make an L, and so forth.
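That combine-and-simplify procedure is mechanical enough to program. Here is a minimal Python sketch, assuming purely additive notation (IIII rather than IV, with no subtractive forms); the function name is invented for illustration:

    # Roman-numeral addition by combining symbols and simplifying.
    # Assumes purely additive notation (e.g., VIIII for 9, never IX).
    ORDER = "MDCLXVI"   # symbols from largest to smallest
    RULES = [("IIIII", "V"), ("VV", "X"), ("XXXXX", "L"),
             ("LL", "C"), ("CCCCC", "D"), ("DD", "M")]

    def add_roman(a, b):
        # Step 1: combine all the symbols from both numbers, largest first.
        combined = "".join(sorted(a + b, key=ORDER.index))
        # Step 2: apply the simplification rules until nothing more changes.
        changed = True
        while changed:
            changed = False
            for pattern, replacement in RULES:
                if pattern in combined:
                    combined = combined.replace(pattern, replacement)
                    combined = "".join(sorted(combined, key=ORDER.index))
                    changed = True
        return combined

    print(add_roman("XXVII", "XVI"))   # 27 + 16 = 43, printed as XXXXIII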

But multiplying and dividing Roman numerals is difficult. Many other early number systems (such as that of the ancient Greeks) are similarly inadequate for working with numbers in a sophisticated manner. While the Greeks developed an extraordinary geometry still taught virtually unchanged in high schools today, the ancient Greeks aren't known for their algebra.
The number system we use today is known as the Hindu-Arabic or Indo-Arabic. It's of Indian origin but was brought to Europe by Arab mathematicians. Of particular renown is the Persian mathematician Muhammed ibn-Musa al-Khwarizmi (from whose name we have derived the word algorithm) who wrote a book on algebra around A.D. 825 that used the Hindu system of counting. A Latin translation dates from A.D. 1120 and was influential in hastening the transition throughout Europe from Roman numerals to our present Hindu-Arabic system.
The Hindu-Arabic number system was different from previous number systems in three ways:
• The Hindu-Arabic number system is said to be positional, which means that a particular digit represents a different quantity depending on where it is found in the number. Where digits appear in a number is just as significant as (actually, more significant than) what the digits actually are. Both 100 and 1,000,000 have only a single 1 in them, yet we all know that a million is much larger than a hundred.
• Virtually all early number systems have something that the Hindu-Arabic system does not have, and that’s a special symbol for the number ten. In our number system, there’s no special symbol for ten.
• On the other hand, virtually all of the early number systems are missing something that the Hindu-Arabic system has, and which turns out to be much more important than a symbol for ten. And that’s the zero.
Yes, the zero. The lowly zero is without a doubt one of the most important inventions in the history of numbers and mathematics. It supports positional notation because it allows differentiation of 25 from 205 and 250. The zero also eases many mathematical operations that are awkward in nonpositional systems, particularly multiplication and division.
The whole structure of Hindu-Arabic numbers is revealed in the way we pronounce them. Take 4825, for instance. We say “four thousand, eight hundred, twenty-five.” That means
four thousands, eight hundreds, two tens, and five.

Our Ten Digits 51
Or we can write the components like this:
4825 = 4000 + 800 + 20 + 5
Or breaking it down even further, we can write the number this way:
4825 = 4 × 1000 +
       8 × 100 +
       2 × 10 +
       5 × 1
Or, using powers of ten, the number can be rewritten like this:
4825 = 4 × 10^3 +
       8 × 10^2 +
       2 × 10^1 +
       5 × 10^0
Remember that any number to the 0 power equals 1.
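The decomposition is easy to check by machine. A small Python sketch (illustrative only) rebuilds 4825 from its digits and powers of ten:

    # Rebuild a number from its decimal digits and powers of ten.
    digits = [4, 8, 2, 5]                     # the digits of 4825, left to right
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += digit * 10 ** position       # 5×10^0 + 2×10^1 + 8×10^2 + 4×10^3
    print(total)                              # prints 4825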
Each position in a multidigit number has a particular meaning, as shown in the following diagram. The seven boxes shown here let us represent any number from 0 through 9,999,999:

[diagram of seven boxes, labeled right to left:]
Number of ones
Number of tens
Number of hundreds
Number of thousands
Number of ten thousands
Number of hundred thousands
Number of millions

Each position corresponds to a power of ten. We don't need a special symbol for ten because we set the 1 in a different position and we use the 0 as a placeholder.
What’s also really nice is that fractional quantities shown as digits to the right of a decimal point follow this same pattern. The number 42,705.684 is
4 × 10,000 +
2 × 1000 +
7 × 100 +
0 × 10 +
5 × 1 +
6 ÷ 10 +
8 ÷ 100 +
4 ÷ 1000

This number can also be written without any division, like this:
4 × 10,000 +
2 × 1000 +
7 × 100 +
0 × 10 +
5 × 1 +
6 × 0.1 +
8 × 0.01 +
4 × 0.001
Or, using powers of ten, the number is

4 × 10^4 +
2 × 10^3 +
7 × 10^2 +
0 × 10^1 +
5 × 10^0 +
6 × 10^-1 +
8 × 10^-2 +
4 × 10^-3
Notice how the exponents go down to zero and then become negative numbers.
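A similar Python sketch (again, just an illustration) confirms that the negative powers of ten handle the fractional part:

    # Evaluate 42,705.684 as digits times powers of ten, including negative powers.
    terms = [(4, 4), (2, 3), (7, 2), (0, 1), (5, 0), (6, -1), (8, -2), (4, -3)]
    value = sum(digit * 10.0 ** power for digit, power in terms)
    print(value)   # prints 42705.684, up to ordinary floating-point rounding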
We know that 3 plus 4 equals 7. Similarly, 30 plus 40 equals 70, 300 plus 400 equals 700, and 3000 plus 4000 equals 7000. This is the beauty of the Hindu-Arabic system. When you add decimal numbers of any length, you follow a procedure that breaks down the problem into steps. Each step involves nothing more complicated than adding pairs of single-digit numbers. That’s why someone a long time ago forced you to memorize an addition table:
+ | 0  1  2  3  4  5  6  7  8  9
--+-----------------------------
0 | 0  1  2  3  4  5  6  7  8  9
1 | 1  2  3  4  5  6  7  8  9 10
2 | 2  3  4  5  6  7  8  9 10 11
3 | 3  4  5  6  7  8  9 10 11 12
4 | 4  5  6  7  8  9 10 11 12 13
5 | 5  6  7  8  9 10 11 12 13 14
6 | 6  7  8  9 10 11 12 13 14 15
7 | 7  8  9 10 11 12 13 14 15 16
8 | 8  9 10 11 12 13 14 15 16 17
9 | 9 10 11 12 13 14 15 16 17 18

Find the two numbers you wish to add in the top row and the left column. Follow down and across to get the sum. For example, 4 plus 6 equals 10.
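The column-by-column procedure itself can be written down mechanically. Here is an illustrative Python sketch in which each step uses nothing more than a single-digit sum and a carry:

    # Multidigit addition using only single-digit sums and a carry,
    # the way the procedure is taught with the addition table.
    def add_decimal(a, b):
        a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
        carry, result = 0, ""
        for da, db in zip(reversed(a), reversed(b)):
            s = int(da) + int(db) + carry         # one table lookup, plus the carry
            result = str(s % 10) + result         # the digit to put down
            carry = s // 10                       # the carry for the next column
        return (str(carry) if carry else "") + result

    print(add_decimal("4825", "1175"))   # prints 6000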
Similarly, when you need to multiply two decimal numbers, you follow a somewhat more complicated procedure but still one that breaks down the problem so that you need do nothing more complex than adding or multiplying single-digit decimal numbers. Your early schooling probably also entailed memorizing a multiplication table:
× | 0  1  2  3  4  5  6  7  8  9
--+-----------------------------
0 | 0  0  0  0  0  0  0  0  0  0
1 | 0  1  2  3  4  5  6  7  8  9
2 | 0  2  4  6  8 10 12 14 16 18
3 | 0  3  6  9 12 15 18 21 24 27
4 | 0  4  8 12 16 20 24 28 32 36
5 | 0  5 10 15 20 25 30 35 40 45
6 | 0  6 12 18 24 30 36 42 48 54
7 | 0  7 14 21 28 35 42 49 56 63
8 | 0  8 16 24 32 40 48 56 64 72
9 | 0  9 18 27 36 45 54 63 72 81
What’s best about the positional system of notation isn’t how well it works, but how well it works for counting systems not based on ten. Our number system isn’t necessarily appropriate for everyone. One big problem with our base-ten system of numbers is that it doesn’t have any relevance for cartoon characters. Most cartoon characters have only four fingers on each hand (or paw), so they prefer a number system that’s based on eight. Interestingly enough, much of what we know about decimal numbering can be applied to a numbering system more appropriate for our friends in cartoons.

Chapter Eight
Alternatives to Ten
Ten is an exceptionally important number to us humans. Ten is the number of fingers and toes most of us have, and we certainly prefer to have all ten of each. Because our fingers are convenient for counting, we humans have adapted an entire number system that's based on the number 10.
As I mentioned in the previous chapter, the number system that we use is called base ten, or decimal. The number system seems so natural to us that it's difficult at first to conceive of alternatives. Indeed, when we see the number 10 we can't help but think that this number refers to this many ducks:
10 = [drawing of ten ducks]

But the only reason that the number 10 refers to this many ducks is that this many ducks is the same as the number of fingers we have. If human beings had a different number of fingers, the way we counted would be different, and 10 would mean something else. That same number 10 could refer to this many ducks:

[eight ducks]

or this many ducks:

[four ducks]

or even this many ducks:

[two ducks]
When we get to the point where 10 means just two ducks, we’ll be ready to examine how switches, wires, lightbulbs, and relays (and by extension, computers) can represent numbers.
What if human beings had only four fingers on each hand, like cartoon characters? We probably never would have thought to develop a number system based on ten. Instead, we would have considered it normal and natural and sensible and inevitable and incontrovertible and undeniably proper to base our number system on eight. We wouldn't call this a decimal number system. We'd call it an octal number system, or base eight.
If our number system were organized around eight rather than ten, we wouldn’t need the symbol that looks like this:
9
Show this symbol to any cartoon character and you’ll get the response, “What’s that? What’s it for?” And if you think about it a moment, we also wouldn’t need the symbol that looks like this:
8
In the decimal number system, there's no special symbol for ten, so in the octal number system there's no special symbol for eight.

The way we count in the decimal number system is 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and then 10. The way we count in the octal number system is 0, 1, 2, 3, 4, 5, 6, 7, and then what? We've run out of symbols. The only thing that makes sense is 10, and that's correct. In octal, the next number after 7 is 10. But this 10 doesn't mean the number of fingers that humans have. In octal, 10 refers to the number of fingers that cartoon characters have.
We can continue counting on our four-toed feet:
11  12  13  14  15  16  17  20
When you're working with number systems other than decimal, you can avoid some confusion if you pronounce a number like 10 as one zero. Similarly, 13 is pronounced one three and 20 is pronounced two zero. To really avoid confusion, we can say two zero base eight or two zero octal.
Even though we've run out of fingers and toes, we can still continue counting in octal. It's basically the same as counting in decimal except that we skip every number that has an 8 or a 9 in it. And of course, the actual numbers refer to different quantities:
0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 20, 21, 22, 23, 24, 25, 26, 27, 30, 31, 32, 33, 34, 35, 36, 37, 40, 41, 42, 43, 44, 45, 46, 47, 50, 51, 52, 53, 54, 55, 56, 57, 60, 61, 62, 63, 64, 65, 66, 67, 70, 71, 72, 73, 74, 75, 76, 77, 100...

That last number is pronounced one zero zero. It’s the number of fingers that cartoon characters have, multiplied by itself.
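If you'd like to see the skip-the-8s-and-9s pattern confirmed, Python can print octal numerals directly; this quick check is illustrative only:

    # Print the first 65 numbers in octal; none of them contains an 8 or a 9.
    for n in range(65):
        print(format(n, "o"), end=" ")   # format(n, "o") gives n in base eight
    print()                              # the last number printed is 100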
When writing decimal and octal numbers, we can avoid confusion and denote which is which by using a subscript to indicate the numbering system: the subscript TEN means base ten or decimal, and EIGHT means base eight or octal.
Thus, the number of dwarfs that Snow White meets is 7_TEN or 7_EIGHT.

The number of fingers that cartoon characters have is 8_TEN or 10_EIGHT.

The number of symphonies that Beethoven wrote is 9_TEN or 11_EIGHT.

The number of fingers that humans have is 10_TEN or 12_EIGHT.

The number of months in a year is 12_TEN or 14_EIGHT.

The number of days in a fortnight is 14_TEN or 16_EIGHT.
The "sweet" birthday celebration is 16_TEN or 20_EIGHT.

The number of hours in a day is 24_TEN or 30_EIGHT.

The number of letters in the Latin alphabet is 26_TEN or 32_EIGHT.

The number of fluid ounces in a quart is 32_TEN or 40_EIGHT.

The number of cards in a deck is 52_TEN or 64_EIGHT.

The number of squares on a chessboard is 64_TEN or 100_EIGHT.

The most famous address on Sunset Strip is 77_TEN or 115_EIGHT.

The number of yards in an American football field is 100_TEN or 144_EIGHT.

The number of starting women singles players at Wimbledon is 128_TEN or 200_EIGHT.

The number of square miles in Memphis is 256_TEN or 400_EIGHT.

Notice that there are a few nice round octal numbers in this list, such as 100_EIGHT and 200_EIGHT and 400_EIGHT. By the term nice round number we usually mean a number that has some zeros at the end. Two zeros on the end of a decimal number means that the number is a multiple of 100_TEN, which is 10_TEN times 10_TEN. With octal numbers, two zeros on the end means that the number is a multiple of 100_EIGHT, which is 10_EIGHT times 10_EIGHT (or 8_TEN times 8_TEN, which is 64_TEN).

You might also notice that these nice round octal numbers 100_EIGHT and 200_EIGHT and 400_EIGHT have the decimal equivalents 64_TEN, 128_TEN, and 256_TEN, all of which are powers of two. This makes sense. The number 400_EIGHT (for example) is 4_EIGHT times 10_EIGHT times 10_EIGHT, all of which are powers of two. And anytime we multiply a power of two by a power of two, we get another power of two.

The following table shows some powers of two with the decimal and octal representations:

Power of Two | Decimal | Octal
2^0          |       1 |     1
2^1          |       2 |     2
2^2          |       4 |     4
2^3          |       8 |    10
2^4          |      16 |    20
2^5          |      32 |    40
2^6          |      64 |   100
2^7          |     128 |   200
2^8          |     256 |   400
2^9          |     512 |  1000
2^10         |    1024 |  2000
2^11         |    2048 |  4000
2^12         |    4096 | 10000

The nice round numbers in the rightmost column are a hint that number systems other than decimal might help in working with binary codes.

The octal system isn't different from the decimal system in any structural way. It just differs in details. For example, each position in an octal number is a digit that's multiplied by a power of eight:

[diagram of boxes, labeled right to left:]
Number of ones
Number of eights
Number of sixty-fours
Number of five hundred twelves
Number of four thousand ninety-sixes
Number of thirty-two thousand seven hundred sixty-eights

Thus, an octal number such as 3725_EIGHT can be broken down like so:

3725_EIGHT = 3000_EIGHT + 700_EIGHT + 20_EIGHT + 5_EIGHT

This can be rewritten in any of several ways. Here's one way, using the powers of eight in their decimal forms:

3725_EIGHT = 3 × 512_TEN +
             7 × 64_TEN +
             2 × 8_TEN +
             5 × 1_TEN

This is the same thing with the powers of eight shown in their octal form:

3725_EIGHT = 3 × 1000_EIGHT +
             7 × 100_EIGHT +
             2 × 10_EIGHT +
             5 × 1

Here's another way of doing it:

3725_EIGHT = 3 × 8^3 + 7 × 8^2 + 2 × 8^1 + 5 × 8^0

If you work out this calculation in decimal, you'll get 2005_TEN. This is how you can convert octal numbers to decimal numbers.

We can add and multiply octal numbers the same way we add and multiply decimal numbers. The only real difference is that we use different tables for adding and multiplying the individual digits. Here's the addition table for octal numbers:

+ | 0  1  2  3  4  5  6  7
--+-----------------------
0 | 0  1  2  3  4  5  6  7
1 | 1  2  3  4  5  6  7 10
2 | 2  3  4  5  6  7 10 11
3 | 3  4  5  6  7 10 11 12
4 | 4  5  6  7 10 11 12 13
5 | 5  6  7 10 11 12 13 14
6 | 6  7 10 11 12 13 14 15
7 | 7 10 11 12 13 14 15 16

For example, 5_EIGHT + 7_EIGHT = 14_EIGHT. So we can add two longer octal numbers the same way we add decimal numbers:

  135
+ 643
-----
 1000

To begin with the right column, 5 plus 3 equals 10. Put down the 0, carry the 1. One plus 3 plus 4 equals 10. Put down the 0, carry the 1. One plus 1 plus 6 equals 10.
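Octal arithmetic like this is easy to verify with a program. Python's int function accepts a base, so a short illustrative sketch can redo both calculations:

    # Convert an octal numeral to decimal, and check 135 + 643 = 1000 in octal.
    print(int("3725", 8))                  # prints 2005: octal 3725 as a decimal number
    total = int("135", 8) + int("643", 8)  # add the two quantities
    print(format(total, "o"))              # prints 1000: the sum written in octal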
Similarly, 2 times 2 is still 4 in octal. But 3 times 3 isn't 9. How could it be? Instead 3 times 3 is 11_EIGHT, which is the same amount as 9_TEN. Here is the entire octal multiplication table:

× | 0  1  2  3  4  5  6  7
--+-----------------------
0 | 0  0  0  0  0  0  0  0
1 | 0  1  2  3  4  5  6  7
2 | 0  2  4  6 10 12 14 16
3 | 0  3  6 11 14 17 22 25
4 | 0  4 10 14 20 24 30 34
5 | 0  5 12 17 24 31 36 43
6 | 0  6 14 22 30 36 44 52
7 | 0  7 16 25 34 43 52 61

Here we have 4 × 6 equaling 30_EIGHT, but 30_EIGHT is the same as 24_TEN, which is what 4 × 6 equals in decimal.

Octal is as valid a number system as decimal. But let's go further. Now that we've developed a numbering system for cartoon characters, let's develop something that's appropriate for lobsters. Lobsters don't have fingers exactly, but they do have pincers at the ends of their two front legs. An appropriate number system for lobsters is the quaternary system, or base four:

[illustration: a lobster counting with its pincers]

Counting in quaternary goes like this: 0, 1, 2, 3, 10, 11, 12, 13, 20, 21, 22, 23, 30, 31, 32, 33, 100, 101, 102, 103, 110, and so forth.

I'm not going to spend much time with the quaternary system because we'll be moving on shortly to something much more important. But we can see how each position in a quaternary number corresponds this time to a power of four:

[diagram of boxes, labeled right to left:]
Number of ones
Number of fours
Number of sixteens
Number of sixty-fours
Number of two hundred fifty-sixes
Number of one thousand twenty-fours

The quaternary number 31232 can be written like this:

31232_FOUR = 3 × 256_TEN +
             1 × 64_TEN +
             2 × 16_TEN +
             3 × 4_TEN +
             2 × 1_TEN

which is the same as

31232_FOUR = 3 × 10000_FOUR +
             1 × 1000_FOUR +
             2 × 100_FOUR +
             3 × 10_FOUR +
             2 × 1_FOUR

And it's also the same as

31232_FOUR = 3 × 4^4 + 1 × 4^3 + 2 × 4^2 + 3 × 4^1 + 2 × 4^0

If we do the calculations in decimal, we'll find that 31232_FOUR equals 878_TEN.

Now we're going to make another leap, and this one is extreme. Suppose we were dolphins and must resort to using our two flippers for counting. This is the number system known as base two, or binary (from the Latin for two by two). It seems likely that we'd have only two digits, and these two digits would be 0 and 1.

Now, 0 and 1 isn't a whole lot to work with, and it takes some practice to get accustomed to binary numbers. The big problem is that you run out of digits very quickly. For example, here's how a dolphin counts using its flippers:

[illustration: a dolphin counting with its flippers]

Yes, in binary the next number after 1 is 10. This is startling, but it shouldn't really be a surprise. No matter what number system we use, whenever we run out of single digits, the first two-digit number is always 10. In binary we count like this:

0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111, 10000, 10001...

These numbers might look large, but they're really not. It's more accurate to say that binary numbers get long very quickly rather than large:

The number of heads that humans have is 1_TEN or 1_TWO.

The number of flippers that dolphins have is 2_TEN or 10_TWO.
The number of teaspoons in a tablespoon is 3_TEN or 11_TWO.

The number of sides to a square is 4_TEN or 100_TWO.

The number of fingers on one human hand is 5_TEN or 101_TWO.

The number of legs on an insect is 6_TEN or 110_TWO.

The number of days in a week is 7_TEN or 111_TWO.

The number of musicians in an octet is 8_TEN or 1000_TWO.

The number of planets in our solar system (including Pluto) is 9_TEN or 1001_TWO.

The number of gallons in a cowboy hat is 10_TEN or 1010_TWO.

and so forth.

In a multidigit binary number, the positions of the digits correspond to powers of two:

[diagram of boxes, labeled right to left:]
Number of ones
Number of twos
Number of fours
Number of eights
Number of sixteens
Number of thirty-twos

So anytime we have a binary number composed of a 1 followed by all zeros, that number is a power of two. The power is the same as the number of zeros in the binary number. Here's our expanded table of the powers of two demonstrating this rule:

Power of Two | Decimal | Octal | Quaternary | Binary
2^0          |       1 |     1 |          1 | 1
2^1          |       2 |     2 |          2 | 10
2^2          |       4 |     4 |         10 | 100
2^3          |       8 |    10 |         20 | 1000
2^4          |      16 |    20 |        100 | 10000
2^5          |      32 |    40 |        200 | 100000
2^6          |      64 |   100 |       1000 | 1000000
2^7          |     128 |   200 |       2000 | 10000000
2^8          |     256 |   400 |      10000 | 100000000
2^9          |     512 |  1000 |      20000 | 1000000000
2^10         |    1024 |  2000 |     100000 | 10000000000
2^11         |    2048 |  4000 |     200000 | 100000000000
2^12         |    4096 | 10000 |    1000000 | 1000000000000

Let's say we have the binary number 101101011010. This can be written as

101101011010_TWO = 1 × 2048_TEN +
                   0 × 1024_TEN +
                   1 × 512_TEN +
                   1 × 256_TEN +
                   0 × 128_TEN +
                   1 × 64_TEN +
                   0 × 32_TEN +
                   1 × 16_TEN +
                   1 × 8_TEN +
                   0 × 4_TEN +
                   1 × 2_TEN +
                   0 × 1_TEN

The same number can be written this way:

101101011010_TWO = 1 × 2^11 + 0 × 2^10 + 1 × 2^9 + 1 × 2^8 + 0 × 2^7 + 1 × 2^6 + 0 × 2^5 + 1 × 2^4 + 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 0 × 2^0

If we just add up the parts in decimal, we get 2048 + 512 + 256 + 64 + 16 + 8 + 2, which is 2,906_TEN.

To convert binary numbers to decimal more concisely, you might prefer a method that uses a template I've prepared:

[ ]   [ ]   [ ]   [ ]   [ ]   [ ]   [ ]   [ ]
×128  ×64   ×32   ×16   ×8    ×4    ×2    ×1
[ ] + [ ] + [ ] + [ ] + [ ] + [ ] + [ ] + [ ] = [ ]

This template allows you to convert numbers up to eight binary digits in length, but it could easily be extended. To use it, put up to eight binary digits in the 8 boxes at the top, one digit to a box. Do the eight multiplications and put the products in the 8 lower boxes. Add these eight boxes for the final result. This example shows how to find the decimal equivalent of 10010110:

 1     0     0     1     0     1     1     0
×128  ×64   ×32   ×16   ×8    ×4    ×2    ×1
128 +  0  +  0  + 16  +  0  +  4  +  2  +  0  = 150
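The multiply-and-add template translates directly into a short Python sketch (the function name is invented for illustration):

    # Convert a binary numeral to decimal the way the template does:
    # multiply each digit by its power of two, then add up the products.
    def binary_to_decimal(bits):
        total = 0
        power = 2 ** (len(bits) - 1)   # leftmost box: ×128 for an 8-digit number
        for digit in bits:
            total += int(digit) * power
            power //= 2
        return total

    print(binary_to_decimal("10010110"))   # prints 150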
Converting from decimal to binary isn't quite as straightforward, but here's a template that lets you convert decimal numbers from 0 through 255 to binary:

[ ]   [ ]   [ ]   [ ]   [ ]   [ ]   [ ]   [ ]
÷128  ÷64   ÷32   ÷16   ÷8    ÷4    ÷2    ÷1
[ ]   [ ]   [ ]   [ ]   [ ]   [ ]   [ ]   [ ]

The conversion is actually trickier than it appears, so follow the directions carefully. Put the entire decimal number (less than or equal to 255) in the box in the upper left corner. Divide that number (the dividend) by the first divisor (128), as indicated. Put the quotient in the box below (the box at the lower left corner), and the remainder in the box to the right (the second box on the top row). That first remainder is the dividend for the next calculation, which uses a divisor of 64. Continue in the same manner through the template.

Keep in mind that each quotient will be either 0 or 1. If the dividend is less than the divisor, the quotient is 0 and the remainder is simply the dividend. If the dividend is greater than or equal to the divisor, the quotient is 1 and the remainder is the dividend minus the divisor. Here's how it's done with 150:

150   22    22    22    6     6     2     0
÷128  ÷64   ÷32   ÷16   ÷8    ÷4    ÷2    ÷1
 1     0     0     1     0     1     1     0

If you need to add or multiply two binary numbers, it's probably easier to do the calculation in binary rather than convert to decimal. This is the part you're really going to like. Imagine how quickly you could have mastered addition if the only thing you had to memorize was this:

+ | 0  1
--+------
0 | 0  1
1 | 1 10

Let's use this table to add two binary numbers:

  1100101
+ 0110110
---------
 10011011

Starting at the right column: 1 plus 0 equals 1. Second column from right: 0 plus 1 equals 1. Third column: 1 plus 1 equals 0, carry the 1. Fourth column: 1 (carried) plus 0 plus 0 equals 1. Fifth column: 0 plus 1 equals 1. Sixth column: 1 plus 1 equals 0, carry the 1. Seventh column: 1 (carried) plus 1 plus 0 equals 10.

The multiplication table is even simpler than the addition table because it can be entirely derived by using two of the very basic rules of multiplication: Multiplying anything by 0 gets you 0, and multiplying any number by 1 has no effect on the number.

× | 0  1
--+------
0 | 0  0
1 | 0  1

Here's a multiplication of 13_TEN by 11_TEN in binary:

      1101
    × 1011
    ------
      1101
     1101
    0000
   1101
  --------
  10001111

The result is 143_TEN.

People who work with binary numbers often write them with leading zeros (that is, zeros to the left of the first 1), for example, 0011 rather than just 11. This doesn't change the value of the number at all; it's just for cosmetic purposes. For example, here are the first sixteen binary numbers with their decimal equivalents:

Binary | Decimal
0000   |  0
0001   |  1
0010   |  2
0011   |  3
0100   |  4
0101   |  5
0110   |  6
0111   |  7
1000   |  8
1001   |  9
1010   | 10
1011   | 11
1100   | 12
1101   | 13
1110   | 14
1111   | 15

Let's take a look at this list of binary numbers for a moment. Consider each of the four vertical columns of zeros and ones, and notice how the digits alternate going down the column:

• The rightmost digit alternates between 0 and 1.
• The next digit from the right alternates between two 0s and two 1s.
• The next digit alternates between four 0s and four 1s.
• The next digit alternates between eight 0s and eight 1s.

This is very methodical, wouldn't you say? Indeed, you can easily write the next sixteen binary numbers by just repeating the first sixteen and putting a 1 in front:

Binary | Decimal
10000  | 16
10001  | 17
10010  | 18
10011  | 19
10100  | 20
10101  | 21
10110  | 22
10111  | 23
11000  | 24
11001  | 25
11010  | 26
11011  | 27
11100  | 28
11101  | 29
11110  | 30
11111  | 31

Here's another way of looking at it: When you count in binary, the rightmost digit (also called the least significant digit) alternates between 0 and 1. Every time it changes from a 1 to a 0, the digit second to the right (that is, the next most significant digit) also changes, either from 0 to 1 or from 1 to 0. So every time a binary digit changes from a 1 to a 0, the next most significant digit also changes, either from a 0 to a 1 or from a 1 to a 0.

When we're writing large decimal numbers, we use commas every three places so that we can more easily know what the number means at a glance. For example, if you see 12000000, you probably have to count digits, but if you see 12,000,000, you know that means twelve million.

Binary numbers can get very long very quickly. For example, twelve million in binary is 101101110001101100000000. To make this a little more readable, it's customary to separate every four binary digits with a dash, for example 1011-0111-0001-1011-0000-0000, or with spaces: 1011 0111 0001 1011 0000 0000.
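As a quick check of this arithmetic, Python can redo the sums in binary and also group twelve million into fours; this sketch is illustrative only:

    # Verify the binary addition and multiplication worked out by hand.
    print(format(0b1100101 + 0b0110110, "b"))   # prints 10011011
    print(format(0b1101 * 0b1011, "b"))         # prints 10001111, which is 143

    # Group the binary form of twelve million into fours for readability.
    bits = format(12_000_000, "b")
    print("-".join(bits[i:i + 4] for i in range(0, len(bits), 4)))
    # prints 1011-0111-0001-1011-0000-0000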
Later on in this book, we'll look at a more concise way of expressing binary numbers.

By reducing our number system to just the binary digits 0 and 1, we've gone as far as we can go. We can't get any simpler. Moreover, the binary number system bridges the gap between arithmetic and electricity. In previous chapters, we've been looking at switches and wires and lightbulbs and relays, and any of these objects can represent the binary digits 0 and 1:

A wire can be a binary digit. If current is flowing through the wire, the binary digit is 1. If not, the binary digit is 0.

A switch can be a binary digit. If the switch is on, or closed, the binary digit is 1. If the switch is off, or open, the binary digit is 0.

A lightbulb can be a binary digit. If the lightbulb is lit, the binary digit is 1. If the lightbulb is not lit, the binary digit is 0.

A telegraph relay can be a binary digit. If the relay is closed, the binary digit is 1. If the relay is at rest, the binary digit is 0.

Binary numbers have a whole lot to do with computers.

Sometime around 1948, the American mathematician John Wilder Tukey (born 1915) realized that the words binary digit were likely to assume a much greater importance in the years ahead as computers became more prevalent. He decided to coin a new, shorter word to replace the unwieldy five syllables of binary digit. He considered bigit and binit but settled instead on the short, simple, elegant, and perfectly lovely word bit.

Chapter Nine

Bit by Bit by Bit

When Tony Orlando requested in a 1973 song that his beloved "Tie a Yellow Ribbon Round the Ole Oak Tree," he wasn't asking for elaborate explanations or extended discussion. He didn't want any ifs, ands, or buts. Despite the complex feelings and emotional histories that would have been at play in the real-life situation the song was based on, all the man really wanted was a simple yes or no. He wanted a yellow ribbon tied around the tree to mean "Yes, even though you messed up big time and you've been in prison for three years, I still want you back with me under my roof." And he wanted the absence of a yellow ribbon to mean "Don't even think about stopping here."

These are two clear-cut, mutually exclusive alternatives. Tony Orlando did not sing, "Tie half of a yellow ribbon if you want to think about it for a while" or "Tie a blue ribbon if you don't love me anymore but you'd still like to be friends." Instead, he made it very, very simple.

Equally effective as the absence or presence of a yellow ribbon (but perhaps more awkward to put into verse) would be a choice of traffic signs in the front yard: perhaps "Merge" or "Wrong Way."

Or a sign hung on the door: "Closed" or "Open."

Or a flashlight in the window, turned on or off.

You can choose from lots of ways to say yes or no if that's all you need to say. You don't need a sentence to say yes or no; you don't need a word, and you don't even need a letter. All you need is a bit, and by that I mean all you need is a 0 or a 1.

As we discovered in the previous chapters, there's nothing really all that special about the decimal number system that we normally use for counting. It's pretty clear that we base our number system on ten because that's the number of fingers we have. We could just as reasonably base our number system on eight (if we were cartoon characters) or four (if we were lobsters) or even two (if we were dolphins).

But there is something special about the binary number system.
What's special about binary is that it's the simplest number system possible. There are only two binary digits, 0 and 1. If we want something simpler than binary, we'll have to get rid of the 1, and then we'll be left with just a 0. We can't do much of anything with just a 0.

The word bit, coined to mean binary digit, is surely one of the loveliest words invented in connection with computers. Of course, the word has the normal meaning, "a small portion, degree, or amount," and that normal meaning is perfect because a bit, one binary digit, is a very small quantity indeed.

Sometimes when a new word is invented, it also assumes a new meaning. That's certainly true in this case. A bit has a meaning beyond the binary digits used by dolphins for counting. In the computer age, the bit has come to be regarded as the basic building block of information.

Now that's a bold statement, and of course, bits aren't the only things that convey information. Letters and words and Morse code and Braille and decimal digits convey information as well. The thing about the bit is that it conveys very little information. A bit of information is the tiniest amount of information possible. Anything less than a bit is no information at all. But because a bit represents the smallest amount of information possible, more complex information can be conveyed with multiple bits. (By saying that a bit conveys a "small" amount of information, I surely don't mean that the information borders on the unimportant. Indeed, the yellow ribbon is a very important bit to the two people concerned with it.)

"Listen, my children, and you shall hear / Of the midnight ride of Paul Revere," wrote Henry Wadsworth Longfellow, and while he might not have been historically accurate when describing how Paul Revere alerted the American colonies that the British had invaded, he did provide a thought-provoking example of the use of bits to communicate important information:

He said to his friend "If the British march
By land or sea from the town to-night,
Hang a lantern aloft in the belfry arch
Of the North Church tower as a special light,
One, if by land, and two, if by sea..."

To summarize, Paul Revere's friend has two lanterns. If the British are invading by land, he will put just one lantern in the church tower. If the British are coming by sea, he will put both lanterns in the church tower.

However, Longfellow isn't explicitly mentioning all the possibilities. He left unspoken a third possibility, which is that the British aren't invading just yet. Longfellow implies that this possibility will be conveyed by the absence of lanterns in the church tower.

Let's assume that the two lanterns are actually permanent fixtures in the church tower. Normally they aren't lit:

[drawing: two unlit lanterns]

This means that the British aren't yet invading. If one of the lanterns is lit,

[drawing: the left lantern lit]

or

[drawing: the right lantern lit]

the British are coming by land. If both lanterns are lit,

[drawing: both lanterns lit]

the British are coming by sea.

Each lantern is a bit. A lit lantern is a 1 bit and an unlit lantern is a 0 bit. Tony Orlando demonstrated to us that only one bit is necessary to convey one of two possibilities. If Paul Revere needed only to be alerted that the British were invading (and not where they were coming from), one lantern would have been sufficient. The lantern would have been lit for an invasion and unlit for another evening of peace.

Conveying one of three possibilities requires another lantern.
Once that second lantern is present, however, the two bits allow communicating one of four possibilities:

00 = The British aren't invading tonight.
01 = They're coming by land.
10 = They're coming by land.
11 = They're coming by sea.

What Paul Revere did by sticking to just three possibilities was actually quite sophisticated. In the lingo of communication theory, he used redundancy to counteract the effect of noise. The word noise is used in communication theory to refer to anything that interferes with communication. Static on a telephone line is an obvious example of noise that interferes with a telephone communication. Communication over the telephone is usually successful, nevertheless, even in the presence of noise because spoken language is heavily redundant. We don't need to hear every syllable of every word in order to understand what's being said.

In the case of the lanterns in the church tower, noise can refer to the darkness of the night and the distance of Paul Revere from the tower, both of which might prevent him from distinguishing one lantern from the other. Here's the crucial passage in Longfellow's poem:

And lo! As he looks, on the belfry's height
A glimmer, and then a gleam of light!
He springs to the saddle, the bridle he turns,
But lingers and gazes, till full on his sight
A second lamp in the belfry burns!

It certainly doesn't sound as if Paul Revere was in a position to figure out exactly which one of the two lanterns was first lit.

The essential concept here is that information represents a choice among two or more possibilities. For example, when we talk to another person, every word we speak is a choice among all the words in the dictionary. If we numbered all the words in the dictionary from 1 through 351,482, we could just as accurately carry on conversations using the numbers rather than the words. (Of course, both participants would need dictionaries in which the words are numbered identically, as well as plenty of patience.)

The flip side of this is that any information that can be reduced to a choice among two or more possibilities can be expressed using bits. Needless to say, there are plenty of forms of human communication that do not represent choices among discrete possibilities and that are also vital to our existence. This is why people don't form romantic relationships with computers. (Let's hope they don't, anyway.) If you can't express something in words, pictures, or sounds, you're not going to be able to encode the information in bits. Nor would you want to.
Similarly, the second bit is the Eben bit.So if your friend asked you, “What was the verdict from Siskel and Eben about that movie Impolite Encounter?” instead of answering, “Siskel gave it a thumbs up and Ebert gave it a thumbs down” or even “Siskel liked it; Ebert didn’t,” you could have simply said, “One zero.” As long as your friend knew which was the Siskel bit and which was the Eben bit, and that a 1 bit meant thumbs up and a 0 bit meant thumbs down, your answer would be perfectly understandable. But you and your friend have to know the code.We could have declared initially that a 1 bit meant a thumbs down and a 0 bit meant a thumbs up. That might seem counterintuitive. Naturally, we like to think of a 1 bit as representing something affirmative and a 0 bit as the opposite, but it’s really just an arbitrary assignment. The only requirement is that everyone who uses the code must know what the 0 and 1 bits mean.The meaning of a particular bit or collection of bits is always understood contextually. The meaning of a yellow ribbon around a particular oak tree is probably known only to the person who put it there and the person who’s supposed to see it. Change the color, the tree, or the date, and it’s just a meaningless scrap of cloth. Similarly, to get some useful information out of Siskel and Ebert’s hand gestures, at the very least we need to know what movie is under discussion.. If you maintained a list of the movies that Siskel and Ebert reviewed and how they voted with their thumbs, you could add another bit to the mix to include your own opinion. Adding this third bit increases the number of different possibilities to eight:000 = 001 = 010 = 011= 100= 101 = 1 1 0 = 1 1 1 =Siskel hated it; Ebert hated it; I hated it. Siskel hated it; Ebert hated it; I loved it. Siskel hated it; Ebert loved it; I hated it. Siskel hated it; Ebert loved it; I loved it. Siskel loved it; Ebert hated it; I hated it. Siskel loved it; Ebert hated it; I loved it. Siskel loved it; Ebert loved it; I hated it. Siskel loved it; Ebert loved it; I loved it.74Chapter NineOne bonus of using bits to represent this information is that we know that we’ve accounted for all the possibilities. We know there can be eight and only eight possibilities and no more or fewer. With 3 bits, we can count only from zero to seven. There are no more 3-digit binary numbers.Now, during this description of the Siskel and Ebert bits, you might have been considering a very serious and disturbing question, and that question is this: What do we do about Leonard Maltin ‘s Movie & Video Guide? After all, Leonard Maltin doesn’t do the thumbs up and thumbs down thing. Leonard Maltin rates the movies using the more traditional star system.To determine how many Maltin bits we need, we must first know a few things about his system. Maltin gives a movie anything from 1 star to 4 stars, with half stars in between. Uust to make this interesting, he doesn’t actu­ ally award a single star; instead, the movie is rated as a BOMB.) There are seven possibilities, which means that we can represent a particular rating us­ ing just 3 bits:000 = BOMB 001 = *Y2 010= ** 011= **Y2100= *** 101 = ***’h 110= ****”What about 1 1 1 ? ” you may ask. Well, that code doesn’t mean anything. It’s not defined. If the binary code 1 1 1 were used to represent a Malrin rat­ ing, you’d know that a mistake was made. 
(Probably a computer made the mistake because people never do.)You’ll recall that when we had two bits to represent the Siskel and Ebert ratings, the leftmost bit was the Siskel bit and the rightmost bit was the Ebert bit. Do the individual bits mean anything here? Well, sort of. If you take the numeric value of the bit code, add 2, and then divide by 2, that will give you the number of stars. But that’s only because we defined the codes in a rea­ sonable and consistent manner. We could j ust as well have defined the codes this way:000 = * * * 001 = *’h010 = **’h 011 = **** 101 = *** Y2 110=**1 1 1 = BOMBThis code is just as legitimate as the preceding code so long as everybody knows what it means.Bit by Bit by Bit 75If Maltin ever encountered a movie undeserving of even a single fuU star, he could award a half star. He would certainly have enough codes for the half-star option. The codes could be redefined like so:000 = MAJOR BOMB 001 = BOMB010= *Yl011 = **100= **Yl 101 = *** 110=***Yl 111 = ****But if he then encountered a movie not even wonhy of a half star and de­ cided to award no stars (ATOMIC BOMB?), he’d need another bit. No more3-bit codes are available.The magazine Entertainment Weekly gives grades, not only for moviesbut for television shows, CDs, books, CD-ROMs, Web sites, and much else. The grades range from A+ straight down to F (although it seems that only Pauly Shore movies are wonhy of that honor). If you count them, you see13 possible grades. We would need 4 bits to represent these grades:00=F 0001 = D- 0010 = D 0011 = D+ 0100 = C- 0101 = c 01 1 0 = C+ 0111 =B- 1000 = B 1001 = B+ 1010 = A- 1011 =A1 1 00 = A+Wehavethreeunusedcodes: 1101,1110,and1111,foragrandtotalof16. Whenever we talk about bits, we often talk about a certain number o f bits. The more bits we have, the greater the number of different possibilities wecan convey.It’s the same situation with decimal numbers, of course. For example, howmany telephone area codes are there? The area code is three decimal digits long, and if all of them are used (which they aren’t, but we’ll ignore that), there are 101, or 1000, codes, ranging from 000 through 999. How many 7-digit phone numbers are possible within the 2 1 2 area code? That’s 1 07, or 10,000,000. How many phone numbers can you have with a 212 area codeand a 260 prefix? That’s 104, or 10,000.Similarly, in binary the number of possible codes is always equal to 2 tothe power of the number of bits:Number of Bits 12 3 4 5 6 7 8 910Number of Codes 21 = 22l=421 = 8 2•=16 21=32 2•=64 2’=128 21 = 256 2’ = 512 210= 1024Every additional bit doubles the number of codes.If you know how many codes you need, how can you calculate how manybits you need? In other words, how do you go backward in the preceding table?The method you use is something called the base two logarithm. The logarithm is the opposite of the power. We know that 2 to the 7th power equals 128. The base two logarithm of 128 equals 7. To use more mathemati­ cal notation, this statementis equivalent to this statement:log2128 = 7So if the base two logarithm of 1 28 is 7, and the base two logarithm of 256 is 8, then what’s the base two logarithm of 200? It’s actually about 7.64, but we really don’t have to know that. If we needed to represent 200 different things with bits, we’d need 8 bits.Bits are often hidden from casual observation deep within our electronic appliances. We can’t see the bits encoded in our compact discs or in our digital watches or inside our computers. 
But sometimes the bits are in clear view.Here’s one example. If you own a camera that uses 35-millimeter film, take a look at a roll of film. Hold it this way:Chapter Nine 76Bit by Bit by Bit 77 You’II see a checkerboard-like grid of silver and black squares that I’ve numbered 1 through 12 in the diagram. This is called DX-encoding. These 1 2 squares are actually 12 bits. A silver square means a 1 bit and a black square meansa 0 bit. Square 1 andsquare 7 are alwayssilver (1).What do the bits mean? You might be aware that some films are more sensitive to light than others. This sensitivity to light is often called the film speed. A film that’s very sensitive to light is said to be fast because it can be exposed very quickly. The speed of the film is indicated by the film’s ASA (American StandardsAssociation) rating, the most popular being 100, 200, and 400. This ASA rating isn’t only printed on the box and the film’s cas­ sene but is also encoded in bits.There arc 24 standard ASA ratings for photographic film. Here they are:800 1000 1250 1 600 1000 2500 3200 4000 5000How many bi ts arc required to encode the ASA rating? The answer is 5. We know that 2� equals 16, so that’s too few. But 21 equals 32, which is more than sufficient.25 32 5080 100 1H 160 200 250 320 400 500 640406478Chapter NineThe bits that correspond to the film speed are shown in the following table:FilmSquare 6 Spe0 25 32 000 140 1 0 0 1 0 50 1000 64 100 80Square 2 Square 3 Square 4 Square 5 0 0 0 1 00000 1 0 0100 01010 100 125 160 0 200 250 1 320 00 1 0 4001 1 11 10 1 00 0 100 0 00 1500 1 640 1 1 0 800 1 0 1 1000 101 1250 0 1 0 1600 0 0 2000 01 1 2500 1 1 0 3200 0 4000 5000Most modern 35-millimeter cameras use these codes. (Exceptions are cameras on which you must set the exposure manually and cameras that have built-in light meters but require you to set the film speed manually.) If you take a look inside the camera where you put the film, you should see six metal contacts that correspond to squares 1 through 6 on the film canister. The silver squares are actually the metal of the film cassette, which is a conduc-tor. The black squares are paint, which is an insulator.1 0 1 0Bit by Bit by Bit 79The electronic circuitry of the camera runs a current into square 1 , which is always silver. This current will be picked up (or not picked up) by the five contacts on squares 2 through 6, depending on whether the squares are bare silver or are painted over. Thus, if the camera senses a current on contacts 4 and 5 but not on contacts 2, 3, and 6, the film speed is 400 ASA. The camera can then adjust film exposure accordingly.Inexpensive cameras need read only squares 2 and 3 and assume that the film speed is 50, 100, 200, or 400 ASA.Most cameras don’t read or use squares 8 through 1 2. Squares 8, 9, and 1 0 encode the number of exposures on the roll of film, and squares 1 1 and 1 2 refer to the exposure latitude, which depends on whether the film is for black-and-white prints, for color prints, or for color slides.Perhaps the most common visual display of binary digits is the ubiqui­ tous Universal Product Code ( UPC), that little bar code symbol that appears on virtually every packaged item that we purchase these days. The UPC has come to symbolize one of the ways computers have crept into our lives.Although the UPC often inspires fits of paranoia, it’s really an innocent little thing, invented for the purpose of automating retail checkout and in­ ventory, which it does fairly successfully. 
When it’s used with a well-designed checkout system, the consumer can have an itemized sales receipt, which isn’t possible with conventional cash registers.Of interest to us here is that the UPC is a binary code, although it might not seem like one at first. So it will be instructive to decode the UPC and examine how it works.In its most common form, the UPC is a collection of 30 vertical black bars of various widths, divided by gaps of various widths, along with some dig­ its. For example, this is the UPC that appears on the 1 0 J/4-ounce can of Campbell’s Chicken Noodle Soup:07We’re tempted to try to visually interpret the UPC in terms of thin bars and black bars, narrow gaps and wide gaps, and indeed, that’s one way to look at it. The black bars in the UPC can have four different widths, with the thicker bars being two, three, and four times the width of the thinnest bar. Similarly, the wider gaps between the bars are two, three, and four times the width of the thinnest gap. 80Chapter NineBut another way to look at the UPC is as a series of bits. Keep in mind that the whole bar code symbol isn’t exactly what the scanning wand “sees” at the checkout counter. The wand doesn’t try to interpret the numbers at the bottom, for example, because that would require a more sophisticated computing technique known as optical character recognition, or OCR. In­ stead, the scanner sees just a thin slice of this whole block. The UPC is as large as it is to give the checkout person something to aim the scanner at. The slice that the scanner sees can be represented like this:II•1•I•I•1•I•111•1••••I•••IIIIThis looks almost like Morse code, doesn’t it?As the computer scans this information from left to right, it assigns a 1bit to the first black bar it encounters, a 0 bit to the next white gap. The subsequent gaps and bars are read as series of bits 1 , 2, 3, or 4 bits in a row, depending on the width of the gap or the bar. The correspondence of the scanned bar code to bits is simply:II•1•I•I•1•I•111•1••••I•••IIII1 8 1 8 9 8 1 1 8 1 1 1 U 8 8 1 8 8 1 1 8 8 1 8 8 1 1 1 8 1 8 1 8 1 1 8 1 8 81 1 1 8 1 8 1 1 1 8 1 1 1 8 8 1 8 1 1 8 8 1 1 1 1 1 8 1 1 88 1 88 1 1 1 8 1 1 8 8 1 1 8 1 8 H I 8 8 1 8 1So the entire UPC is simply a series of 95 bits. In this particular example, the bits can be grouped as follows:Bits Meaning101 Left-hand guard panem 0001 10101100010011001 0001 1010001 1010001 10101010 Center guard pancm 1 1 10010Left-side digits110011011011001001 1 1011001101000100101 Right-hand guard panemRight-side digitsThe first 3 bits are always 101.This is known as the left-handguardpattern, and it allows the computer-scanning device to get oriented. From the guard pattern, the scanner can determine the width of the bars and gaps that cor­ respond to single bits. Otherwise, the UPC would have to be a specific size on all packages.Bit by Bit by Bit 81The left-hand guard pattern is followed by six groups of 7 bits each. Each of these is a code for a numeric digit 0 through 9, as I’ll demonstrate shortly. A 5-bit center guard pattern follows. The presence of this fixed pattern (always 0 1 0 1 0) is a form of built-in error checking. If the computer scanner doesn’t find the center guard pattern where it’s supposed to be, it won’t acknowledge that it has interpreted the UPC. 
This center guard pat­ tern is one of several precautions against a code that has been tampered with or badly printed.The center guard pattern is followed by another six groups of 7 bits each, which are then followed by a right-hand guard pattern, which is always 1 0 1 . As I’ll explain later, the presence of a guard pattern a t the end allows the UPC code to be scanned backward (that is, right to left) as well as forward.So the entire UPC encodes 12 numeric digits. The left side of the UPC encodes 6 digits, each requiring 7 bits. You can usc the following table to decode these bits:Left-Side Codes0001101 = 0 0011001 = 1 0010011 = 2 0111101 = 3 0100011 = 40110001 = 5 0101111 =6 0111011 = 7Notice that each 7-bit code begins with a 0 and ends with a 1. If the scan­ ner encounters a 7-bit code on the left side that begins with a 1 or ends with a 0, it knows either that it hasn’t correctly read the UPC code or that the code has been tampered with. Notice also that each code has only two groups of consecutive 1 bits. This implies that each digit corresponds to two verti­ cal bars in the UPC code.You’ll see that each code in this table. has an odd number of 1 bits. This is another form of error and consistency checking known as parity. A group of bits has even parity if it has an even number of 1 bits and odd parity if it has an odd number of 1 bits. Thus, all of these codes have odd parity.To interpret the six 7-bit codes on the right side of the UPC, use the fol­ lowing table:Right-Side Codes1 1 1 00 1 0 =0 1 1001 10 =I 1 101 100 =21001110= 5 1010000 = 6 1000100= 710000101 0 1 1 1 00 = 48=31001000 =1 1 1 0 1 00 = 90110111 = 0001011 = 9882Chapter NineThese codes are the complements of the earlier codes: Wherever a 0 appeared is now a 1 , and vice versa. These codes always begin with a 1 and end with a 0. In addition, they have an even number of 1 bits, which is even parity.So now we’re equipped to decipher the UPC. Using the two preceding tables, we can determine that the 1 2 digits encoded in the 1 0 J/4-ounce can of Campbell’s Chicken Noodle Soup are05100001251 7This is very disappointing. As you can see, these are precisely the same numbers that are conveniendy printed at the bottom of the UPC. (This makes a lot of sense because if the scanner can’t read the code for some reason, the person at the register can manually enter the numbers. Indeed, you’ve un­ doubtedly seen this happen.) We didn’t have to go through all that work to decode them, and moreover, we haven’t come close to decoding any secret information. Yet there isn’t anything left in the UPC to decode. Those 30 venical lines resolve to just 12 digits.The first digit (a 0 in this case) is known as the number system charac­ ter. A 0 means that this is a regular UPC code. I f the UPC appeared on vari­ able-weight grocery items such as meat or produce, the code would be a 2. Coupons are coded with a 5.The next five digits make up the manufacturer code. In this case, 51000 is the code for the Campbell Soup Company. All Campbell products have this code. The five digits that follow (01251) are the code for a panicular product of that company, in this case, the code for a 1 0 J/4-ounce can of chicken noodle soup. This product code has meaning only when combined with the manufacturer’s code. Another company’s chicken noodle soup might haveadifferentproductcode,andaproductcodeof01251 mightmean something totally different from another manufacturer.Contrary to popular belief, the UPC doesn’t include the price of the item. 
Contrary to popular belief, the UPC doesn't include the price of the item. That information has to be retrieved from the computer that the store uses in conjunction with the checkout scanners.

The final digit (a 7 in this case) is called the modulo check character. This character enables yet another form of error checking. To examine how this works, let's assign each of the first 11 digits (0 51000 01251 in our example) a letter:

A B C D E F G H I J K

Now calculate the following:

3 × (A + C + E + G + I + K) + (B + D + F + H + J)

and subtract that from the next highest multiple of 10. That's called the modulo check character. In the case of Campbell's Chicken Noodle Soup, we have

3 × (0 + 1 + 0 + 0 + 2 + 1) + (5 + 0 + 0 + 1 + 5) = 3 × 4 + 11 = 23

The next highest multiple of 10 is 30, so

30 - 23 = 7

and that's the modulo check character printed and encoded in the UPC. This is a form of redundancy. If the computer controlling the scanner doesn't calculate the same modulo check character as the one encoded in the UPC, the computer won't accept the UPC as valid.

Normally, only 4 bits would be required to specify a decimal digit from 0 through 9. The UPC uses 7 bits per digit. Overall, the UPC uses 95 bits to encode only 11 useful decimal digits. Actually, the UPC includes blank space (equivalent to nine 0 bits) at both the left and the right side of the guard pattern. That means the entire UPC requires 113 bits to encode 11 decimal digits, or over 10 bits per decimal digit!

Part of this overkill is necessary for error checking, as we've seen. A product code such as this wouldn't be very useful if it could be easily altered by a customer wielding a felt-tip pen.

The UPC also benefits by being readable in both directions. If the first digits that the scanning device decodes have even parity (that is, an even number of 1 bits in each 7-bit code), the scanner knows that it's interpreting the UPC code from right to left. The computer system then uses this table to decode the right-side digits:

Right-Side Codes in Reverse

0100111 = 0    0111001 = 5
0110011 = 1    0000101 = 6
0011011 = 2    0010001 = 7
0100001 = 3    0001001 = 8
0011101 = 4    0010111 = 9

and this table for the left-side digits:

Left-Side Codes in Reverse

1011000 = 0    1000110 = 5
1001100 = 1    1111010 = 6
1100100 = 2    1101110 = 7
1011110 = 3    1110110 = 8
1100010 = 4    1101000 = 9

These 7-bit codes are all different from the codes read when the UPC is scanned from left to right. There's no ambiguity.
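The check calculation is easily mechanized as well. Here's the modulo check character computation described above as a Python sketch (the function name is mine):

    def modulo_check_character(digits11):
        # digits11 is a string of the first 11 digits, e.g. "05100001251"
        d = [int(c) for c in digits11]
        # 3 x (A + C + E + G + I + K) + (B + D + F + H + J)
        total = 3 * sum(d[0::2]) + sum(d[1::2])
        # Subtract from the next highest multiple of 10; if the total is
        # already a multiple of 10, the check character is 0.
        return (10 - total % 10) % 10

    print(modulo_check_character("05100001251"))   # prints 7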
We began looking at codes in this book with Morse code, composed of dots, dashes, and pauses between the dots and dashes. Morse code doesn't immediately seem like it's equivalent to zeros and ones, yet it is.

Recall the rules of Morse code: A dash is three times as long as a dot. The dots and dashes of a single letter are separated by a pause the length of a dot. Letters within a word are separated by pauses equal in length to a dash. Words are separated by pauses equal in length to two dashes.

Just to simplify this analysis a bit, let's assume that a dash is twice the length of a dot rather than three times. That means that a dot can be a 1 bit and a dash can be two 1 bits. Pauses are 0 bits.

Here's the basic table of Morse code from Chapter 2:

A ·-      J ·---    S ···
B -···    K -·-     T -
C -·-·    L ·-··    U ··-
D -··     M --      V ···-
E ·       N -·      W ·--
F ··-·    O ---     X -··-
G --·     P ·--·    Y -·--
H ····    Q --·-    Z --··
I ··      R ·-·

Here's the table converted to bits:

A  101100        J  101101101100   S  1010100
B  1101010100    K  110101100      T  1100
C  11010110100   L  1011010100     U  10101100
D  11010100      M  1101100        V  1010101100
E  100           N  110100         W  101101100
F  1010110100    O  1101101100     X  11010101100
G  110110100     P  10110110100    Y  110101101100
H  101010100     Q  110110101100   Z  11011010100
I  10100         R  10110100

Notice that all the codes begin with a 1 bit and end with a pair of 0 bits. The pair of 0 bits represents the pause between letters in the same word. The code for the space between words is another pair of 0 bits. So the Morse code for "hi there" is normally given as

···· ··   - ···· · ·-· ·

but Morse code using bits can look like the cross section of the UPC code:

1010101001010000110010101010010010110100100
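Following the rules just described (dot = 1, dash = 11, a single 0 between the dots and dashes of a letter, 00 after each letter, and another 00 between words), a few lines of Python can generate that bit string. The dictionary below covers only the letters needed for this example, and all the names are my own:

    MORSE = {'h': '....', 'i': '..', 't': '-', 'e': '.', 'r': '.-.'}

    def letter_to_bits(code):
        # dot -> 1, dash -> 11, single 0 between elements, 00 after the letter
        return '0'.join('1' if c == '.' else '11' for c in code) + '00'

    def text_to_bits(text):
        words = [''.join(letter_to_bits(MORSE[ch]) for ch in word)
                 for word in text.split()]
        # each letter already ends in 00; another 00 separates the words
        return '00'.join(words)

    print(text_to_bits('hi there'))
    # 1010101001010000110010101010010010110100100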
In terms of bits, Braille is much simpler than Morse code. Braille is a 6-bit code. Each character is represented by an array of six dots, and each of the six dots can be either raised or not raised. As I explained in Chapter 3, the dots are commonly numbered 1 through 6:

1 • • 4
2 • • 5
3 • • 6

The word "code" (for example) is represented by the Braille symbols:

[the four Braille cells for the letters c, o, d, and e]

If a raised dot is 1 and a flat dot is 0, each of the characters in Braille can be represented by a 6-bit binary number. The four Braille symbols for the letters in the word "code" are then simply:

100100 101010 100110 100010

where the leftmost bit corresponds to the 1 position in the grid, and the rightmost bit corresponds to the 6 position.

As we shall see later in this book, bits can represent words, pictures, sounds, music, and movies as well as product codes, film speeds, movie ratings, an invasion of the British army, and the intentions of one's beloved. But most fundamentally, bits are numbers. All that needs to be done when bits represent other information is to count the number of possibilities. This determines the number of bits that are needed so that each possibility can be assigned a number.

Bits also play a part in logic, that strange blend of philosophy and mathematics for which a primary goal is to determine whether certain statements are true or false. True and false can also be 1 and 0.

Chapter Ten

Logic and Switches

What is truth? Aristotle thought that logic had something to do with it. The collection of his teachings known as the Organon (which dates from the fourth century B.C.E.) is the earliest extensive writing on the subject of logic. To the ancient Greeks, logic was a means of analyzing language in the search for truth and thus was considered a form of philosophy. The basis of Aristotle's logic was the syllogism. The most famous syllogism (which isn't actually found in the works of Aristotle) is

All men are mortal;
Socrates is a man;
Hence, Socrates is mortal.

In a syllogism, two premises are assumed to be correct, and from these a conclusion is deduced.

The mortality of Socrates might seem straightforward enough, but there are many varieties of syllogisms. For example, consider the following two premises, proposed by the nineteenth-century mathematician Charles Dodgson (also known as Lewis Carroll):

All philosophers are logical;
An illogical man is always obstinate.

The conclusion isn't obvious at all. (It's "Some obstinate persons are not philosophers." Notice the unexpected and disturbing appearance of the word "some.")

For over two thousand years, mathematicians wrestled with Aristotle's logic, attempting to corral it using mathematical symbols and operators. Prior to the nineteenth century, the only person to come close was Gottfried Wilhelm von Leibniz (1646-1716), who dabbled with logic early in life but then went on to other interests (such as independently inventing calculus at the same time as Isaac Newton).

And then came George Boole.

George Boole was born in England in 1815 to a world where the odds were certainly stacked against him. Because he was the son of a shoemaker and a former maid, Britain's rigid class structure would normally have prevented Boole from achieving anything much different from his ancestors. But aided by an inquisitive mind and his helpful father (who had strong interests in science, mathematics, and literature), young George gave himself the type of education normally the privilege of upper-class boys; his studies included Latin, Greek, and mathematics. As a result of his early papers on mathematics, in 1849 Boole was appointed the first Professor of Mathematics at Queen's College, Cork, in Ireland.

Several mathematicians in the mid-1800s had been working on a mathematical definition of logic (most notably Augustus De Morgan), but it was Boole who had the real conceptual breakthrough, first in the short book The Mathematical Analysis of Logic, Being an Essay Towards a Calculus of Deductive Reasoning (1847) and then in a much longer and more ambitious text, An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities (1854), more conveniently referred to as The Laws of Thought. Boole died in 1864 at the age of 49 after hurrying to class in the rain and contracting pneumonia.

The title of Boole's 1854 book suggests an ambitious motivation: Because the rational human brain uses logic to think, if we were to find a way in which logic can be represented by mathematics, we would also have a mathematical description of how the brain works. Of course, nowadays this view of the mind seems to us quite naive. (Either that or it's way ahead of its time.)

Boole invented a kind of algebra that looks and acts very much like conventional algebra. In conventional algebra, the operands (which are usually letters) stand for numbers, and the operators (most often + and ×) indicate how these numbers are to be combined. Often we use conventional algebra to solve problems such as this: Anya has 3 pounds of tofu. Betty has twice as much tofu as Anya. Carmen has 5 pounds more tofu than Betty. Deirdre has three times the tofu that Carmen has. How much tofu does Deirdre have?

To solve this problem, we first convert the English to arithmetical statements, using four letters to stand for the pounds of tofu that each of the four women has:

A = 3
B = 2 × A
C = B + 5
D = 3 × C

We can combine these four statements into one statement by substitution and then finally perform the additions and multiplications:

D = 3 × C
D = 3 × (B + 5)
D = 3 × ((2 × A) + 5)
D = 3 × ((2 × 3) + 5)
D = 33

When we do conventional algebra, we follow certain rules. These rules have probably become so ingrained in our practice that we no longer think of them as rules and might even forget their names. But rules indeed underlie all the workings of any form of mathematics.

The first rule is that addition and multiplication are commutative.
That means we can switch around the symbols on each side of the operations:

A + B = B + A
A × B = B × A

By contrast, subtraction and division are not commutative. Addition and multiplication are also associative, that is

A + (B + C) = (A + B) + C
A × (B × C) = (A × B) × C

And finally, multiplication is said to be distributive over addition:

A × (B + C) = (A × B) + (A × C)

Another characteristic of conventional algebra is that it always deals with numbers, such as pounds of tofu or numbers of ducks or distances that a train travels or the ages of family members. It was Boole's genius to make algebra more abstract by divorcing it from concepts of number. In Boolean algebra (as Boole's algebra was eventually called), the operands refer not to numbers but instead to classes. A class is simply a group of things, what in later times came to be known as a set.

Let's talk about cats. Cats can be either male or female. For convenience, we can use the letter M to refer to the class of male cats and F to refer to the class of female cats. Keep in mind that these two symbols do not represent numbers of cats. The number of male and female cats can change by the minute as new cats are born and old cats (regrettably) pass away. The letters stand for classes of cats - cats with specific characteristics. Instead of referring to male cats, we can just say "M."

We can also use other letters to represent the color of the cats: For example, T can refer to the class of tan cats, B can be the class of black cats, W the class of white cats, and O the class of cats of all "other" colors - all cats not in the class T, B, or W.

Finally (at least as far as this example goes), cats can be either neutered or unneutered. Let's use the letter N to refer to the class of neutered cats and U for the class of unneutered cats.

In conventional (numeric) algebra, the operators + and × are used to indicate addition and multiplication. In Boolean algebra, the same + and × symbols are used, and here's where things might get confusing. Everybody knows how to add and multiply numbers in conventional algebra, but how do we add and multiply classes?

Well, we don't actually add and multiply in Boolean algebra. Instead, the + and × symbols mean something else entirely.

The + symbol in Boolean algebra means a union of two classes. A union of two classes is everything in the first class combined with everything in the second class. For example, B + W represents the class of all cats that are either black or white.

The × symbol in Boolean algebra means an intersection of two classes. An intersection of two classes is everything that is in both the first class and the second class. For example, F × T represents the class of all cats that are both female and tan. As in conventional algebra, we can write F × T as F·T or simply FT (which is what Boole preferred). You can think of the two letters as two adjectives strung together: "female tan" cats.

To avoid confusion between conventional algebra and Boolean algebra, sometimes the symbols ∪ and ∩ are used for union and intersection instead of + and ×. But part of Boole's liberating influence on mathematics was to make the use of familiar operators more abstract, so I've decided to stick with his decision not to introduce new symbols into his algebra.

The commutative, associative, and distributive rules all hold for Boolean algebra. What's more, in Boolean algebra the + operator is distributive over the × operator.
This isn't true of conventional algebra:

W + (B × F) = (W + B) × (W + F)

The union of white cats and black female cats is the same as the intersection of two unions: the union of white cats and black cats, and the union of white cats and female cats. This is somewhat difficult to grasp, but it works.

Two more symbols are necessary to complete Boolean algebra. These two symbols might look like numbers, but they're really not because they're sometimes treated a little differently than numbers. The symbol 1 in Boolean algebra means "the universe" - that is, everything we're talking about. In this example, the symbol 1 means "the class of all cats." Thus,

M + F = 1

This means that the union of male cats and female cats is the class of all cats. Similarly, the union of tan cats and black cats and white cats and other colored cats is also the class of all cats:

T + B + W + O = 1

And you achieve the class of all cats this way, too:

N + U = 1

The 1 symbol can be used with a minus sign to indicate the universe excluding something. For example,

1 - M

is the class of all cats except the male cats. The universe excluding all male cats is the same as the class of female cats:

1 - M = F

The other symbol that we need is the 0, and in Boolean algebra the 0 means an empty class - a class of nothing. The empty class results when we take an intersection of two mutually exclusive classes, for example, cats that are both male and female:

F × M = 0

Notice that the 1 and 0 symbols sometimes work the same way in Boolean algebra as in conventional algebra. For example, the intersection of all cats and female cats is the class of female cats:

1 × F = F

The intersection of no cats and female cats is the class of no cats:

0 × F = 0

The union of no cats and all female cats is the class of female cats:

0 + F = F

But sometimes the result doesn't look the same as in conventional algebra. For example, the union of all cats and female cats is the class of all cats:

1 + F = 1

This doesn't make much sense in conventional algebra. Because F is the class of all female cats, and (1 - F) is the class of all cats that aren't female, the union of these two classes is 1:

F + (1 - F) = 1

and the intersection of the two classes is 0:

F × (1 - F) = 0

Historically, this formulation represents an important concept in logic: It's called the Law of Contradiction and indicates that something can't be both itself and the opposite of itself.

Where Boolean algebra really looks different from conventional algebra is in a statement like this:

F × F = F

The statement makes perfect sense in Boolean algebra: The intersection of female cats and female cats is still the class of female cats. But it sure wouldn't look quite right if F referred to a number. Boole considered

X² = X

to be the single statement that differentiates his algebra from conventional algebra. Another Boolean statement that looks funny in terms of conventional algebra is this:

F + F = F

The union of female cats and female cats is still the class of female cats.

Boolean algebra provides a mathematical method for solving the syllogisms of Aristotle. Let's look at the first two-thirds of that famous syllogism again, but now using gender-neutral language:

All persons are mortal;
Socrates is a person.

We'll use P to represent the class of all persons, M to represent the class of mortal things, and S to represent the class of Socrates. What does it mean to say that "all persons are mortal"?
It means that the intersection of the class of all persons and the class of all mortal things is the class of all persons:

P × M = P

It would be wrong to say that P × M = M, because the class of all mortal things includes cats, dogs, and elm trees.

To say, "Socrates is a person," means that the intersection of the class containing Socrates (a very small class) and the class of all persons (a much larger class) is the class containing Socrates:

S × P = S

Because we know from the first equation that P equals (P × M), we can substitute that into the second equation:

S × (P × M) = S

By the associative law, this is the same as

(S × P) × M = S

But we already know that (S × P) equals S, so we can simplify by using this substitution:

S × M = S

And now we're finished. This formula tells us that the intersection of Socrates and the class of all mortal things is S, which means that Socrates is mortal. If we found instead that (S × M) equaled 0, we'd conclude that Socrates wasn't mortal. If we found that (S × M) equaled M, the conclusion would have to be that Socrates was the only mortal thing and everything else was immortal!

Using Boolean algebra might seem like overkill for proving the obvious fact (particularly considering that Socrates proved himself mortal 2400 years ago), but Boolean algebra can also be used to determine whether something satisfies a certain set of criteria. Perhaps one day you walk into a pet shop and say to the salesperson, "I want a male cat, neutered, either white or tan; or a female cat, neutered, any color but white; or I'll take any cat you have as long as it's black." And the salesperson says to you, "So you want a cat from the class of cats represented by the following expression:

(M × N × (W + T)) + (F × N × (1 - W)) + B
Right?” And you say, “Yes! Exactly!”
In verifying that the salesperson is correct, you might want to forgo the
concepts of union and intersection and instead switch to the words OR and AND. I'm capitalizing these words because the words normally represent concepts in English, but they can also represent operations in Boolean algebra. When you form a union of two classes, you're actually accepting things from the first class OR the second class. And when you form an intersection, you're accepting only those things in both the first class AND the second class. In addition, you can use the word NOT wherever you see a 1 followed by a minus sign. In summary,
• The + (previously known as a union) now means OR.
• The x (previously known as an intersection) now means AND.
• The 1 - (previously the universe without something) now means NOT.

So the expression can also be written like this:

(M AND N AND (W OR T)) OR (F AND N AND (NOT W)) OR B
This is very nearly what you said. Notice how the parentheses clarify your
intentions. You want a cat from one of three classes:
(M AND N AND (W OR T)) OR
(F AND N AND (NOT W)) OR
B
With this formula written down, the salesperson can perform something called a Boolean test. Without making a big fuss about it, I've subtly shifted to a somewhat different form of Boolean algebra. In this form of Boolean algebra, letters no longer refer to classes. Instead, the letters can now be assigned numbers. The catch is that they can be assigned only the number 0 or 1. The numeral 1 means Yes, True, this particular cat satisfies these criteria. The numeral 0 means No, False, this cat doesn't satisfy these criteria.
First the salesperson brings out an unneutered tan male. Here's the expression of acceptable cats:

(M × N × (W + T)) + (F × N × (1 - W)) + B

and here's how it looks with 0s and 1s substituted:

(1 × 0 × (0 + 1)) + (0 × 0 × (1 - 0)) + 0
Notice that the only symbols assigned 1s are M and T because the cat is male and tan.
What we must do now is simplify this expression. If it simplifies to 1, the cat satisfies your criteria; if it simplifies to 0, the cat doesn't. While we're simplifying the expression, keep in mind that we're not really adding and multiplying, although generally we can pretend that we are. Most of the same rules apply when + means OR and × means AND. (Sometimes in modern texts the symbols ∧ and ∨ are used for AND and OR instead of × and +. But here's where the + and × signs perhaps make the most sense.)
When the × sign means AND, the possible results are

0 × 0 = 0
0 × 1 = 0
1 × 0 = 0
1 × 1 = 1

In other words, the result is 1 only if both the left operand AND the right operand are 1. This operation works exactly the same way as regular multiplication, and it can be summarized in a little table, similar to the way the addition and multiplication tables were shown in Chapter 8:

AND  0  1
 0   0  0
 1   0  1
When the + sign means OR, the possible results are

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 1

The result is 1 if either the left operand OR the right operand is 1. This operation produces results very similar to those of regular addition, except that in this case 1 + 1 equals 1. The OR operation can be summarized in another little table:

OR  0  1
 0  0  1
 1  1  1
We’re ready to use these tables to calculate the result of the expression
(1 × 0 × 1) + (0 × 0 × 1) + 0 = 0 + 0 + 0 = 0
The result 0 means No, False, this kitty won’t do.
Next the salesperson brings out a neutered white female. The original expression was

(M × N × (W + T)) + (F × N × (1 - W)) + B

Substitute the 0s and 1s again:

(0 × 1 × (1 + 0)) + (1 × 1 × (1 - 1)) + 0

And simplify it:

(0 × 1 × 1) + (1 × 1 × 0) + 0 = 0 + 0 + 0 = 0
And another poor kitten must be rejected.
Next the salesperson brings out a neutered gray female. (Gray qualifies as an "other" color - not white or black or tan.) Here's the expression:

(0 × 1 × (0 + 0)) + (1 × 1 × (1 - 0)) + 0

Now simplify it:

(0 × 1 × 0) + (1 × 1 × 1) + 0 = 0 + 1 + 0 = 1
The final result 1 means Yes, True, a kitten has found a home. (And it was the cutest one too!)
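If you'd rather let a machine do the substituting and simplifying, here's the salesperson's Boolean test as a small Python sketch (the function and argument names are mine). Python's and, or, and not play the roles of ×, +, and 1 -:

    def acceptable(m, f, n, w, t, b):
        # (M x N x (W + T)) + (F x N x (1 - W)) + B, with True = 1 and False = 0
        return (m and n and (w or t)) or (f and n and not w) or b

    # The unneutered tan male:
    print(acceptable(m=True, f=False, n=False, w=False, t=True, b=False))   # False
    # The neutered white female:
    print(acceptable(m=False, f=True, n=True, w=True, t=False, b=False))    # False
    # The neutered gray female (gray is an "other" color, so w, t, b are all False):
    print(acceptable(m=False, f=True, n=True, w=False, t=False, b=False))   # True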
Later that evening, when the kitten is curled up sleeping in your lap, you wonder whether you could have wired some switches and a lightbulb to help you determine whether particular kittens satisfied your criteria. (Yes, you are a strange kid.) Little do you realize that you’re about to make a crucial conceptual breakthrough. You’re about to perform some experiments that will unite the algebra of George Boole with electrical circuitry and thus make possible the design and construction of computers that work with binary numbers. But don’t let that intimidate you.
To begin your experiment, you connect a lightbulb and battery as you would normally, but you use two switches instead of one:
Switches connected in this way-one right after the other-are said to be wired in series. If you close the left switch, nothing happens:
Similarly, if you leave the left switch open and close the right switch, nothing happens. The lightbulb lights up only if both the left switch and the right switch are closed.

The key word here is and. Both the left switch and the right switch must be closed for the current to flow through the circuit.
This circuit is performing a little exercise in logic. In effect, the lightbulb is answering the question "Are both switches closed?" We can summarize the workings of this circuit in the following table:

Left Switch   Right Switch   Lightbulb
Open          Open           Not lit
Open          Closed         Not lit
Closed        Open           Not lit
Closed        Closed         Lit

In the preceding chapter, we saw how binary digits, or bits, can represent information - everything from numbers to the direction of Roger Ebert's thumb. We were able to say that a 0 bit means "Ebert's thumb points down" and a 1 bit means "Ebert's thumb points up." A switch has two positions, so it can represent a bit. We can say that a 0 means "switch is open" and a 1 means "switch is closed." A lightbulb has two states; hence it too can represent a bit. We can say that a 0 means "lightbulb is not lit" and a 1 means "lightbulb is lit." Now we simply rewrite the table:

Left Switch   Right Switch   Lightbulb
0             0              0
0             1              0
1             0              0
1             1              1
Notice that if we swap the left switch and the right switch, the results are the same. We really don’t have to identify which switch is which. So the table can be rewritten to resemble the AND and OR tables that were shown earlier:
Switches
in Series  0  1
    0      0  0
    1      0  1

And indeed, this is the same as the AND table. Check it out:

AND  0  1
 0   0  0
 1   0  1

This simple circuit is actually performing an AND operation in Boolean algebra.
Now try connecting the two switches a little differently:

These switches are said to be connected in parallel. The difference between this and the preceding connection is that this lightbulb will light if you close the top switch:

or close the bottom switch:
or close both switches:
The lightbulb lights if the top switch or the bottom switch is closed. The key word here is or.
Again, the circuit is performing an exercise in logic. The lightbulb answers the question, "Is either switch closed?" The following table summarizes how this circuit works:
Left Switch   Right Switch   Lightbulb
Open          Open           Not lit
Open          Closed         Lit
Closed        Open           Lit
Closed        Closed         Lit

Again, using 0 to mean an open switch or an unlit lightbulb and 1 to mean a closed switch or a lit lightbulb, this table can be rewritten this way:

Left Switch   Right Switch   Lightbulb
0             0              0
0             1              1
1             0              1
1             1              1
Again it doesn't matter if the two switches are swapped, so the table can also be rewritten like this:

Switches
in Parallel  0  1
    0        0  1
    1        1  1

And you've probably already guessed that this is the same as the Boolean OR:

OR  0  1
 0  0  1
 1  1  1
which means that two switches in parallel are performing the equivalent of a Boolean OR operation.
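The correspondence is easy to check by brute force. Here's a short Python sketch (names mine) that treats a closed switch as 1 and a lit bulb as 1:

    def series(left, right):
        return left and right    # current flows only if both switches are closed

    def parallel(left, right):
        return left or right     # current flows if either switch is closed

    print("left right series parallel")
    for left in (0, 1):
        for right in (0, 1):
            print(f"  {left}    {right}     {series(left, right)}       {parallel(left, right)}")

The series column reproduces the AND table, and the parallel column reproduces the OR table.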
When you originally entered the pet shop, you told the salesperson, "I want a male cat, neutered, either white or tan; or a female cat, neutered, any color but white; or I'll take any cat you have as long as it's black," and the salesperson developed this expression:

(M × N × (W + T)) + (F × N × (1 - W)) + B
Now that you know that two switches wired in series perform a logical AND (which is represented by an × sign) and two switches in parallel perform a logical OR (which is represented by the + sign), you can wire up eight switches like so:

Each switch in this circuit is labeled with a letter - the same letters as in the Boolean expression. (W with a bar over it means NOT W and is an alternative way to write 1 - W.) Indeed, if you go through the wiring diagram from left to right starting at the top and moving from top to bottom, you'll encounter the letters in the same order that they appear in the expression. Each × sign in the expression corresponds to a point in the circuit where two switches (or groups of switches) are connected in series. Each + sign in the expression corresponds to a place in the circuit where two switches (or groups of switches) are connected in parallel.
As you’ll recall, the salesperson first brought out an unneutered tan male.
Close the appropriate switches:
Although the M, T, and NOT W switches are closed, we don't have a complete circuit to light up the lightbulb. Next the salesperson brought out a neutered white female:

Again, the right switches aren't closed to complete a circuit. But finally, the salesperson brought out a neutered gray female:
And that's enough to complete the circuit, light up the lightbulb, and indicate that the kitten satisfies all your criteria.
George Boole never wired such a circuit. He never had the thrill of seeing a Boolean expression realized in switches, wires, and lightbulbs. One obstacle, of course, was that the incandescent lightbulb wasn't invented until 15 years after Boole's death. But Samuel Morse had demonstrated his telegraph in 1844 - ten years before the publication of Boole's The Laws of Thought - and it would be simple to substitute a telegraph sounder for the lightbulb in the circuit shown above.
But nobody in the nineteenth century made the connection between the ANDs and ORs of Boolean algebra and the wiring of simple switches in series and in parallel. No mathematician, no electrician, no telegraph operator, nobody. Not even that icon of the computer revolution Charles Babbage (1792-1871), who had corresponded with Boole and knew his work, and who struggled for much of his life designing first a Difference Engine and then an Analytical Engine that a century later would be regarded as the precursors to modern computers. What might have helped Babbage, we know now, was the realization that perhaps instead of gears and levers to perform calculations, a computer might better be built out of telegraph relays.
Yes, telegraph relays.

Chapter Eleven
Gates (Not Bill)
In some far-off distant time, when the twentieth-century history of primitive computing is just a murky memory, someone is likely to suppose that devices known as logic gates were named after the famous cofounder of Microsoft Corporation. Not quite. As we'll soon see, logic gates bear a much greater resemblance to those ordinary gates through which pass water or people. Logic gates perform simple tasks in logic by blocking or letting through the flow of electrical current.
You'll recall how in the last chapter you went into a pet shop and announced, "I want a male cat, neutered, either white or tan; or a female cat, neutered, any color but white; or I'll take any cat you have as long as it's black." This is summarized by the following Boolean expression:

(M × N × (W + T)) + (F × N × (1 - W)) + B

and also by this circuit made up of switches and a lightbulb:

[the eight-switch circuit from the previous chapter]
Such a circuit is sometimes called a network, except that nowadays that word is used much more often to refer to connected computers rather than an assemblage of mere switches.
Although this circuit contains nothing that wasn't invented in the nineteenth century, nobody in that century ever realized that Boolean expressions could be directly realized in electrical circuits. This equivalence wasn't discovered until the 1930s, most notably by Claude Elwood Shannon (born 1916), whose famous 1938 M.I.T. master's thesis was entitled "A Symbolic Analysis of Relay and Switching Circuits." (Ten years later, Shannon's article "The Mathematical Theory of Communication" was the first publication that used the word bit to mean binary digit.)

Prior to 1938, people knew that when you wired two switches in series, both switches had to be closed for current to flow, and when you wired two switches in parallel, one or the other had to be closed. But nobody had shown with Shannon's clarity and rigor that electrical engineers could use all the tools of Boolean algebra to design circuits with switches. In particular, if you can simplify a Boolean expression that describes a network, you can simplify the network accordingly.
For example, the expression that indicates the characteristics you want in a cat looks like this:

(M × N × (W + T)) + (F × N × (1 - W)) + B
Using the associative law, we can reorder the variables that are combined with the AND (×) signs and rewrite the expression this way:

(N × M × (W + T)) + (N × F × (1 - W)) + B
In an attempt to clarify what I'm going to do here, I'll define two new symbols named X and Y:
X = M × (W + T)
Y = F × (1 - W)

Now the expression for the cat that you want can be written using X and Y:

(N × X) + (N × Y) + B

Notice that the N variable appears in both of the first two terms. Using the distributive law, the expression can be rewritten like this:

(N × (X + Y)) + B

Now put the X and Y expressions back in:

(N × ((M × (W + T)) + (F × (1 - W)))) + B
Due to the plethora of parentheses, this expression hardly looks simplified. But there’s one less variable in this expression, which means there’s one less switch in the network. Here’s the revised version:
[the revised network, now with one less switch]
Indeed, it's probably easier to see that this network is equivalent to the earlier one than to verify that the expressions are the same.
Actually, there are still three too many switches in this network. In theory, you need only four switches to define your perfect cat. Why four? Each switch is a bit. You should be able to get by with one switch for the sex (off for male, on for female), another switch that’s on for neutered, off for unneutered, and two more switches for the color. There are four possible colors (white, black, tan, and “other”), and we know that four choices can be defined with 2 bits, so all you need are two color switches. For example, both switches can be off for white, one switch on for black, the other switch on for tan, and both switches on for other colors.
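In other words, the perfect cat fits in 4 bits. Here's one possible encoding as a Python sketch; the particular assignment of colors to bit patterns below is my own arbitrary choice for illustration (the text only requires that the four colors get the four distinct 2-bit values):

    # Assumed bit meanings, for illustration only:
    #   sex: 0 = male, 1 = female
    #   neutered: 0 = no, 1 = yes
    #   color: 00 = white, 01 = black, 10 = tan, 11 = other
    COLORS = {0b00: "white", 0b01: "black", 0b10: "tan", 0b11: "other"}

    def describe(sex, neutered, color):
        return ("female" if sex else "male",
                "neutered" if neutered else "unneutered",
                COLORS[color])

    print(describe(sex=1, neutered=0, color=0b01))
    # ('female', 'unneutered', 'black')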
Let's make a control panel right now for choosing a cat. The control panel is simply four switches (much like the on/off switches you have on your walls for controlling your lights) and a lightbulb mounted in a panel:
[the Dream Kitty Control Panel: four labeled switches and a lightbulb mounted in a panel]

The switches are on (closed) when they're up, and off (open) when they're down. The two switches for the cat's color are labeled somewhat obscurely, I'm afraid, but that's a drawback of reducing this panel to the bare minimum: The left switch of the pair is labeled B; that means that the left switch on

by itself (as shown) indicates the color black. The right switch of the pair is labeled T; that switch on by itself means the color tan. Both switches up means other colors; this choice is labeled O. Both switches down means the color white, indicated by the letter W at the bottom.
In computer terminology, the switches are an input device. Input is information that controls how a circuit behaves. In this case, the input switches correspond to 4 bits of information that describe a cat. The output device is the lightbulb. This bulb lights up if the switches describe a satisfactory cat. The switches shown in the control panel above are set for a female unneutered black cat. This satisfies your criteria, so the lightbulb is lit.
Now all we have to do is design a circuit that makes this control panel work.
You'll recall that Claude Shannon's thesis was entitled "A Symbolic Analysis of Relay and Switching Circuits." The relays he was referring to were quite similar to the telegraph relays that we encountered in Chapter 6. By the time of Shannon's paper, however, relays were being used for other purposes and, in particular, in the vast network of the telephone system.

Like switches, relays can be connected in series and in parallel to perform simple tasks in logic. These combinations of relays are called logic gates. When I say that these logic gates perform simple tasks in logic, I mean as simple as possible. Relays have an advantage over switches in that relays can be switched on and off by other relays rather than by fingers. This means that logic gates can be combined to perform more complex tasks, such as simple functions in arithmetic. Indeed, the next chapter will demonstrate how to wire switches, lightbulbs, a battery, and telegraph relays to make an adding machine (albeit one that works solely with binary numbers).
As you recall, relays were crucial to the workings of the telegraph system. Over long distances, the wires connecting telegraph stations had a very high resistance. Some method was needed to receive a weak signal and send an identical strong signal. The relay did this by using an electromagnet to control a switch. In effect, the relay amplified a weak signal to create a strong signal.
For our purposes, we’re not interested in using the relay to amplify a weak signal. We’re interested only in the idea of a relay being a switch that can be controlled by electricity rather than by fingers. We can wire a relay with a switch, a lightbulb, and a couple of batteries like this:

Notice that the switch at the left is open and the lightbulb is off. When you close the switch, the battery at the left causes current to flow through the many turns of wire around the iron bar. The iron bar becomes magnetic and pulls down a flexible metal contact that connects the circuit to turn on the lightbulb:

When the electromagnet pulls the metal contact, the relay is said to be triggered. When the switch is turned off, the iron bar stops being magnetic, and the metal contact returns to its normal position.
This seems like a rather indirect route to light the bulb, and indeed it is. If we were interested only in lighting the bulb, we could dispense with the relay entirely. But we’re not interested in lighting bulbs. We have a much more ambitious goal.
We're going to be using relays a lot in this chapter (and then hardly at all after the logic gates have been built), so I want to simplify the diagram. We can eliminate some of the wires by using a ground. In this case, the grounds simply represent a common connection; they don't need to be connected to the physical earth:

I know this doesn't look like a simplification, but we're not done yet. Notice that the negative terminals of both batteries are connected to ground. So anywhere we see something like this:
let’s replace it with the capital letter V (which stands for voltage), as we did in Chapters 5 and 6. Now our relay looks like this:
When the switch is closed, a current flows between V and ground through the coils of the electromagnet. This causes the electromagnet to pull the flexible metal contact. That connects the circuit between V, the lightbulb, and ground. The bulb lights up:

These diagrams of the relay show two voltage sources and two grounds, but in all the diagrams in this chapter, all the V's can be connected to one another and all the grounds can be connected to one another. All the networks of relays and logic gates in this chapter and the next will require only one battery, although it might need to be a big battery. For example, the preceding diagram can be redrawn with only one battery like this:

But for what we need to do with relays, this diagram isn't very clear. It's better to avoid the circular circuits and look at the relay - like the control panel earlier - in terms of inputs and outputs:
[the relay drawn as a box with an "Input" (or "In") on the left and an "Output" (or "Out") on the right]

If a current is flowing through the input (for example, if a switch connects the input to V), the electromagnet is triggered and the output has a voltage.

The input of a relay need not be a switch, and the output of a relay need not be a lightbulb. The output of one relay can be connected to the input of another relay, for example, like this:
When you turn the switch on, the first relay is triggered, which then provides a voltage to the second relay. The second relay is triggered and the light goes on:
Connecting relays is the key to building logic gates.
Actually, the lightbulb can be connected to the relay in two ways. Notice
the flexible metal piece that's pulled by the electromagnet. At rest, it's touching one contact; when the electromagnet pulls it, it hits another contact. We've been using that lower contact as the output of the relay, but we could

just as well use the upper contact. When we use this contact, the output of the relay is reversed and the lightbulb is on when the input switch is open:
And when the input switch is closed, the bulb goes out:
Using the terminology of switches, this type of relay is called a double-throw relay. It has two outputs that are electrically opposite-when one has a voltage, the other doesn’t.
By the way, if you're having a tough time visualizing what modern relays look like, you can see a few in conveniently transparent packaging at your local Radio Shack. Some, like the heavy-duty relays with Radio Shack part numbers 275-206 and 275-214, are about the size of ice cubes. The insides are encased in a clear plastic shell, so you can see the electromagnet and the metal contacts. The circuits I'll be describing in this chapter and the next could be built using Radio Shack part number 275-240 relays, which are smaller (about the size of a Chiclet) and cheaper ($2.99 apiece).

Just as two switches can be connected in series, two relays can be connected in series:

The output of the top relay supplies a voltage to the second relay. As you can see, when both switches are open, the lightbulb isn't lit. We can try closing the top switch:

Still the lightbulb doesn’t light because the bottom switch is still open and that relay isn’t triggered. We can try opening the top switch and closing the bottom switch:
The lightbulb is still not lit. The current can’t reach the lightbulb because the first relay isn’t triggered. The only way to get the bulb to light up is to close both switches:


Now both relays are triggered, and current can flow between V, the lightbulb, and ground.
Like the two switches wired in series, these two relays are performing a little exercise in logic. The bulb lights up only if both relays are triggered. These two relays wired in series are known as an AND gate. To avoid excessive drawing, electrical engineers have a special symbol for an AND gate.
That symbol looks like this:

[the AND gate symbol: a D-shaped outline with the two inputs on the flat left side and the output on the rounded right side]

The NAND gate has the following behavior:

NAND  0  1
  0   1  1
  1   1  0

Notice that the output of the NAND gate is opposite the AND gate. The output of the AND gate is 1 only if both inputs are 1; otherwise, the output is 0.

At this point, we've looked at four different ways of wiring relays that have two inputs and one output. Each configuration behaves in a slightly different way. To avoid drawing and redrawing the relays, we've called them logic gates and decided to use the same symbols to represent them that are used by electrical engineers. The output of the particular logic gate depends on the input, which is summarized here:

AND  0  1      OR  0  1
 0   0  0       0  0  1
 1   0  1       1  1  1

NAND  0  1     NOR  0  1
  0   1  1      0   1  0
  1   1  0      1   0  0

So now we have four logic gates and the inverter. Completing this array of tools is just a regular old relay:

[a relay whose output simply follows its input]

This is called a buffer, and this is the symbol for it:

[a triangle pointing from input to output]

It's the same symbol as the inverter but without the little circle. The buffer is remarkable for not doing much. The output of the buffer is the same as the input:

0 -> 0
1 -> 1

But you can use a buffer when an input signal is weak. You'll recall that this was the reason relays were used with the telegraph many years ago. Or a buffer can be used to slightly delay a signal. This works because the relay requires a little time - some fraction of a second - to be triggered.

From here on in the book, you'll see very few drawings of relays. Instead, the circuits that follow will be built from buffers, inverters, the four basic logic gates, and more sophisticated circuits (like the 2-Line-to-4-Line Decoder) built from these logic gates. All these other components are made from relays, of course, but we don't actually have to look at the relays.

Earlier, when building the 2-Line-to-4-Line Decoder, we saw a little circuit that looked like this:

[two inverters feeding the two inputs of an AND gate]

Two inputs are inverted and become inputs to an AND gate. Sometimes a configuration like this is drawn without the inverters:

[an AND gate with little circles on its two inputs]

Notice the little circles at the input to the AND gate. Those little circles mean that the signals are inverted at that point - a 0 (no voltage) becomes a 1 (voltage) and vice versa.

An AND gate with two inverted inputs does exactly the same thing as a NOR gate: The output is 1 only if both inputs are 0.

Similarly, an OR gate with the two inputs inverted is equivalent to a NAND gate: The output is 0 only if both inputs are 1.
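Both equivalences can be verified exhaustively, since each gate has only four possible input combinations. A little Python sketch (the function names are mine):

    def AND(a, b):  return a & b
    def OR(a, b):   return a | b
    def NOT(a):     return 1 - a
    def NAND(a, b): return NOT(AND(a, b))
    def NOR(a, b):  return NOT(OR(a, b))

    for a in (0, 1):
        for b in (0, 1):
            # an AND gate with both inputs inverted behaves like a NOR gate
            assert AND(NOT(a), NOT(b)) == NOR(a, b)
            # an OR gate with both inputs inverted behaves like a NAND gate
            assert OR(NOT(a), NOT(b)) == NAND(a, b)
    print("both equivalences hold for all four input combinations")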
These two pairs of equivalent circuits represent an electrical implementation of De Morgan's Laws. Augustus De Morgan was another Victorian-era mathematician, nine years older than Boole, whose book Formal Logic was published in 1847, the very same day (the story goes) as Boole's The Mathematical Analysis of Logic. Indeed, Boole had been inspired to investigate logic by a very public feud that was being waged between De Morgan and another British mathematician involving accusations of plagiarism. (De Morgan has been exonerated by history.) Very early on, De Morgan recognized the importance of Boole's insights. He unselfishly encouraged Boole and helped him along the way, and is today sadly almost forgotten except for his famous laws.

De Morgan's Laws are most simply expressed this way:

NOT A × NOT B = NOT (A + B)
NOT A + NOT B = NOT (A × B)

A and B are two Boolean operands. In the first expression, they're inverted and then combined with the Boolean AND operator. This is the same as combining the two operands with the Boolean OR operator and then inverting the result (which is the NOR). In the second expression, the two operands are inverted and then combined with the Boolean OR operator. This is the same as combining the operands with the Boolean AND operator and then inverting (which is the NAND).

De Morgan's Laws are an important tool for simplifying Boolean expressions and hence, for simplifying circuits. Historically, this was what Claude Shannon's paper really meant for electrical engineers. But obsessively simplifying circuits won't be a major concern in this book. It's preferable to get things working rather than to get things working as simply as possible. And what we're going to get working next is nothing less than an adding machine.

Chapter Twelve

A Binary Adding Machine

Addition is the most basic of arithmetic operations, so if we want to build a computer (and that is my hidden agenda in this book), we must first know how to build something that adds two numbers together. When you come right down to it, addition is just about the only thing that computers do. If we can build something that adds, we're well on our way to building something that uses addition to also subtract, multiply, divide, calculate mortgage payments, guide rockets to Mars, play chess, and foul up our phone bills.

The adding machine that we'll build in this chapter will be big, clunky, slow, and noisy, at least compared to the calculators and computers of modern life. What's most interesting is that we're going to build this adding machine entirely out of simple electrical devices that we've learned about in previous chapters - switches, lightbulbs, wires, a battery, and relays that have been prewired into various logic gates. This adding machine will contain nothing that wasn't invented at least 120 years ago. And what's really nice is that we don't have to actually build anything in our living rooms; instead, we can build this adding machine on paper and in our minds.

This adding machine will work entirely with binary numbers and will lack some modern amenities. You won't be able to use a keyboard to indicate the numbers you want to add; instead you'll use a row of switches. Rather than a numeric display to show the results, this adding machine will have a row of lightbulbs.

But this machine will definitely add two numbers together, and it will do so in a way that's very much like the way that computers add numbers.

Adding binary numbers is a lot like adding decimal numbers. When you want to add two decimal numbers such as 245 and 673, you break the problem into simpler steps. Each step requires only that you add a pair of decimal digits. In this example, you begin with 5 plus 3. The problem goes a lot faster if you memorized an addition table sometime during your life.

The big difference between adding decimal and binary numbers is that you use a much simpler table for binary numbers:

+  0  1
0  0  1
1  1  10

If you actually grew up with a community of whales and memorized this table in school, you might have chanted aloud:

0 plus 0 equals 0.
0 plus 1 equals 1.
1 plus 0 equals 1.
1 plus 1 equals 0, carry the 1.

You can rewrite the addition table with leading zeros so that each result is a 2-bit value:

+  0   1
0  00  01
1  01  10

Viewed like this, the result of adding a pair of binary numbers is 2 bits, which are called the sum bit and the carry bit (as in "1 plus 1 equals 0, carry the 1").
Now we can divide the binary addition table into two tables, the first one for the sum bit:

+sum  0  1
  0   0  1
  1   1  0

and the second one for the carry bit:

+carry  0  1
   0    0  0
   1    0  1

It's convenient to look at binary addition in this way because our adding machine will do sums and carries separately. Building a binary adding machine requires that we design a circuit that performs these operations. Working solely in binary simplifies the problem immensely because all the parts of a circuit - switches, lightbulbs, and wires - can be binary digits.

As in decimal addition, we add two binary numbers column by column beginning with the rightmost column:

  01100101
+ 10110110
 100011011

Notice that when we add the third column from the right, a 1 is carried over to the next column. This happens again in the sixth, seventh, and eighth columns from the right.

What size binary numbers do we want to add? Since we're building our adding machine only in our minds, we could build one to add very long numbers. But let's be reasonable and decide to add binary numbers up to 8 bits long. That is, we want to add binary numbers that can range from 0000-0000 through 1111-1111, or decimal 0 through 255. The sum of two 8-bit numbers can be as high as 1-1111-1110, or 510.

The control panel for our binary adding machine can look like this:

[a panel with two rows of eight switches and, below them, a row of nine lightbulbs]

We have on this panel two rows of eight switches. This collection of switches is the input device, and we'll use it to "key in" the two 8-bit numbers. In this input device, a switch is off (down) for 0 and on (up) for 1, just like the wall switches in your home. The output device at the bottom of the panel is a row of nine lightbulbs. These bulbs will indicate the answer. An unlit bulb is a 0 and a lit bulb is a 1. We need nine bulbs because the sum of the two 8-bit numbers can be a 9-bit number.

The rest of the adding machine will consist of logic gates wired together in various ways. The switches will trigger the relays in the logic gates, which will then turn on the correct lights. For example, if we want to add 0110-0101 and 1011-0110 (the two numbers shown in the preceding example), we throw the appropriate switches, and the bulbs light up to indicate the answer of 1-0001-1011. (Well, let's hope so, anyway. We haven't built it yet!)

I mentioned in the last chapter that I'll be using lots of relays in this book. The 8-bit adding machine we're building in this chapter requires no fewer than 144 relays - 18 for each of the 8 pairs of bits we're adding together. If I showed you the completed circuit in its entirety, you'd definitely freak. There's no way that anyone could make sense of 144 relays wired together in strange ways. Instead, we're going to approach this problem in stages using logic gates.

Maybe you saw right away a connection between logic gates and binary addition when you looked at the table of the carry bit that results from adding two 1-bit numbers together:

+carry  0  1
   0    0  0
   1    0  1

You might have realized that this was identical to the output of the AND gate shown in the last chapter:

AND  0  1
 0   0  0
 1   0  1

So the AND gate calculates a carry bit for the addition of two binary digits. Aha! We're definitely making progress. Our next step seems to be to persuade some relays to behave like this:

+sum  0  1
  0   0  1
  1   1  0

This is the other half of the problem in adding a pair of binary digits. The sum bit turns out to be not quite as straightforward as the carry bit, but we'll get there.
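Before persuading any relays, we can at least mimic the column-by-column procedure in Python (a sketch of the arithmetic, not of the circuitry; the names are mine):

    a, b = "01100101", "10110110"
    carry, result = 0, ""
    # work from the rightmost column to the leftmost, just as on paper
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result = str(total % 2) + result   # the sum bit for this column
        carry = total // 2                 # the carry bit for the next column
    result = str(carry) + result           # a ninth bit for the final carry
    print(result)   # 100011011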
The first thing to realize is that the OR gate is close to what we want except for the case in the lower right corner:

OR  0  1
 0  0  1
 1  1  1

The NAND gate is also close to what we want except for the case in the upper left corner:

NAND  0  1
  0   1  1
  1   1  0

So let's connect both an OR gate and a NAND gate to the same inputs:

[the A and B inputs wired to both an OR gate and a NAND gate]

The following table summarizes the outputs of these OR and NAND gates and compares that to what we want for the adding machine:

A In  B In  OR Out  NAND Out  What we want
 0     0      0        1           0
 0     1      1        1           1
 1     0      1        1           1
 1     1      1        0           0

Notice that what we want is 1 only if the output from the OR gate and the NAND gate are both 1. This suggests that these two outputs can be an input to an AND gate:

[the outputs of the OR gate and the NAND gate wired as the two inputs to an AND gate]

And that's it. Notice that there are still only two inputs and one output to this entire circuit. The two inputs go into both the OR gate and the NAND gate. The outputs from the OR and NAND gates go into the AND gate, and that gives us exactly what we want:

A In  B In  OR Out  NAND Out  AND Out
 0     0      0        1         0
 0     1      1        1         1
 1     0      1        1         1
 1     1      1        0         0

There's actually a name for what this circuit does. It's called the Exclusive OR gate or, more briefly, the XOR gate. It's called the Exclusive OR gate because the output is 1 if the A input is 1 or the B input is 1, but not both. So, instead of drawing an OR gate, NAND gate, and AND gate, we can use the symbol that electrical engineers use for the XOR gate:

[the XOR gate symbol: an OR gate with an extra curved line across the input side]

It looks very much like the OR gate except that it has another curved line at the input side. The behavior of the XOR gate is shown here:

XOR  0  1
 0   0  1
 1   1  0

The XOR gate is the final logic gate I describe in detail in this book. (A sixth gate sometimes shows up in electrical engineering. It's called the coincidence or equivalence gate because the output is 1 only if the two inputs are the same. The coincidence gate describes an output opposite that of the XOR gate, so this gate's symbol is the same as the XOR gate but with a little circle at the output end.)

Let's review what we know so far. Adding two binary numbers produces a sum bit and a carry bit:

+sum  0  1      +carry  0  1
  0   0  1         0    0  0
  1   1  0         1    0  1

You can use the following two logic gates to get these results:

XOR  0  1      AND  0  1
 0   0  1       0   0  0
 1   1  0       1   0  1

The sum of two binary numbers is given by the output of an XOR gate, and the carry bit is given by the output of an AND gate. So we can combine an AND gate and an XOR gate to add two binary digits called A and B:

[the A and B inputs wired to both an XOR gate, whose output is the sum, and an AND gate, whose output is the carry]

And instead of drawing and redrawing an AND gate and an XOR gate, you can simply draw a box like this:

[a box labeled Half Adder with A In and B In inputs and Sum Out and Carry Out outputs]

This box is labeled Half Adder for a reason. Certainly it adds two binary digits and gives you a sum bit and a carry bit. But the vast majority of binary numbers are longer than 1 bit. What the Half Adder fails to do is add a possible carry bit from a previous addition. For example, suppose we're adding two binary numbers like these:

  1111
+ 1111
 11110

We can use the Half Adder only for the addition of the rightmost column: 1 plus 1 equals 0, carry the 1. For the second column from the right, we really need to add three binary numbers because of the carry. And that goes for all subsequent columns. Each subsequent addition of two binary numbers can include a carry bit from the previous column.
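In sketch form (Python again, with names mine), the XOR construction and the Half Adder are just a few lines:

    def AND(a, b):  return a & b
    def OR(a, b):   return a | b
    def NAND(a, b): return 1 - AND(a, b)

    def XOR(a, b):
        # an OR gate and a NAND gate feeding an AND gate
        return AND(OR(a, b), NAND(a, b))

    def half_adder(a, b):
        # returns (sum bit, carry bit)
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            print(a, "plus", b, "->", half_adder(a, b))
    # 0 plus 0 -> (0, 0)
    # 0 plus 1 -> (1, 0)
    # 1 plus 0 -> (1, 0)
    # 1 plus 1 -> (0, 1)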
Each subsequent addition of two binary numberscan include a carry bit from the previous column.To add three binary numbers, we need two Half Adders and an OR gate,wired this way:Carry In —- A Half S I—– Sum OutA In– B in -Sum OutCarry Out A In Half S I– B Addrr CO 1– B in Adder CO I——Carry Out 138Chapter TwelveTo understand this, begin with the A and B inputs to the first Half Adder at the left. The output is a sum and a carry. That sum must be added to the carry from the previous column, so they’re inputs to the second Half Adder. The sum from the second Half Adder is the final sum. The two Carry Outs from the Half Adders are inputs to an OR gate. You might think another Half Adder is called for here, and that would cenainly work. But if you go through all the possibilities, you’ll find that the Carry Outs from the two Half Adders are never both equal to 1 . The OR gate is sufficient for adding them because the OR gate is the same as the XOR gate if the inputs are never both 1 .Instead of drawing and redrawing that diagram, we can just call it a Full Adder:Carry In A In 8 InFull Adders Sum Our Carry OurThe following table sumam rizes all the possible combinations of inputs to the Full Adder and the resultant outputs:A ID B in Car In Sum Our Car Our 00000 01010 10010 110000 10 01I0010 1I said early on in this chapter that we would need 144 relays for our adding machine. Here’s how I figured that out: Each AND, OR, and NAND gate requires 2 relays. So an XOR gate comprises 6 relays. A Half Adder is an XOR gate and an AND gate, so a Half Adder requires 8 relays. Each Full Adder is two Half Adders and an OR gate, or 1 8 relays. We need 8 FuJI Adders for our 8-bit adding machine. That’s 1 44 relays.Recall our original control panel with the switches and lightbulbs:·!e ltfnutruuu1r+uuu·Uuuu1J ·e e �J -�} e e e e e coA Binary Adding Machine1 3 9We can now start wiring the switches and lightbulbs to the Full Adder. First connect the two rightmost switches and the rightmost lightbulb toa Full Adder:sFull AdderCOl—- Carry OutWhen you begin adding two binary numbers, the first column of digits that you add is different. It’s different because every subsequent column might include a carry bit from the previous column. The first column doesn’t in­ dude a carry bit, which is why the carry input to the Full Adder is connected to ground. That means a 0 bit. The addition of the first pair of binary dig­ its could, of course, result in a carry bit. That carry output is an input to the next column.For the next two digits and the next lightbulb, you use a Full Adder wired this way: CarInas v. L_.Full AddervcoCarry OutL_. The carry output from the first Full Adder is an input to this second Full Adder. Each subsequent column of digits is wired the same way. Each carry output from one column is a carry input to the next column.140Chapter TwelveFinally the eighth and last pair of switches are wired to the last Full Adder: Carry Inv��Cl sFull Adderco vHere the final carry output goes to the ninth lightbulb.We’re done.Here’s another way to look at this assemblage of eight Full Adders, witheach Carry Out serving as input to the next Carry In:. Carry Out8-Bit SumHere’s the complete 8-Bit Adder drawn as one box. The inputs are labeled A0 through A, and 80 through 87• The outputs are labeled S0 through S, (for sum):A Input B lnputCarry In coCarry OutA,…� B,…Bo8-Bit Adder ClSum OutputA BinaryAddingMachine 141This is a common way to label the separate bits of a multibit number. 
The bits A0, B0, and S0 are the least-significant, or rightmost, bits. The bits A7, B7, and S7 are the most-significant, or leftmost, bits. For example, here's how these subscripted letters would apply to the binary number 01101001:

    A7  A6  A5  A4  A3  A2  A1  A0
    0   1   1   0   1   0   0   1

The subscripts start at 0 and get higher for more-significant digits because they correspond to the exponents of powers of two:

    2⁷  2⁶  2⁵  2⁴  2³  2²  2¹  2⁰
    0   1   1   0   1   0   0   1

If you multiply each power of two by the digit below it and add, you'll get the decimal equivalent of 01101001, which is 64 + 32 + 8 + 1, or 105.

Another way an 8-Bit Adder might be drawn uses double-line arrows with an 8 inside to indicate that each represents a group of eight separate signals. They are labeled A7...A0, B7...B0, and S7...S0 to indicate 8-bit numbers.

Once you build one 8-Bit Adder, you can build another. It then becomes easy to cascade them to add two 16-bit numbers: the Carry Out of the adder handling the low 8 bits is connected to the Carry In of the adder handling the high 8 bits. The second adder has as input the most-significant eight digits of the two numbers to be added and creates as output the most-significant eight digits of the result.

And now you might ask, "Is this really the way that computers add numbers together?"

Basically, yes. But not exactly.

First, adders can be made faster than this one. If you look at how this circuit works, a carry output from the least-significant pair of digits is required before the next pair can be added, and a carry output from the second pair is required before the third pair can be added, and so forth. The total speed of the adder is equal to the number of bits times the speed of the Full Adder component. This is called a ripple carry. Faster adders use additional circuitry called a look-ahead carry that speeds up this process.

Second (and most important), computers don't use relays anymore! They did at one time, however. The first digital computers, built beginning in the 1930s, used relays and later vacuum tubes. Today's computers use transistors. When used in computers, transistors basically function the same way relays do, but (as we'll see) they're much faster and much smaller and much quieter and use much less power and are much cheaper. Building an 8-Bit Adder still requires 144 transistors (more if you replace the ripple carry with a look-ahead carry), but the circuit is microscopic.
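As a rough software model of the cascade (numeric, not gate-by-gate), here's a sketch in which each 8-bit adder is treated as a black box producing an 8-bit sum and a carry, with the low adder's Carry Out feeding the high adder's Carry In:

    # An added illustration: two 8-bit adders cascaded into a 16-bit adder.

    def adder_8bit(a, b, carry_in):
        total = a + b + carry_in          # a and b are values 0..255
        return total & 0xFF, total >> 8   # (8-bit sum, carry out)

    def adder_16bit(a, b):
        low_sum, carry = adder_8bit(a & 0xFF, b & 0xFF, 0)
        high_sum, carry_out = adder_8bit(a >> 8, b >> 8, carry)
        return (high_sum << 8) | low_sum, carry_out

    print(adder_16bit(40000, 30000))  # (4464, 1): 70000 overflows 16 bits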
Chapter Thirteen

But What About Subtraction?

After you've convinced yourself that relays can indeed be wired together to add binary numbers, you might ask, "But what about subtraction?" Rest assured that you're not making a nuisance of yourself by asking questions like this; you're actually being quite perceptive. Addition and subtraction complement each other in some ways, but the mechanics of the two operations are different. An addition marches consistently from the rightmost column of digits to the leftmost column. Each carry from one column is added to the next column. We don't carry in subtraction, however; we borrow, and that involves an intrinsically different mechanism, a messy back-and-forth kind of thing.

For example, let's look at a typical borrow-laden subtraction problem:

      253
    - 176
      ???

To do this, we start with the rightmost column. First we see that 6 is bigger than 3, so we have to borrow 1 from the 5 and then subtract 6 from 13, which is 7. Then we have to remember that we borrowed 1 from the 5, so it's really a 4, and this 4 is smaller than 7, so we borrow 1 from the 2 and subtract 7 from 14, which is 7. Then we have to remember that we borrowed 1 from the 2, so it's really a 1, and then we subtract 1 from it to get 0. Our answer is 77:

      253
    - 176
       77

Now how are we ever going to persuade a bunch of logic gates to go through such perverse logic?

Well, we're not going to try. Instead, we're going to use a little trick that lets us subtract without borrowing. This will please Polonius ("Neither a borrower nor a lender be") and the rest of us as well. Moreover, examining subtraction in detail is useful because it directly relates to the way binary codes are used for storing negative numbers in computers.

For this explanation, I need to refer to the two numbers being subtracted. Their proper names are the minuend and the subtrahend. The subtrahend is subtracted from the minuend, and the result is the difference:

      Minuend
    - Subtrahend
      Difference

To subtract without borrowing, you first subtract the subtrahend not from the minuend but from 999:

      999
    - 176
      823

You use 999 here because the numbers have 3 digits. If the numbers had 4 digits, you would use 9999. Subtracting a number from a string of 9s results in a number called the nines' complement. The nines' complement of 176 is 823. And it works in reverse: The nines' complement of 823 is 176. What's nice is this: No matter what the subtrahend is, calculating the nines' complement never requires a borrow.

After you've calculated the nines' complement of the subtrahend, you add it to the original minuend:

      253
    + 823
     1076

And finally you add 1 and subtract 1000:

     1076
    +   1
    -1000
       77

You're finished. The result is the same as before, and never once did you borrow.

Why does this work? The original subtraction problem is

    253 - 176

If any number is both added to and subtracted from this expression, the result will be the same. So let's add 1000 and subtract 1000:

    253 - 176 + 1000 - 1000

This expression is equivalent to

    253 - 176 + 999 + 1 - 1000

Now the various numbers can be regrouped, this way:

    253 + (999 - 176) + 1 - 1000

And this is identical to the calculation I demonstrated using the nines' complement. We replaced one subtraction with two subtractions and two additions, but in the process we got rid of all the nasty borrows.
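Here's a tiny Python sketch of the borrow-free recipe just described; the digit count is fixed at three, as in the example, and the function name is mine:

    # An added illustration of subtraction via the nines' complement.

    def nines_complement(n, digits=3):
        return int(digits * "9") - n      # never needs a borrow

    minuend, subtrahend = 253, 176
    result = minuend + nines_complement(subtrahend) + 1 - 1000
    print(result)   # 77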
What if the subtrahend is larger than the minuend? For example, the subtraction problem could be

      176
    - 253
      ???

Normally, you would look at this and say, "Hmmm. I see that the subtrahend is larger than the minuend, so I have to switch the two numbers around, perform the subtraction, and remember that the result is really a negative number." You might be able to switch them around in your head and write the answer this way:

      176
    - 253
      -77

Doing this calculation without borrowing is a little different from the earlier example. You begin as you did before by subtracting the subtrahend (253) from 999 to get the nines' complement:

      999
    - 253
      746

Now add the nines' complement to the original minuend:

      176
    + 746
      922

At this point in the earlier problem, you were able to add 1 and subtract 1000 to get the final result. But in this case, that strategy isn't going to work well. You would need to subtract 1000 from 923, and that really means subtracting 923 from 1000, and that requires borrowing.

Instead, since we effectively added 999 earlier, let's subtract 999 now:

      922
    - 999
      ???

When we see this, we realize that our answer will be a negative number and that we really need to switch around the two numbers by subtracting 922 from 999. This again involves no borrowing, and the answer is as we expect: 77, but really -77.

This same technique can also be used with binary numbers and is actually simpler than with decimal numbers. Let's see how it works.

The original subtraction problem was 253 minus 176. When these numbers are converted to binary, the problem becomes

      11111101
    - 10110000
      ????????

Step 1. Subtract the subtrahend from 11111111 (which equals 255):

      11111111
    - 10110000
      01001111

When we were working with decimal numbers, the subtrahend was subtracted from a string of nines, and the result was called the nines' complement. With binary numbers, the subtrahend is subtracted from a string of ones, and the result is called the ones' complement. But notice that we don't really have to do a subtraction to calculate the ones' complement. That's because every 0 bit in the original number becomes a 1 bit in the ones' complement, and every 1 bit becomes a 0 bit. For this reason, the ones' complement is also sometimes called the negation, or the inverse. (At this point, you might recall from Chapter 11 that we built something called an inverter that changed a 0 to a 1 and a 1 to a 0.)

Step 2. Add the ones' complement of the subtrahend to the minuend:

       11111101
    +  01001111
      101001100

Step 3. Add 1 to the result:

      101001100
    +         1
      101001101

Step 4. Subtract 100000000 (which equals 256):

      101001101
    - 100000000
       01001101

The result is equivalent to 77 in decimal.

Let's try it again with the two numbers reversed. In decimal, the subtraction problem is 176 minus 253, and in binary it's

      10110000
    - 11111101
      ????????

Step 1. Subtract the subtrahend from 11111111. You get the ones' complement:

      11111111
    - 11111101
      00000010

Step 2. Add the ones' complement of the subtrahend to the minuend:

      10110000
    + 00000010
      10110010

Now 11111111 must be subtracted from the result in some way. When the original subtrahend is smaller than the minuend, you accomplish this task by adding 1 and subtracting 100000000. But you can't subtract this way without borrowing. So instead, we subtract this result from 11111111:

      11111111
    - 10110010
      01001101

Again, this strategy really means that we're just inverting all the bits to get the result. The answer is again 77, but really -77.
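The four binary steps can be sketched in Python like this (an added illustration; XOR with 11111111 plays the role of the inverter, and a False flag means the result is negative and the returned value is its magnitude):

    def subtract_8bit(minuend, subtrahend):
        ones_comp = subtrahend ^ 0xFF          # step 1: flip the bits
        total = minuend + ones_comp            # step 2
        if total + 1 >= 0x100:
            return total + 1 - 0x100, True     # steps 3-4: add 1, drop 256
        # Subtrahend was larger: subtract from 11111111 (flip bits again).
        return total ^ 0xFF, False

    print(subtract_8bit(0b11111101, 0b10110000))  # (77, True)
    print(subtract_8bit(0b10110000, 0b11111101))  # (77, False), meaning -77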
At this point, we have all the knowledge we need to modify the adding machine developed in the last chapter so that it can perform subtraction as well as addition. So that this doesn't become too complex, this new adding and subtracting machine will perform subtractions only when the subtrahend is less than the minuend, that is, when the result is a positive number.

The core of the adding machine was an 8-Bit Adder assembled from logic gates. As you probably recall, the inputs A0 through A7 and B0 through B7 were connected to switches that indicated two 8-bit values to be added. The Carry In input was connected to ground. The S0 through S7 outputs were connected to eight lightbulbs that displayed the result of the addition. Because the addition could result in a 9-bit value, the Carry Out output was also connected to a ninth lightbulb.

In the original control panel, for example, the switches could be set to add 183 (or 10110111) and 22 (00010110), producing the result of 205, or 11001101, as shown in the row of lightbulbs.

The new control panel for adding and subtracting two 8-bit numbers is just slightly modified. It includes an extra switch to indicate whether we want to add or subtract. You turn this switch off for addition and on for subtraction, as labeled. Also, only the rightmost eight lightbulbs are used to display results. The ninth lightbulb is now labeled "Overflow/Underflow." This lightbulb indicates that a number is being calculated that can't be represented by the eight lightbulbs. This will happen if an addition produces a number greater than 255 (that's called an overflow) or if a subtraction produces a negative number (an underflow). A subtraction will produce a negative number if the subtrahend is larger than the minuend.

The major addition to the adding machine is some circuitry that calculates the ones' complement of an 8-bit number. Recall that the ones' complement is equivalent to inverting bits, so something to calculate the ones' complement of an 8-bit number might look as simple as eight inverters.

The problem with that circuit is that it always inverts the bits that enter into it. We're trying to create a machine that does both addition and subtraction, so the circuitry needs to invert the bits only if a subtraction is being performed. A better circuit feeds a single signal labeled Invert into each of eight XOR (exclusive OR) gates. Recall that the XOR gate exhibits the following behavior:

    XOR   0   1
    0     0   1
    1     1   0

So if the Invert signal is 0, the eight outputs of the XOR gates are the same as the eight inputs. For example, if 01100001 is input, then 01100001 is output. If the Invert signal is 1, the eight input signals are inverted: if 01100001 is input, 10011110 is output.

Let's package these eight XOR gates in a box labeled Ones' Complement, with inputs In0 through In7, outputs Out0 through Out7, and the Invert signal. The Ones' Complement box, the 8-Bit Adder box, and a final exclusive OR gate can now be wired together.

Notice the three signals all labeled SUB. This is the Add/Subtract switch. This signal is 0 if an addition is to be performed and 1 if a subtraction is to be performed. For a subtraction, the B inputs (the second row of switches) are all inverted by the Ones' Complement circuit before entering the adder. Also for a subtraction, you add 1 to the result of the addition by setting the CI (Carry In) input of the adder to 1. For an addition, the Ones' Complement circuit has no effect and the CI input is 0.

The SUB signal and the CO (Carry Out) output of the adder also go into an XOR gate that's used to light the Overflow/Underflow lamp. If the SUB signal is 0 (which means an addition is being performed), the lightbulb will be lit if the CO output of the adder is 1. This means that the result of the addition is greater than 255.

If a subtraction is being performed and the subtrahend (the B switches) is less than the minuend (the A switches), it's normal for the CO output from the adder to be 1. This represents the 100000000 that must be subtracted in the final step. So the Overflow/Underflow lamp is lit only if the CO output from the adder is 0. This means that the subtrahend is greater than the minuend and the result is negative. The machine shown above isn't designed to display negative numbers.

You must surely be glad you asked, "But what about subtraction?"
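A numeric sketch of the whole add/subtract machine might look like this (an added illustration, not the book's wiring); notice how the SUB switch does triple duty exactly as described, driving the Ones' Complement box, the adder's Carry In, and the lamp's XOR gate:

    def add_subtract_machine(a, b, sub):
        # sub = 0 for addition, 1 for subtraction (the panel switch)
        b_in = b ^ (0xFF if sub else 0x00)  # eight XOR gates invert B if SUB=1
        total = a + b_in + sub              # SUB also feeds the adder's Carry In
        carry_out = total >> 8
        lamp = carry_out ^ sub              # Overflow/Underflow lightbulb
        return total & 0xFF, lamp

    print(add_subtract_machine(183, 22, 0))   # (205, 0): 183 + 22
    print(add_subtract_machine(253, 176, 1))  # (77, 0):  253 - 176
    print(add_subtract_machine(176, 253, 1))  # (179, 1): underflow, lamp lit
                                              # (the displayed value isn't meaningful)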
I've been talking about negative numbers in this chapter, but I haven't yet indicated what negative binary numbers look like. You might assume that the traditional negative sign is used with binary just as it is in decimal. For example, -77 is written in binary as -1001101. You can certainly do that, but one of the goals in using binary numbers is to represent everything using 0s and 1s, even tiny symbols such as the negative sign.

Of course, you could simply use another bit for the negative sign. You could make that extra bit 1 for a negative number and 0 for a positive number, which would work, although it doesn't go quite far enough. There's another solution for representing negative numbers that also provides a hassle-free method for adding negative and positive numbers together. The drawback of this other method is that you must decide ahead of time how many digits are required for all the numbers you might encounter.

Let's think about this for a moment. The advantage of writing positive and negative numbers the way we normally do is that they can go on forever. We imagine 0 as the middle of an infinite stream of positive numbers going off in one direction and an infinite stream of negative numbers going off in another:

    ... -1,000,000  -999,999 ... -3 -2 -1 0 1 2 3 ... 999,999  1,000,000 ...

But suppose we don't need an infinite number of numbers. Suppose we know at the outset that every number we come across will be within a particular range.

Let's look at a checking account, which is one place people sometimes see negative numbers. Let's assume that we never have as much as $500 in our checking account and that the bank has given us a no-bounce checking limit of $500. This means that the balance in our checking account is always a number somewhere between $499 and -$500. Let's also assume that we never deposit as much as $500, we never write a check for more than $500, and we deal only in dollars and don't care about cents.

This set of conditions means that the range of numbers we deal with in using our checking account is -500 through 499. That's a total of 1000 numbers. This restriction implies that we can use just three decimal digits and no negative sign to represent all the numbers we need. The trick is that we really don't need positive numbers ranging from 500 through 999. That's because we've already established that the maximum positive number we need is 499. So the three-digit numbers from 500 through 999 can actually represent negative numbers. Here's how it works:

    To mean -500, we use 500.
    To mean -499, we use 501.
    To mean -498, we use 502.
    (yada, yada, yada)
    To mean -2, we use 998.
    To mean -1, we use 999.
    To mean 0, we use 000.
    To mean 1, we use 001.
    To mean 2, we use 002.
    (yada, yada, yada)
    To mean 497, we use 497.
    To mean 498, we use 498.
    To mean 499, we use 499.

In other words, every 3-digit number that begins with a 5, 6, 7, 8, or 9 is actually a negative number. Instead of writing the numbers like this:

    -500 -499 -498 ... -4 -3 -2 -1 0 1 2 3 4 ... 497 498 499

we write them this way:

    500 501 502 ... 996 997 998 999 000 001 002 003 004 ... 497 498 499

Notice that this forms a circle of sorts. The lowest negative number (500) looks as if it continues from the highest positive number (499). And the number 999 (which is actually -1) is one less than zero. If we add 1 to 999, we'd normally get 1000. But since we're dealing with only three digits, it's actually 000.
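In software terms the circle is just arithmetic modulo 1000. Here's a sketch of the mapping (an added illustration; the function names are mine):

    def encode(n):                 # -500..499 -> three-digit code
        return n % 1000

    def decode(code):              # three-digit code -> -500..499
        return code - 1000 if code >= 500 else code

    print(encode(-500), encode(-1), encode(0), encode(499))  # 500 999 0 499
    print(decode(998), decode(2))                            # -2 2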
This type of notation is called ten's complement. To convert a 3-digit negative number to ten's complement, we subtract it from 999 and add 1. In other words, the ten's complement is the nines' complement plus one. For example, to write -255 in ten's complement, subtract 255 from 999 to get 744 and then add 1 to get 745.

You've probably heard it said that "Subtraction is merely addition using negative numbers." To which you've probably replied, "Yeah, but you still have to subtract them." Well, using the ten's complement, you don't subtract numbers at all. Everything is addition.

Suppose you have a checking account balance of $143. You write a check for $78. That means you have to add a negative $78 to $143. In ten's complement, -78 is written as 999 - 078 + 1, or 922. So our new balance is $143 + $922, which equals (ignoring the overflow) $65. If we then write a check for $150, we have to add -150, which in ten's complement equals 850. So our previous balance of 065 plus 850 equals 915, our new balance. This is actually equivalent to -$85.

The equivalent system in binary is called two's complement. Let's assume that we're working with 8-bit numbers. These range from 00000000 through 11111111, which normally correspond to decimal numbers 0 through 255. But if you also want to express negative numbers, every 8-bit number that begins with a 1 will actually represent a negative number, as shown in the following table:

    Binary      Decimal
    10000000    -128
    10000001    -127
    10000010    -126
    10000011    -125
    ...
    11111101    -3
    11111110    -2
    11111111    -1
    00000000    0
    00000001    1
    00000010    2
    ...
    01111100    124
    01111101    125
    01111110    126
    01111111    127

The range of numbers that you can represent is now limited to -128 through +127. The most-significant (leftmost) bit is known as the sign bit. The sign bit is 1 for negative numbers and 0 for positive numbers.

To calculate the two's complement, first calculate the ones' complement and then add 1. This is equivalent to inverting all the digits and adding 1. For example, the decimal number 125 is 01111101. To express -125 in two's complement, first invert the digits of 01111101 to get 10000010, and then add 1 to get 10000011. You can verify the result using the preceding table. To go backward, do the same thing: invert all the bits and add 1.
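Here's a sketch of the same rules for 8-bit two's complement (an added illustration; the helper names are mine):

    def negate(byte):
        return ((byte ^ 0xFF) + 1) & 0xFF    # invert all the bits, add 1

    def to_signed(byte):
        # If the sign bit (bit 7) is set, the byte represents byte - 256.
        return byte - 256 if byte & 0x80 else byte

    print(bin(negate(0b01111101)))   # 0b10000011: -125, as in the table
    print(to_signed(0b10000011))     # -125
    print(to_signed(0b01111101))     # 125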
This system gives us a way to express positive and negative numbers without using negative signs. It also lets us freely add positive and negative numbers using only the rules of addition. For example, let's add the binary equivalents of -127 and 124. Using the preceding table as a cheat sheet, this is simply

      10000001
    + 01111100
      11111101

The result is equivalent to -3 in decimal.

What you need to watch out for here is overflow and underflow conditions. That's when the result of an addition is greater than 127 or less than -128. For example, suppose you add 125 to itself:

      01111101
    + 01111101
      11111010

Because the high bit is set to 1, the result must be interpreted as a negative number, specifically the binary equivalent of -6. Something similar happens when -125 is added to itself:

       10000011
    +  10000011
      100000110

We decided at the outset that we're restricting ourselves to 8-bit numbers, so the leftmost digit of the result must be ignored. The rightmost 8 bits are equivalent to +6.

In general, the result of an addition involving positive and negative numbers is invalid if the sign bits of the two operands are the same but the sign bit of the result is different.

Now we have two different ways of using binary numbers. Binary numbers can be either signed or unsigned. Unsigned 8-bit numbers range from 0 through 255. Signed 8-bit numbers range from -128 through 127. Nothing about the numbers themselves will tell you whether they're signed or unsigned. For example, suppose someone says, "I have an 8-bit binary number and the value is 10110110. What's the decimal equivalent?" You must first inquire, "Is that a signed or an unsigned number? It could be -74 or 182."

That's the trouble with bits: They're just zeros and ones and don't tell you anything about themselves.
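The sign-bit rule can be checked in a few lines. This sketch (an added illustration) flags the invalid additions from the examples above:

    def add_signed_8bit(a, b):           # a and b are raw byte values 0..255
        result = (a + b) & 0xFF
        # Invalid when both operands share a sign bit but the result differs.
        overflow = (a & 0x80) == (b & 0x80) != (result & 0x80)
        return result, overflow

    print(add_signed_8bit(0b01111101, 0b01111101))  # 125 + 125: overflow
    print(add_signed_8bit(0b10000011, 0b10000011))  # -125 + -125: overflow
    print(add_signed_8bit(0b10000001, 0b01111100))  # -127 + 124: valid, -3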
Chapter Fourteen

Feedback and Flip-Flops

Everybody knows that electricity makes things move. A brief glance around the average home reveals electric motors in appliances as diverse as clocks, fans, food processors, and compact disc players. Electricity also makes the cones in loudspeakers vibrate, bringing forth sounds, speech, and music from the stereo system and the television set. But perhaps the simplest and most elegant way that electricity makes things move is illustrated by a class of devices that are quickly disappearing as electronic counterparts replace them. I refer to the marvelously retro electric buzzers and bells.

Consider a relay wired with a switch and a battery so that its output feeds back to its own electromagnet. If this looks a little odd to you, you're not imagining things. We haven't seen a relay wired quite like this yet. Usually a relay is wired so that the input is separate from the output. Here it's all one big circle. If you close the switch, a circuit is completed. The completed circuit causes the electromagnet to pull down the flexible contact. But when the contact changes position, the circuit is no longer complete, so the electromagnet loses its magnetism and the flexible contact flips back up, which, of course, completes the circuit again. What happens is this: As long as the switch is closed, the metal contact goes back and forth, alternately closing the circuit and opening it, most likely making a sound. If the contact makes a rasping sound, it's a buzzer. If you attach a hammer to it and provide a metal gong, you'll have the makings of an electric bell.

You can choose from a couple of ways to wire this relay to make a buzzer. Drawn another way, using the conventional voltage and ground symbols, you might recognize in this circuit the inverter from Chapter 11. The circuit can be drawn more simply as an inverter whose output is wired back to its own input.

As you'll recall, the output of an inverter is 1 if the input is 0, and 0 if the input is 1. Closing the switch on this circuit causes the relay in the inverter to alternately open and close. You can also wire the inverter without a switch to go continuously.

This drawing might seem to be illustrating a logical contradiction because the output of an inverter is supposed to be opposite the input, but here the output is the input! Keep in mind, however, that the inverter is actually just a relay, and the relay requires a little bit of time to change from one state to another. So even if the input is the same as the output, the output will soon change, becoming the inverse of the input (which, of course, changes the input, and so forth and so on).

What is the output of this circuit? Well, the output quickly alternates between providing a voltage and not providing a voltage. Or, we can say, the output quickly alternates between 0 and 1.

This circuit is called an oscillator. It's intrinsically different from everything else we've looked at so far. All the previous circuits have changed their state only with the intervention of a human being who closes or opens a switch. The oscillator, however, doesn't require a human being; it basically runs by itself.

Of course, the oscillator in isolation doesn't seem to be very useful. We'll see later in this chapter and in the next few chapters that such a circuit connected to other circuits is an essential part of automation. All computers have some kind of oscillator that makes everything else move in synchronicity.

The output of the oscillator alternates between 0 and 1. A common way to symbolize that fact is with a diagram resembling a square wave. This is understood to be a type of graph: The horizontal axis represents time, and the vertical axis indicates whether the output is 0 or 1. All this is really saying is that as time passes, the output of the oscillator alternates between 0 and 1 on a regular basis. For that reason, an oscillator is sometimes referred to as a clock, because by counting the number of oscillations you can tell time (kind of).

How fast will the oscillator run? That is, how quickly will the metal contact of the relay vibrate back and forth? How many times a second? That obviously depends on how the relay is built. One can easily imagine a big, sturdy relay that clunks back and forth slowly and a small, light relay that buzzes rapidly.

A cycle of an oscillator is defined as the interval during which the output of the oscillator changes and then comes back again to where it started. The time required for one cycle is called the period of the oscillator. Let's assume that we're looking at a particular oscillator that has a period of 0.05 second. We can then label the horizontal axis in seconds, beginning from some arbitrary time we denote as 0.

The frequency of the oscillator is 1 divided by the period. In this example, if the period of the oscillator is 0.05 second, the frequency of the oscillator is 1 ÷ 0.05, or 20 cycles per second. Twenty times per second, the output of the oscillator changes and changes back.

Cycles per second is a fairly self-explanatory term, much like miles per hour or pounds per square inch or calories per serving. But cycles per second isn't used much anymore. In commemoration of Heinrich Rudolph Hertz (1857–1894), who was the first person to transmit and receive radio waves, the word hertz is now used instead. This usage started first in Germany in the 1920s and then expanded into other countries over the decades. Thus, we can say that our oscillator has a frequency of 20 hertz, or (to abbreviate) 20 Hz.

Of course, we just guessed at the actual speed of one particular oscillator. By the end of this chapter, we'll be able to build something that lets us actually measure the speed of an oscillator.

To begin this endeavor, let's look at a pair of NOR gates wired a particular way.
You'll recall that the output of a NOR gate is a voltage only if both inputs aren't voltages:

    NOR   0   1
    0     1   0
    1     0   0

Here's a circuit with two NOR gates, two switches, and a lightbulb. Notice the oddly contorted wiring: The output of the NOR gate on the left is an input to the NOR gate on the right, and the output of the right NOR gate is an input to the left NOR gate. This is a type of feedback. Indeed, just as in the oscillator, an output circles back to become an input. This idiosyncrasy will be a characteristic of most of the circuits in this chapter.

At the outset, the only current flowing in this circuit is from the output of the left NOR gate. That's because both inputs to that gate are 0. Now close the upper switch. The output from the left NOR gate becomes 0, which means the output from the right NOR gate becomes 1 and the lightbulb goes on.

The magic occurs when you now open the upper switch. Because the output of a NOR gate is 0 if either input is 1, the output of the left NOR gate remains the same and the light remains lit.

Now this is odd, wouldn't you say? Both switches are open, the same as in the first drawing, yet now the lightbulb is on. This situation is certainly different from anything we've seen before. Usually the output of a circuit is dependent solely upon the inputs. That doesn't seem to be the case here. Moreover, at this point you can close and open that upper switch and the light remains lit. That switch has no further effect on the circuit because the output of the left NOR gate remains 0.

Now close the lower switch. Because one of the inputs to the right NOR gate is now 1, the output becomes 0 and the lightbulb goes out. The output of the left NOR gate becomes 1. Now you can open the bottom switch and the lightbulb stays off.

We're back where we started. At this time, you can close and open the bottom switch with no further effect on the lightbulb. In summary:

• Closing the top switch causes the lightbulb to go on, and it stays on when the top switch is opened.

• Closing the bottom switch causes the lightbulb to go off, and it stays off when the bottom switch is opened.

The strangeness of this circuit is that sometimes when both switches are open the light is on, and sometimes when both switches are open the light is off. We can say that this circuit has two stable states when both switches are open. Such a circuit is called a flip-flop, a word also used for beach sandals and the tactics of politicians. The flip-flop dates from 1918 with the work of English radio physicist William Henry Eccles (1875–1966) and F. W. Jordan (about whom not much seems to be known).

A flip-flop circuit retains information. It "remembers." In particular, the flip-flop shown previously remembers which switch was most recently closed. If you happen to come upon such a flip-flop in your travels and you see that the light is on, you can surmise that it was the upper switch that was most recently closed; if the light is off, the lower switch was most recently closed.

A flip-flop is very much like a seesaw. A seesaw has two stable states, never staying long in that precarious middle position. You can always tell from looking at a seesaw which side was pushed down most recently.

Although it might not be apparent yet, flip-flops are essential tools. They add memory to a circuit to give it a history of what's gone on before. Imagine trying to count if you couldn't remember anything. You wouldn't know what number you were up to and what number comes next! Similarly, a circuit that counts (which I'll show you later in this chapter) needs flip-flops.
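Here's a Python sketch of the two cross-coupled NOR gates (an added illustration). The loop just re-evaluates the gates until they stop changing, standing in for the time the relays take to settle:

    def nor(a, b):
        return 1 - (a | b)

    def rs_flipflop(s, r, q, qbar):
        # Re-evaluate the cross-coupled gates until the outputs settle.
        while True:
            new_q, new_qbar = nor(r, qbar), nor(s, q)
            if (new_q, new_qbar) == (q, qbar):
                return q, qbar
            q, qbar = new_q, new_qbar

    q, qbar = 0, 1
    q, qbar = rs_flipflop(1, 0, q, qbar)   # close the top switch: light on
    q, qbar = rs_flipflop(0, 0, q, qbar)   # open it again: light stays on
    print(q, qbar)                          # 1 0
    q, qbar = rs_flipflop(0, 1, q, qbar)   # close the bottom switch: light off
    print(q, qbar)                          # 0 1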
There are a couple of different types of flip-flops. What I've just shown is the simplest and is called an R-S (or Reset-Set) flip-flop. The two NOR gates are more commonly drawn and labeled to give the circuit a symmetrical look.

The output that we used for the lightbulb is traditionally called Q. In addition, there's a second output, written as Q with a bar over it and pronounced Q bar (I'll write it Q-bar here), that's the opposite of Q. If Q is 0, then Q-bar is 1, and vice versa. The two inputs are called S for set and R for reset. You can think of these verbs as meaning "set Q to 1" and "reset Q to 0." When S is 1 (which corresponds to closing the top switch in the earlier diagram), Q becomes 1 and Q-bar becomes 0. When R is 1 (corresponding to closing the bottom switch in the earlier diagram), Q becomes 0 and Q-bar becomes 1. When both inputs are 0, the output indicates whether Q was last set or reset. These results are summed up in the following table:

    Inputs      Outputs
    S   R       Q    Q-bar
    1   0       1    0
    0   1       0    1
    0   0       Q    Q-bar
    1   1       Disallowed

This is called a function table or a logic table or a truth table. It shows the outputs that result from particular combinations of inputs. Because there are only two inputs to the R-S flip-flop, the number of combinations of inputs is four. These correspond to the four rows of the table under the headings.

Notice the row second from the bottom, when S and R are both 0: The outputs are indicated as Q and Q-bar. This means that the Q and Q-bar outputs remain what they were before both the S and R inputs became 0. The final row of the table indicates that a situation in which the S and R inputs are both 1 is disallowed or illegal. This doesn't mean you'll get arrested for doing it, but if both inputs are 1 in this circuit, both outputs are 0, which violates the notion of Q-bar being the opposite of Q. So when you're designing circuitry that uses the R-S flip-flop, avoid situations in which the S and R inputs are both 1.

The R-S flip-flop is often drawn as a little box with the two inputs and two outputs labeled S, R, Q, and Q-bar.

The R-S flip-flop is certainly interesting as a first example of a circuit that seems to "remember" which of two inputs was last a voltage. What turns out to be much more useful, however, is a circuit that remembers whether a particular signal was 0 or 1 at a particular point in time.

Let's think about how such a circuit should behave before we actually try to build it. It would have two inputs. Let's call one of them Data. Like all digital signals, the Data input can be 0 or 1. Let's call the other one Hold That Bit, which is the digital equivalent of a person saying "Hold that thought." Normally the Hold That Bit signal is 0, in which case the Data signal has no effect on the circuit. When Hold That Bit is 1, the circuit reflects the value of the Data signal. The Hold That Bit signal can then go back to being 0, at which time the circuit remembers the last value of the Data signal. Any changes in the Data signal have no further effect.

In other words, we want something that has the following function table:

    Inputs                Output
    Data  Hold That Bit   Q
    0     1               0
    1     1               1
    0     0               Q
    1     0               Q

In the first two cases, when the Hold That Bit signal is 1, the output Q is the same as the Data input. In the second two cases, when the Hold That Bit signal is 0, the Q output is the same as it was before.
Notice in the second two cases that when Hold That Bit is 0, the Q output is the same regardless of what the Data input is. The function table can be simplified a little, like this:

    Inputs                Output
    Data  Hold That Bit   Q
    0     1               0
    1     1               1
    X     0               Q

The X means "don't care." It doesn't matter what the Data input is, because if the Hold That Bit input is 0, the output Q is the same as it was before.

Implementing a Hold That Bit signal based on our existing R-S flip-flop requires that we add two AND gates at the input end. Recall that the output of an AND gate is 1 only if both inputs are 1. In this arrangement, the Q output starts out at 0 and the Q-bar output at 1.

As long as the Hold That Bit signal is 0, the Set signal has no effect on the outputs. Similarly, the Reset signal has no effect. Only when the Hold That Bit signal is 1 will this circuit function the same way as the normal R-S flip-flop shown earlier. It then behaves like a normal R-S flip-flop because the output of the upper AND gate is the same as the Reset signal, and the output of the lower AND gate is the same as the Set signal.

But we haven't yet achieved our goal. We want only two inputs, not three. How is this done? If you recall the original function table of the R-S flip-flop, the case in which Set and Reset were both 1 was disallowed, so we want to avoid that. And it doesn't make much sense for the Set and Reset signals to both be 0, because that's simply the case in which the output didn't change; we can accomplish the same thing in this circuit by setting Hold That Bit to 0.

So it makes sense that if Set is 1, Reset is 0; and if Set is 0, Reset is 1. A signal called Data can be equivalent to Set, and the Data signal inverted can be the Reset signal.

With both inputs at 0, the output Q is 0 (which means that Q-bar is 1). As long as Hold That Bit is 0, the Data input has no effect on the circuit. But when Hold That Bit is 1, the circuit reflects the value of the Data input: The Q output becomes the same as the Data input, and Q-bar is the opposite. Now Hold That Bit can go back to being 0. The circuit now remembers the value of Data when Hold That Bit was last 1, regardless of how Data changes. The Data signal could, for example, go back to 0 with no effect on the output.

This circuit is called a level-triggered D-type flip-flop. The D stands for Data. Level-triggered means that the flip-flop saves the value of the Data input when the Hold That Bit input is at a particular level, in this case 1. (We'll look at an alternative to level-triggered flip-flops shortly.)

Usually when a circuit like this appears in a book, the input isn't labeled Hold That Bit. It's usually labeled Clock. Of course, this signal isn't a real clock, but it might sometimes have clocklike attributes, which means that it might tick back and forth between 0 and 1 on a regular basis. But for now, the Clock input simply indicates when the Data input is to be saved. And usually when the function table is shown, Data is abbreviated as D and Clock is abbreviated as Clk:

    Inputs        Outputs
    D   Clk       Q    Q-bar
    0   1         0    1
    1   1         1    0
    X   0         Q    Q-bar

This circuit is also known as a level-triggered D-type latch, and that term simply means that the circuit latches onto one bit of data and keeps it around for further use.
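In software, the latch's behavior comes down to a few lines. This sketch (an added illustration) models only the behavior in the function table, not the individual gates:

    class DLatch:
        def __init__(self):
            self.q = 0

        def update(self, data, clock):
            if clock == 1:       # level-triggered: transparent while Clock is 1
                self.q = data
            return self.q        # while Clock is 0, Data has no effect

    latch = DLatch()
    print(latch.update(data=1, clock=1))  # 1: Data saved
    print(latch.update(data=0, clock=0))  # 1: held, Data ignored
    print(latch.update(data=0, clock=1))  # 0: new Data saved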
The circuit can also be referred to as a 1-bit memory. I'll demonstrate in Chapter 16 how very many of these flip-flops can be wired together to provide many bits of memory.

Saving a multibit value in latches is often useful. Suppose you want to use the adding machine in Chapter 12 to add three 8-bit numbers together. You'd key in the first number on the first set of switches and the second number on the second set of switches as usual, but then you'd have to write down the result. You'd then have to key in that result on one set of switches and key in the third number on the other set of switches. You really shouldn't have to key in an intermediate result. You should be able to use it directly from the first calculation.

Let's solve this problem using latches. Let's assemble eight latches in a box. Each of the eight latches uses two NOR gates, two AND gates, and one inverter, as shown previously. The Clock inputs are all connected. The resultant package, an 8-Bit Latch, is capable of saving 8 bits at once. The eight inputs on the top are labeled D0 through D7, and the eight outputs on the bottom are labeled Q0 through Q7. The input at the left is the Clock. The Clock signal is normally 0. When the Clock signal is 1, the 8-bit value on the D inputs is transferred to the Q outputs. When the Clock signal goes back to 0, that 8-bit value stays there until the next time the Clock signal is 1. The 8-Bit Latch can also be drawn with the eight Data inputs and eight Q outputs grouped together.

Recall the 8-Bit Adder. Normally (ignoring what we did with subtraction in the last chapter), the eight A inputs and eight B inputs are connected to switches, the CI (Carry In) input is connected to ground, and the eight S (Sum) outputs and CO (Carry Out) are connected to lightbulbs.

In this revised version, the eight S outputs of the 8-Bit Adder are connected to both the lightbulbs and the D inputs of the 8-Bit Latch. A switch labeled Save serves as the Clock input of the latch to save a result from the adder.

A box labeled 2-Line-to-1-Line Selector lets you choose with a switch whether you want the B inputs to the adder to come from the second row of switches or from the Q outputs of the latch. You close the switch to select the outputs from the 8-Bit Latch. The 2-Line-to-1-Line Selector uses eight 1-bit selectors, each made of two AND gates and an OR gate.

If the Select input is 1, the output of the OR gate is the same as the B input. That's because the output of the top AND gate is the same as the B input, and the output of the bottom AND gate is 0. Similarly, if the Select input is 0, the output is the same as the A input. This is summed up in the following function table:

    Inputs            Output
    Select  A   B     Q
    0       0   X     0
    0       1   X     1
    1       X   0     0
    1       X   1     1

The box shown in the revised adding machine comprises eight of these 1-bit selectors. All the Select inputs are wired together.

This revised adding machine isn't handling the Carry Out signal very well. If the addition of two numbers causes the Carry Out signal to be 1, the signal is ignored when the next number is added in. One possible solution is to make the Adder, the Latch, and the Selector all 16 bits wide, or at least wider than the largest sum you'll encounter. I won't really be solving this problem until Chapter 17.
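Here's a numeric sketch (an added illustration, not the book's wiring) of the latch-plus-selector arrangement used to add three numbers without rekeying the intermediate result:

    def selector(select, a, b):
        # Per bit this is (B AND Select) OR (A AND NOT Select);
        # numerically it just picks one of the two 8-bit values.
        return b if select else a

    latch = 0                                # Q outputs of the 8-Bit Latch
    b_switches = 27
    sum_out = (100 + selector(0, b_switches, latch)) & 0xFF  # 100 + 27 = 127
    latch = sum_out                          # press Save: latch the result
    sum_out = (55 + selector(1, b_switches, latch)) & 0xFF   # 55 + 127 = 182
    print(sum_out)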
A more interesting approach to the adding machine eliminates an entire row of eight switches. But first we need to modify the D-type flip-flop slightly by adding an OR gate and an input signal called Clear. The Clear input is normally 0. But when it's 1, the Q output becomes 0. This signal forces Q to be 0 regardless of the other input signals, in effect clearing the flip-flop.

Why do we need this, you might ask? Why can't we clear the flip-flop by setting the Data input to 0 and the Clock input to 1? Well, maybe we can't control exactly what's going into the Data input. Maybe we have a set of eight of these latches wired to the outputs of an 8-Bit Adder, with the switch labeled Add now controlling the Clock input of the latch.

You might find this adder easier to use than the previous one, particularly if you need to add a long list of numbers. You begin by pressing the Clear switch. That action causes the output of the latches to be 0, turning off all the lights and also setting the second set of inputs to the 8-Bit Adder to all 0s. You key in the first number and press the Add button. That number appears on the lights. You then key in the second number and again press the Add button. The number set up by the switches is added to the previous total, and it appears on the lights. Just continue keying in more numbers and pressing the Add switch.
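Behaviorally, the Clear-and-Add machine is just a running total. Here's a sketch of the accumulate loop (an added illustration):

    total = 0                          # press Clear: all lights go off
    for switches in (27, 100, 55):     # key in each number and press Add
        total = (total + switches) & 0xFF
        print(total)                   # the lights show 27, then 127, then 182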
I mentioned earlier that the D-type flip-flop we designed was level-triggered. This means that the Clock input must change its level from 0 to 1 in order for the value of the Data input to be stored in the latch. But during the time that the Clock input is 1, the Data input can change; any changes in the Data input while the Clock input is 1 will be reflected in the values of the Q and Q-bar outputs.

For some applications, a level-triggered Clock input is quite sufficient. But for other applications, an edge-triggered Clock input is preferred. An edge trigger causes the outputs to change only when the Clock makes a transition from 0 to 1. As with the level-triggered flip-flop, when the Clock input is 0, any changes to the Data input don't affect the outputs. The difference in an edge-triggered flip-flop is that changes to the Data input also don't affect the outputs when the Clock input is 1. The Data input affects the outputs only at the instant that the Clock changes from 0 to 1.

An edge-triggered D-type flip-flop is constructed from two stages of R-S flip-flops, wired together so that the Clock input controls both the first stage and the second stage. But notice that the clock is inverted in the first stage. This means that the first stage works exactly like a D-type flip-flop except that the Data input is stored when the Clock is 0. The outputs of the first stage are inputs to the second stage, and these are saved when the Clock is 1. The overall result is that the Data input is saved when the Clock changes from 0 to 1.

Let's take a closer look. Here's the flip-flop at rest, with both the Data and Clock inputs at 0 and the Q output at 0. Now change the Data input to 1. This changes the first flip-flop stage, because the inverted Clock input is 1. But the second stage remains unchanged, because the uninverted Clock input is 0. Now change the Clock input to 1. This causes the second stage to change, and the Q output goes to 1. The difference is that the Data input can now change (for example, back to 0) without affecting the Q output. The Q and Q-bar outputs can change only at the instant that the Clock input changes from 0 to 1.

The function table of the edge-triggered D-type flip-flop requires a new symbol, an arrow pointing up (↑). This symbol indicates a signal making a transition from 0 to 1:

    Inputs        Outputs
    D   Clk       Q    Q-bar
    0   ↑         0    1
    1   ↑         1    0
    X   0         Q    Q-bar
    X   1         Q    Q-bar

The arrow indicates that the output Q becomes the same as the Data input when the Clock makes a transition from 0 to 1. This is known as a positive transition of the Clock signal. (A negative transition is the transition from 1 to 0.) In the flip-flop's diagram, a little angle bracket on the Clk input indicates that the flip-flop is edge triggered.

Now I want to show you a circuit using the edge-triggered D-type flip-flop that you can't duplicate with the level-triggered version. You'll recall the oscillator that we constructed at the beginning of this chapter; its output alternates between 0 and 1. Let's connect the output of the oscillator to the Clock input of the edge-triggered D-type flip-flop, and let's connect the Q-bar output to the D input.

The output of the flip-flop is itself an input to the flip-flop. It's feedback upon feedback! (In practice, this could present a problem. The oscillator is constructed out of a relay that's flipping back and forth as fast as it can. The output of the oscillator is connected to the relays that make up the flip-flop. These other relays might not be able to keep up with the speed of the oscillator. To avoid these problems, let's assume that the relay used in the oscillator is much slower than the relays used elsewhere in these circuits.)

To see what happens in this circuit, let's trace through the changes. At the start, say the Clock input is 0 and the Q output is 0. That means the Q-bar output is 1, which is connected back to the D input. When the Clock input changes from 0 to 1, the Q output becomes the same as the D input, or 1. But because the Q-bar output then changes to 0, the D input also changes to 0. The Clock input changes back to 0 without affecting the outputs. Now the Clock input changes to 1 again. Because the D input is 0, the Q output becomes 0 and the Q-bar output becomes 1, so the D input also becomes 1.

What's happening here can be summed up very simply: Every time the Clock input changes from 0 to 1, the Q output changes, either from 0 to 1 or from 1 to 0. The situation is clearer if we look at the timing diagram: When the Clock input goes from 0 to 1, the value of D (which is the same as Q-bar) is transferred to Q, thus also changing Q-bar and D in preparation for the next transition of the Clock from 0 to 1.

If the frequency of the oscillator is 20 Hz (which means 20 cycles per second), the frequency of the Q output is half that, or 10 Hz. For this reason, such a circuit, in which the Q-bar output is routed back to the Data input of a flip-flop, is also known as a frequency divider.
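Here's a sketch of the divider (an added illustration): an edge-triggered D flip-flop whose D input is fed from its own Q-bar output, so it toggles on every positive transition of the Clock:

    class EdgeTriggeredD:
        def __init__(self):
            self.q, self.prev_clock = 0, 0

        def update(self, d, clock):
            if self.prev_clock == 0 and clock == 1:   # positive transition only
                self.q = d
            self.prev_clock = clock
            return self.q

    ff = EdgeTriggeredD()
    for clock in (0, 1, 0, 1, 0, 1, 0, 1):
        q = ff.update(d=1 - ff.q, clock=clock)        # D is fed from Q-bar
        print(clock, q)   # Q changes half as often as the Clock does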
Of course, the output from the frequency divider can be the Clock input of another frequency divider to divide the frequency once again. Consider an arrangement of three of them, and look at the four signals labeled across the top of that diagram: the Clock and the three Q outputs. I'll admit that I've started and ended the diagram at an opportune spot, but there's nothing dishonest about it: The circuit will repeat this pattern over and over again. But do you recognize anything familiar about it?

I'll give you a hint. Let's label these signals with 0s and 1s. Do you see it yet? Try turning the diagram 90 degrees clockwise, and read the 4-bit numbers going across. Each of them corresponds to a decimal number from 0 through 15:

    Binary  Decimal
    0000    0
    0001    1
    0010    2
    0011    3
    0100    4
    0101    5
    0110    6
    0111    7
    1000    8
    1001    9
    1010    10
    1011    11
    1100    12
    1101    13
    1110    14
    1111    15

Thus, this circuit is doing nothing less than counting in binary numbers, and the more flip-flops we add to the circuit, the higher it will count. I pointed out in Chapter 8 that in a sequence of increasing binary numbers, each column of digits alternates between 0 and 1 at half the frequency of the column to the right. The counter mimics this. At each positive transition of the Clock signal, the outputs of the counter are said to increment, that is, to increase by 1.

Let's string eight flip-flops together and put them in a box labeled 8-Bit Ripple Counter. This is called a ripple counter because the output of each flip-flop becomes the Clock input of the next flip-flop. Changes ripple through the stages sequentially, and the flip-flops at the end might be delayed a little in changing. More sophisticated counters are synchronous, in which all the outputs change at the same time.

I've labeled the outputs Q0 through Q7. These are arranged so that the output from the first flip-flop in the chain (Q0) is at the far right. Thus, if you connected lightbulbs to these outputs, you could read an 8-bit number.

A timing diagram of such a counter could show all eight outputs separately, or it could show them together. At each positive transition of the Clock, some Q outputs might change and some might not, but together they reflect increasing binary numbers.

I said earlier in this chapter that we'd discover some way to determine the frequency of an oscillator. This is it. If you connect an oscillator to the Clock input of the 8-Bit Counter, the counter will show you how many cycles the oscillator has gone through. When the total reaches 11111111 (255 in decimal), it goes back to 00000000. Probably the easiest way to use this counter to determine the frequency of an oscillator is to connect eight lightbulbs to the outputs of the 8-Bit Counter. Now wait until all the outputs are 0 (that is, when none of the lightbulbs is lit) and start a stopwatch. Stop the stopwatch when all the lights go out again. That's the time required for 256 cycles of the oscillator. Say it's 10 seconds. The frequency of the oscillator is thus 256 ÷ 10, or 25.6 Hz.
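Here's a behavioral sketch of the ripple counter (an added illustration, modeled at the level of toggles rather than gates): each stage flips on every pulse it sees, and the ripple continues to the next stage only when a stage completes a full cycle, which in the relay circuit comes from clocking each stage with the previous stage's output:

    stages = [0] * 8                     # Q0 (rightmost) through Q7

    def clock_pulse():
        for i in range(8):
            stages[i] ^= 1               # this stage toggles
            if stages[i] == 1:           # no completed cycle: ripple stops here
                break

    for _ in range(5):
        clock_pulse()
        print("".join(str(b) for b in reversed(stages)))
        # prints 00000001, 00000010, 00000011, 00000100, 00000101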
As flip-flops gain features, they also gain in complexity. This one is called an edge-triggered D-type flip-flop with preset and clear. The Preset and Clear inputs override the Clock and Data inputs. Normally these two inputs are 0. When the Preset input is 1, Q becomes 1 and Q-bar becomes 0. When the Clear input is 1, Q becomes 0 and Q-bar becomes 1. (Like the Set and Reset inputs of an R-S flip-flop, Preset and Clear shouldn't be 1 at the same time.) Otherwise, this behaves like a normal edge-triggered D-type flip-flop:

    Inputs                  Outputs
    Pre  Clr  D   Clk       Q    Q-bar
    1    0    X   X         1    0
    0    1    X   X         0    1
    0    0    0   ↑         0    1
    0    0    1   ↑         1    0
    0    0    X   0         Q    Q-bar

We have now persuaded telegraph relays to add, subtract, and count in binary numbers. This is quite an accomplishment, particularly considering that all the hardware we've been using was available more than a hundred years ago. We have still more to discover. But let's now take a short break from building things and have another look at number bases.

Chapter Fifteen

Bytes and Hex

The two improved adding machines of the last chapter illustrate clearly the concept of data paths. Throughout the circuitry, 8-bit values move from one component to another. Eight-bit values are inputs to the adders, latches, and data selectors, and also outputs from these units. Eight-bit values are also defined by switches and displayed by lightbulbs. The data path in these circuits is thus said to be 8 bits wide. But why 8 bits? Why not 6 or 7 or 9 or 10?

The simple answer is that these improved adding machines were based on the original adding machine in Chapter 12, which worked with 8-bit values. But there's really no reason why it had to be built that way. Eight bits just seemed at the time to be a convenient amount, a nice biteful of bits, if you will. And perhaps I was being just a little bit sneaky, for I now confess that I knew all along (and perhaps you did as well) that 8 bits of data are known as a byte.

The word byte originated at IBM, probably around 1956. The word had its origins in the word bite but was spelled with a y so that nobody would mistake it for bit. For a while, a byte meant simply the number of bits in a particular data path. But by the mid-1960s, in connection with the development of IBM's System/360 (their large complex of business computers), the word came to mean a group of 8 bits.

As an 8-bit quantity, a byte can take on values from 00000000 through 11111111. These values can represent positive integers from 0 through 255, or if two's complements are used to represent negative numbers, they can represent both positive and negative integers in the range -128 through 127. Or a particular byte can simply represent one of 2⁸, or 256, different things. It turns out that 8 is, indeed, a nice bite size of bits. The byte is right, in more ways than one. One reason that IBM gravitated toward 8-bit bytes was the ease of storing numbers in a format known as BCD (which I'll describe in Chapter 23). But as we'll see in the chapters ahead, quite by coincidence a byte is ideal for storing text, because most written languages around the world (with the exception of the ideographs used in Chinese, Japanese, and Korean) can be represented with fewer than 256 characters. A byte is also ideal for representing gray shades in black-and-white photographs, because the human eye can differentiate approximately 256 shades of gray.
And where 1 byte is inadequate (for representing, for example, the aforementioned ideographs of Chinese, Japanese, and Korean), 2 bytes, which allow the representation of 2¹⁶, or 65,536, things, usually works just fine.

Half a byte, that is, 4 bits, is sometimes referred to as a nibble (and is sometimes spelled nybble), but this word doesn't come up in conversation nearly as often as byte.

Because bytes show up a lot in the internals of computers, it's convenient to be able to refer to their values in as succinct a manner as possible. The eight binary digits 10110110, for example, are certainly explicit but hardly succinct.

We could always refer to bytes by their decimal equivalents, of course, but that requires converting from binary to decimal: not a particularly nasty calculation, but certainly a nuisance. I showed one approach in Chapter 8 that's fairly straightforward. Because each binary digit corresponds to a power of 2, we can simply write down the digits of the binary number and the powers of 2 underneath. Multiply each column and add up the products. Here's the conversion of 10110110:

       1     0     1     1     0     1     1     0
     ×128   ×64   ×32   ×16   ×8    ×4    ×2    ×1
     128  +  0  +  32 +  16 +  0  +  4  +  2  +  0  = 182

Converting a decimal number to binary is a bit more awkward. You start with the decimal number and divide by decreasing powers of 2. For each division, the quotient is a binary digit and the remainder is divided by the next smallest power of 2. Here's the conversion of 182 back to binary:

     182    54    54    22    6     6     2     0
    ÷128   ÷64   ÷32   ÷16   ÷8    ÷4    ÷2    ÷1
       1     0     1     1    0     1     1     0

Chapter 8 has a more extensive description of this technique. Regardless, converting between binary and decimal is usually not something that can be done without paper and pencil or lots of practice.

In Chapter 8, we also learned about the octal, or base-8, number system. Octal uses only the digits 0, 1, 2, 3, 4, 5, 6, and 7. Converting between octal and binary is a snap. All you need remember is the 3-bit equivalent of each octal digit:

    Binary  Octal
    000     0
    001     1
    010     2
    011     3
    100     4
    101     5
    110     6
    111     7

If you have a binary number (such as 10110110), start at the rightmost digits. Each group of 3 bits is an octal digit:

    10  110  110
     2   6    6

So the byte 10110110 can be expressed as the octal digits 266. This is certainly more succinct, and octal is indeed one good method for representing bytes. But octal has a little problem.

The binary representations of bytes range from 00000000 through 11111111. The octal representations of bytes range from 000 through 377. As is clear in the preceding example, 3 bits correspond to the middle and rightmost octal digits, but only 2 bits correspond to the leftmost octal digit. This means that the octal representation of a 16-bit number

    1  011  001  111  000  101
    1   3    1    7    0    5

isn't the same as the octal representations of the 2 bytes that compose the 16-bit number:

    10  110  011      11  000  101
     2   6    3        3   0    5

In order for the representations of multibyte values to be consistent with the representations of the individual bytes, we need to use a system in which each byte is divided into equal numbers of bits. That means we need to divide each byte into four values of 2 bits each (that would be base 4) or two values of 4 bits each (base 16).
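Python's built-in formatter makes a quick check on these octal conversions (an added illustration):

    print(format(0b10110110, "o"))           # 266
    print(format(0b1011001111000101, "o"))   # 131705
    # The same 16 bits as two separate bytes give different octal digits,
    # which is octal's little problem:
    print(format(0b10110011, "o"), format(0b11000101, "o"))   # 263 305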
Base 16. Now that's something we haven't looked at yet, and for good reason. The base-16 number system is called hexadecimal, and even the word itself is a mess. Most words that begin with the hexa- prefix (such as hexagon or hexapod or hexameter) refer to six of something. Hexadecimal is supposed to mean sixteen. And even though The Microsoft Manual of Style for Technical Publications clearly states, "Do not abbreviate as hex," everyone always does and I will too.

That's not the only peculiarity of hexadecimal. In decimal, we count like this:

    0 1 2 3 4 5 6 7 8 9 10 11 12...

In octal, you'll recall, we no longer need digits 8 and 9:

    0 1 2 3 4 5 6 7 10 11 12...

Similarly, the base-4 number system also doesn't need 4, 5, 6, or 7:

    0 1 2 3 10 11 12...

And binary, of course, needs only 0 and 1:

    0 1 10 11 100...

But hexadecimal is different because it requires more digits than decimal. Counting in hexadecimal goes something like this:

    0 1 2 3 4 5 6 7 8 9 ? ? ? ? ? ? 10 11 12...

where 10 (pronounced one-zero) is actually 16 TEN. The question marks indicate that we need six more symbols to display hexadecimal numbers. What are these symbols? Where do they come from? Well, they weren't handed down to us in tradition like the rest of our number symbols, so the rational thing to do is make up six new symbols, for example:

[Figure: six invented symbols for the hexadecimal digits ten through fifteen: a cowboy hat, a football, a dozen donuts, a black cat, a full moon, and a knife]

Unlike the symbols used for most of our numbers, these have the benefit of being easy to remember and identify with the actual quantities they represent. There's a 10-gallon cowboy hat, a football (11 players on a team), a dozen donuts, a black cat (associated with unlucky 13), a full moon that occurs about a fortnight (14 days) after the new moon, and a knife that reminds us of the assassination of Julius Caesar on the ides (the 15th day) of March.

Each byte can be expressed as two hexadecimal digits. In other words, a hexadecimal digit is equivalent to 4 bits, or 1 nibble. [A table here showed the conversion among binary, hexadecimal (written with the invented symbols), and decimal; the version with the real hexadecimal digits appears below.]

Here's how to represent the byte 10110110 in hexadecimal:

    1011  0110
    [the football symbol]  6

And it doesn't matter if we're dealing with multibyte numbers:

    1011 0110  1100 0101
    [football] 6  [dozen donuts] 5

One byte is always represented by a pair of hexadecimal digits.

Unfortunately (or perhaps, much to your relief), we really aren't going to be using footballs and donuts to write hexadecimal numbers. It could have been done that way, but it wasn't. Instead, the hexadecimal system ensures that everybody gets really confused and stays that way. Those six missing hexadecimal digits are actually represented by the first six letters of the Latin alphabet, like this:

    0 1 2 3 4 5 6 7 8 9 A B C D E F 10 11 12...

The following table shows the real conversion between binary, hexadecimal, and decimal:

    Binary   Hexadecimal   Decimal
     0000         0            0
     0001         1            1
     0010         2            2
     0011         3            3
     0100         4            4
     0101         5            5
     0110         6            6
     0111         7            7
     1000         8            8
     1001         9            9
     1010         A           10
     1011         B           11
     1100         C           12
     1101         D           13
     1110         E           14
     1111         F           15

The byte 10110110 can thus be represented by the hexadecimal number B6 without your drawing a football. As you'll recall from previous chapters, I've been indicating number bases by subscripts, such as

    10110110 TWO

for binary, and

    2312 FOUR

for quaternary, and

    266 EIGHT

for octal, and

    182 TEN

for decimal. To continue the same system, we can use B6 SIXTEEN for hexadecimal. But that's clearly excessive. Fortunately, several other, terser methods of denoting hexadecimal numbers are common. You can indicate the numbers this way:

    B6 HEX
In this book, I'll be using mostly a very common approach, which is a lowercase h following the number, like so:

    B6h

In a hexadecimal number, the positions of each digit correspond to powers of 16. Reading from the rightmost digit to the leftmost, the digits count the number of ones, the number of sixteens, the number of two hundred fifty-sixes, the number of four thousand ninety-sixes, and the number of sixty-five thousand five hundred thirty-sixes. The hexadecimal number 9A48Ch is

    9A48Ch = 9 × 10000h + A × 1000h + 4 × 100h + 8 × 10h + C × 1h

This can be written using powers of 16:

    9A48Ch = 9 × 16⁴ + A × 16³ + 4 × 16² + 8 × 16¹ + C × 16⁰

Or using the decimal equivalents of those powers:

    9A48Ch = 9 × 65,536 + A × 4096 + 4 × 256 + 8 × 16 + C × 1

Notice that there's no ambiguity in writing the single digits of the number (9, A, 4, 8, and C) without a subscript to indicate the number base. A 9 is a 9 whether it's decimal or hexadecimal. And an A is obviously hexadecimal - equivalent to 10 in decimal.

Converting all the digits to decimal lets us actually do the calculation:

    9A48Ch = 9 × 65,536 + 10 × 4096 + 4 × 256 + 8 × 16 + 12 × 1

And the answer is 631,948. This is how hexadecimal numbers are converted to decimal.

Here's a template for converting any 4-digit hexadecimal number to decimal: write the four digits across the top, multiply them by 4096, 256, 16, and 1 respectively, and add up the products. For example, here's the conversion of 79ACh. Keep in mind that the hexadecimal digits A and C are decimal 10 and 12, respectively:

        7        9        A        C
    ×4096     ×256      ×16       ×1
    28,672 +  2,304 +   160 +    12  =  31,148

Converting decimal numbers to hexadecimal generally requires divisions. If the number is 255 or smaller, you know that it can be represented by 1 byte, which is two hexadecimal digits. To calculate those two digits, divide the number by 16 to get the quotient and the remainder. Let's use an earlier example - the decimal number 182. Divide 182 by 16 to get 11 (which is a B in hexadecimal) with a remainder of 6. The hexadecimal equivalent is B6h.

If the decimal number you want to convert is smaller than 65,536, the hexadecimal equivalent will have four digits or fewer. Here's a template for converting such a number to hexadecimal: divide successively by 4096, 256, 16, and 1. You start by putting the entire decimal number in the box in the upper left corner. That's your first dividend. Divide by 4096, the first divisor. The quotient goes in the box below the dividend, and the remainder goes in the box to the right of the dividend. That remainder is the new dividend that you divide by 256. Here's the conversion of 31,148 back to hexadecimal:

    31,148 ÷ 4096 =  7, remainder 2,476
     2,476 ÷  256 =  9, remainder   172
       172 ÷   16 = 10, remainder    12
        12 ÷    1 = 12

Of course, decimal numbers 10 and 12 correspond to hexadecimal A and C. The result is 79ACh.

One problem with this technique is that you probably want to use a calculator for the divisions, but calculators don't show remainders. If you divide 31,148 by 4096 on a calculator, you'll get 7.6044921875. To calculate the remainder, you need to multiply 4096 by 7 (to get 28,672) and subtract that from 31,148. Or multiply 4096 by 0.6044921875, the fractional part of the quotient. (On the other hand, some calculators can convert between decimal and hexadecimal.)
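If your calculator won't cooperate, a short program will do the repeated division for you. Here's a little Python sketch, purely my own illustration of the technique:

    def to_hex(n):
        """Convert a nonnegative decimal integer to hexadecimal by
        repeatedly dividing by 16 and collecting the remainders."""
        digits = "0123456789ABCDEF"
        if n == 0:
            return "0"
        result = ""
        while n > 0:
            n, remainder = divmod(n, 16)       # quotient and remainder at once
            result = digits[remainder] + result
        return result

    print(to_hex(182))      # prints: B6
    print(to_hex(31148))    # prints: 79AC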
Another approach to converting decimal numbers through 65,535 to hex involves first separating the number into 2 bytes by dividing by 256. Then for each byte, divide by 16. Start at the top. With each division, the quotient goes in the box to the left below the divisor, and the remainder goes in the box to the right. For example, here's the conversion of 51,966:

    51,966 ÷ 256 = 202, remainder 254
       202 ÷  16 =  12, remainder  10
       254 ÷  16 =  15, remainder  14

The hexadecimal digits are 12, 10, 15, and 14, or CAFE, which looks more like a word than a number! (And if you go there, you may want to order your coffee 56,495.)

As for every other number base, there's an addition table associated with hexadecimal:

     +   0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
     0   0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
     1   1  2  3  4  5  6  7  8  9  A  B  C  D  E  F 10
     2   2  3  4  5  6  7  8  9  A  B  C  D  E  F 10 11
     3   3  4  5  6  7  8  9  A  B  C  D  E  F 10 11 12
     4   4  5  6  7  8  9  A  B  C  D  E  F 10 11 12 13
     5   5  6  7  8  9  A  B  C  D  E  F 10 11 12 13 14
     6   6  7  8  9  A  B  C  D  E  F 10 11 12 13 14 15
     7   7  8  9  A  B  C  D  E  F 10 11 12 13 14 15 16
     8   8  9  A  B  C  D  E  F 10 11 12 13 14 15 16 17
     9   9  A  B  C  D  E  F 10 11 12 13 14 15 16 17 18
     A   A  B  C  D  E  F 10 11 12 13 14 15 16 17 18 19
     B   B  C  D  E  F 10 11 12 13 14 15 16 17 18 19 1A
     C   C  D  E  F 10 11 12 13 14 15 16 17 18 19 1A 1B
     D   D  E  F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C
     E   E  F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D
     F   F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E

You can use the table and normal carry rules to add hexadecimal numbers:

      4A3378E2
    + 877AB982
      D1AE3264

You'll recall from Chapter 13 that you can use two's complements to represent negative numbers. If you're dealing with 8-bit signed values in binary, the negative numbers all begin with 1. In hexadecimal, 2-digit signed numbers are negative if they begin with 8, 9, A, B, C, D, E, or F because the binary representations of these hexadecimal digits all begin with 1. For example, 99h could represent either decimal 153 (if you know you're dealing with 1-byte unsigned numbers) or decimal -103 (if you're dealing with signed numbers).

Or the byte 99h could actually be the number 99 in decimal! This has a certain appeal to it, of course, but it seems to violate everything we've learned so far. I'll explain how it works in Chapter 23.
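A program makes quick work of these interpretations too. This Python sketch (again, only an illustration of mine) checks the addition above and reads the byte 99h both ways:

    def as_signed(byte):
        # In two's complement, the bytes 80h through FFh are negative.
        return byte - 256 if byte >= 0x80 else byte

    print(0x99)                            # unsigned: prints 153
    print(as_signed(0x99))                 # signed: prints -103

    # The hexadecimal addition shown above, checked by machine:
    print(hex(0x4A3378E2 + 0x877AB982))    # prints: 0xd1ae3264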
But next I must talk about memory.

Chapter Sixteen
An Assemblage of Memory
As we rouse ourselves from sleep every morning, memory fills in the blanks. We remember where we are, what we did the day before, and what we plan to do today. These memories might come in a
rush or a dribble, and maybe after some minutes a few lapses might persist ("Funny, I don't remember wearing my socks to bed"), but all in all we can usually reassemble our lives and achieve enough continuity to commence living another day.
Of course, human memory isn't very orderly. Try to remember something about high school geometry, and you're likely to start thinking about the kid who sat in front of you or the day there was a fire drill just as the teacher was about to explain what QED meant.
Nor is human memory foolproof. Indeed, writing was probably invented specifically to compensate for the failings of human memory. Perhaps last night you suddenly woke up at 3:00 A.M. with a great idea for a screenplay. You grabbed the pen and paper you keep by your bed specifically for that purpose, and you wrote it down so you wouldn't forget. The next morning you can read the brilliant idea and start work on the screenplay. ("Boy meets girl w. car chase & explosions"? That's it?) Or maybe not.
We write and we later read. We save and we later retrieve. We store and we later access. The function of memory is to keep the information intact between those two events. Anytime we store information, we're making use of different types of memory. Paper is a good medium for storing textual information, and magnetic tape works well for music and movies.

Telegraph relays too - when assembled into logic gates and then flip-flops - can store information. As we've seen, a flip-flop is capable of storing 1 bit. This isn't a whole lot of information, but it's a start. For once we know how to store 1 bit, we can easily store 2, or 3, or more.
In Chapter 14, we encountered the level-triggered D-type flip-flop, which is made out of an inverter, two AND gates, and two NOR gates:

[Figure: the level-triggered D-type flip-flop circuit]
When the Clock input is 1, the Q output is the same as the Data input. But when the Clock input goes to 0, the Q output holds the last value of the Data input. Further changes to the Data input don't affect the outputs until the Clock input goes to 1 again. The logic table of the flip-flop is the following:
    Inputs      Outputs
    D   Clk     Q   Q-bar
    0    1      0     1
    1    1      1     0
    X    0      Q   Q-bar
In Chapter 14, this flip-flop was featured in a couple of different circuits, but in this chapter it will be used in only one way - to store 1 bit of information. For that reason, I'm going to rename the inputs and outputs so that they'll be more in accordance with that purpose:
‘\
This is the same flip-flop, but now the Q output is named Data Out, and the Clock input (which started out in Chapter 14 as Hold That Bit) is named

Write. Just as we might write down some information on paper, the Write signal causes the Data In signal to be written into or stored in the circuit. Normally, the Write input is 0 and the Data In signal has no effect on the output. But whenever we want to store the Data In signal in the flip-flop, we make the Write input 1 and then 0 again. As I mentioned in Chapter 14, this type of circuit is also called a latch because it latches onto data. Here's how we might represent a 1-bit latch without drawing all of the individual components:
[Figure: a 1-bit latch drawn as a box with W (Write) and DI (Data In) inputs and a DO (Data Out) output]
It’s fairly easy to assemble multiple 1-bit latches into a multibit latch. All you have to do is connect the Write signals:
[Figure: eight 1-bit latches side by side with their Write signals connected together]
This 8-bit latch has eight inputs and eight outputs. In addition, the latch has a single input named Write that’s normally 0. To save an 8-bit value in this latch, make the Write input 1 and then 0 again. This latch can also be drawn as a single box, like so:
[Figure: the 8-bit latch drawn as a single box with eight Data inputs, a Write input, and eight Data outputs]
Or to be more consistent with the 1-bit latch, it can be drawn this way:

[Figure: the 8-bit latch box with Data In, Write, and Data Out signals]
Another way of assembling eight 1-bit latches isn't quite as straightforward as this. Suppose we want only one Data In signal and one Data Out signal. But we want the ability to save the value of the Data In signal at eight different times during the day, or maybe eight different times during the next minute. And we also want the ability to later check those eight values by looking at just one Data Out signal.
In other words, rather than saving one 8-bit value as in the 8-bit latch, we want to save eight separate 1-bit values.
Why do we want to do it this way? Well, maybe because we have only one lightbulb.
We know we need eight 1-bit latches. Let's not worry right now about how data actually gets stored in these latches. Let's focus first on checking the Data Out signals of these eight latches using only one lightbulb. Of course, we could always test the output of each latch by manually moving the lightbulb from latch to latch, but we'd prefer something a bit more automated than that. In fact, we'd like to use switches to select which of the eight 1-bit latches we want to look at.
How many switches do we need? If we want to select something from eight items, we need three switches. Three switches can represent eight different values: 000, 001, 010, 011, 100, 101, 110, and 111.
So here are our eight 1-bit latches, three switches, a lightbulb, and something else that we need in between the switches and the lightbulb:
[Figure: the Data Out signals of eight 1-bit latches feeding a mysterious box labeled "What Is This?", with three switches on its left and a lightbulb at its output]

The “something else” is that mysterious box with eight inputs on top and three inputs on the left. By closing and opening the three switches, we can select which of the eight inputs is routed to the output at the bottom of the box. This output lights up the lightbulb.
So what exactly is "What Is This?"? We've encountered something like it before, although not with so many inputs. It's similar to a circuit we used in Chapter 14 in the first revised adding machine. At that time, we needed something that let us select whether a row of switches or the output from a latch was used as an input to the adder. In that chapter, it was called a 2-Line-to-1-Line Selector. Here we need an 8-Line-to-1-Line Data Selector:
[Figure: the 8-to-1 Selector as a box with eight Data Inputs on top, three Select Inputs at the left, and one Output at the bottom]
The 8-to-1 Selector has eight Data inputs (shown at the top) and three Select inputs (shown at the left). The Select inputs choose which of the Data inputs appears at the Output. For example, if the Select inputs are 000, the Output is the same as D₀. If the Select inputs are 111, the Output is the same as D₇. If the Select inputs are 101, the Output is the same as D₅. Here's the logic table:
    Inputs          Output
    S₂  S₁  S₀        Q
     0   0   0        D₀
     0   0   1        D₁
     0   1   0        D₂
     0   1   1        D₃
     1   0   0        D₄
     1   0   1        D₅
     1   1   0        D₆
     1   1   1        D₇

An Assemblage of Memory 195 The 8-to-1 Selector isbuiltfromthreeinverters,eight 4-input ANDgates,
andan 8-input ORgate,likethis:
[Figure: the 8-to-1 Selector circuit: Data inputs D₀ through D₇ feed eight 4-input AND gates, which are gated by the Select inputs S₀, S₁, S₂ and their inverses; the AND gate outputs feed an 8-input OR gate that produces the Output]
Now, this is a fairly hairy circuit, but perhaps just one example will convince you that it works. Suppose S₂ is 1, S₁ is 0, and S₀ is 1. The inputs to the sixth AND gate from the top include S₀, S̄₁, and S₂, all of which are 1. No other AND gate has these three inputs, so all the other AND gates will have an output of 0. The sixth AND gate from the top will possibly have an output of 0 if D₅ is 0. Or it will have an output of 1 if D₅ is 1. The same goes for the OR gate at the far right. Thus, if the Select inputs are 101, the Output is the same as D₅.
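In a program, this whole hairy circuit collapses into a single indexing operation: the three Select bits form a number from 0 through 7, and that number picks the Data input. Here's a Python sketch of the behavior (mine, for illustration only):

    def selector_8_to_1(data, s2, s1, s0):
        """Model the 8-to-1 Selector: data holds the eight bits D0
        through D7; the Select bits pick which one becomes the Output."""
        index = 4 * s2 + 2 * s1 + s0     # the Select bits form a 3-bit number
        return data[index]

    d = [0, 1, 0, 1, 1, 1, 0, 0]         # D0 through D7
    print(selector_8_to_1(d, 1, 0, 1))   # Select inputs 101: prints D5, which is 1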
Let’ s recap what we’re trying to do here . We’re trying to wire eight 1-bit latches so that they can be individually written to u sing a single Data In signal and individually examined using a single Data Out signal. We’ve already established that we can choose a Data Output signal from one of the eight latche s by u sing an 8 -to- 1 Selector, as shown on the following page .

[Figure: the Data Out signals of the eight 1-bit latches feeding the eight Data inputs of an 8-to-1 Selector, whose Output is the single Data Out]
We ‘re halfway fini shed . Now that we’ve e stabli shed what we need for the output side , let’ s look at the input side.
The input side involves the Data input signals and the Write signal. On the input side of the latches, we can connect all the Data input signals together. But we can't connect the eight Write signals together because we want to be able to write into each latch individually. We have a single Write signal that must be routed to one (and only one) of the latches:
[Figure: a single Write signal and a single Data In signal entering another mysterious box labeled "What Is This?", whose eight outputs go to the Write inputs of the eight 1-bit latches; the Data In signals of the latches are connected together]
To accomplish this task, we need another circuit that looks somewhat similar to the 8-to-1 Selector but actually does the opposite. This is the

3-to-8 Decoder. We've also seen a simple Data Decoder before - when wiring the switches to select the color of our ideal cat in Chapter 11.
The 3-to-8 Decoder has eight Outputs. At any time, all but one of the Outputs are 0. The exception is the Output that's selected by the S₀, S₁, and S₂ inputs. This Output is the same as the Data Input.
[Figure: the 3-to-8 Decoder circuit: the Data In signal and the Select inputs S₀, S₁, S₂ (with their inverses) feed eight 4-input AND gates producing the Outputs O₀ through O₇]
Again, notice that the inputs to the sixth AND gate from the top include S₀, S̄₁, and S₂. No other AND gate has these three inputs. So if the Select inputs are 101, then all the other AND gates will have an output of 0. The sixth AND gate from the top will possibly have an output of 0 if the Data Input is 0 or an output of 1 if the Data Input is 1. Here's the complete logic table:
    Inputs         Outputs
    S₂  S₁  S₀     O₇    O₆    O₅    O₄    O₃    O₂    O₁    O₀
     0   0   0      0     0     0     0     0     0     0   Data
     0   0   1      0     0     0     0     0     0   Data    0
     0   1   0      0     0     0     0     0   Data    0     0
     0   1   1      0     0     0     0   Data    0     0     0
     1   0   0      0     0     0   Data    0     0     0     0
     1   0   1      0     0   Data    0     0     0     0     0
     1   1   0      0   Data    0     0     0     0     0     0
     1   1   1    Data    0     0     0     0     0     0     0
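Modeled in a program, the decoder is the selector turned inside out: the Select bits choose which output receives the Data In. Here's a matching Python sketch (again, just an illustration):

    def decoder_3_to_8(data_in, s2, s1, s0):
        """Model the 3-to-8 Decoder: all eight Outputs are 0 except the
        one selected by the Select bits, which carries the Data In."""
        outputs = [0] * 8
        outputs[4 * s2 + 2 * s1 + s0] = data_in
        return outputs                    # O0 through O7

    print(decoder_3_to_8(1, 1, 0, 1))     # prints: [0, 0, 0, 0, 0, 1, 0, 0]
                                          # O5 carries the Data In; the rest are 0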

And here’s the complete circuit with the 8 latches:
[Figure: the complete circuit with the 8 latches: the 3-bit Address feeds both a 3-to-8 Decoder, which routes the Write signal to one of the eight 1-bit latches (all sharing the Data In signal), and an 8-to-1 Selector, which routes one latch's output to Data Out]
Notice that the three Select signals to the Decoder and the Selector are the same and that I've also labeled those three signals the Address. Like a post office box number, this 3-bit address determines which of the eight 1-bit latches is being referenced. On the input side, the Address input determines which latch the Write signal will trigger to store the Data input. On the output side (at the bottom of the figure), the Address input controls the 8-to-1 Selector to select the output of one of the eight latches.
This configuration of latches is sometimes known as read/write memory, but more commonly as random access memory, or RAM (pronounced the same as the animal). This particular RAM configuration stores eight separate 1-bit values. It can be represented this way:
[Figure: the 8×1 RAM drawn as a box with Address, Data In, and Write inputs and a Data Out output]
It’s called memory because it retains information. It’s called read/write memory because you can store a new value in each latch (that is, write the value) and because you can determine what’s stored in each latch (that is, you can later read the value). It’s called random access memory because each

of the eight latches can be read from or written to simply by changing the Address inputs. In contrast, some other types of memory have to be read sequentially - that is, you'd have to read the value stored at address 100 before you could read the value stored at address 101.
A particular configuration of RAM is often referred to as a RAM array. This particular RAM array is organized in a manner called in abbreviated form 8 × 1 (pronounced eight by one). Each of the eight values in the array is 1 bit. Multiply the two values to get the total number of bits that can be stored in the RAM array.
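The whole arrangement - decoder, latches, selector - boils down to a very short program. Here's a Python sketch of the 8×1 RAM's behavior (an illustration only; the real thing is relays, not Python):

    class RAM8x1:
        """Eight 1-bit latches behind a 3-bit address."""
        def __init__(self):
            self.latches = [0] * 8       # all latches start cleared

        def write(self, address, data_in):
            # The 3-to-8 Decoder routes the Write signal to one latch.
            self.latches[address] = data_in

        def read(self, address):
            # The 8-to-1 Selector routes one latch to Data Out.
            return self.latches[address]

    ram = RAM8x1()
    ram.write(0b101, 1)       # store a 1 at address 101
    print(ram.read(0b101))    # prints: 1
    print(ram.read(0b000))    # prints: 0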
RAM arrays can be combined in various ways. For example, you can take two 8 × 1 RAM arrays and arrange them so that they are addressed in the same way:
[Figure: two 8×1 RAM arrays sharing their Address and Write inputs, each with its own Data In and Data Out signals]
The Address and Write inputs of the two 8 × 1 RAM arrays are connected, so the result is an 8 × 2 RAM array:

[Figure: the 8×2 RAM box with a 3-bit Address, two Data In signals, a Write signal, and two Data Out signals]
This RAM array stores eight values, but each of them is 2 bits in size.
Or the two 8 × 1 RAM arrays can be combined in much the same way that the individual latches were combined - by using a 2-to-1 Selector and a 1-to-2 Decoder, as shown below.

[Figure: the two 8×1 RAM arrays combined with a 1-to-2 Decoder on the Data In side and a 2-to-1 Selector on the Data Out side, both controlled by a Select signal]
The Select input that goes to both the Decoder and the Selector essentially selects between the two 8 × 1 RAM arrays. It's really a fourth address line. So this is actually a 16 × 1 RAM array:
[Figure: the wiring of the two 8×1 RAM arrays with the Select signal serving as a fourth address bit, and the result drawn as a 16×1 RAM box with Address, Data In, Write, and Data Out]
This RAM array stores 16 values, each of which is 1 bit.
The number of values that a RAM array stores is directly related to the number of Address inputs. With no Address inputs (which is the case with the 1-bit latch and the 8-bit latch), only one value can be stored. With one Address input, two values are possible. With two Address inputs, four values are stored. With three Address inputs, eight values, and with four Address inputs, sixteen values. The relationship is summed up by this equation:
    Number of values in RAM array = 2^(Number of Address inputs)

I've demonstrated how small RAM arrays can be constructed, and it
shouldn’t be difficult to imagine much larger ones. For example
[Figure: a 1024×8 RAM box with ten Address inputs, eight Data In signals, a Write signal, and eight Data Out signals]
This RAM array stores a total of 8192 bits, organized as 1024 values of eight bits each. There are ten Address inputs because 2¹⁰ equals 1024. There are eight Data inputs and eight Data outputs.
In other words, this RAM array stores 1024 bytes. It’s like a post office with 1024 post office boxes. Each one has a different 1-byte value inside (which may or may not be better than junk mail).
One thousand twenty-four bytes is known as a kilobyte, and herein lies much confusion. The prefix kilo (from the Greek khilioi, meaning a thousand) is most often used in the metric system. For example, a kilogram is 1000 grams and a kilometer is 1000 meters. But here I'm saying that a kilobyte is 1024 bytes - not 1000 bytes.
The problem is that the metric system is based on powers of 10, and binary numbers are based on powers of 2, and never the twain shall meet. Powers of 10 are 10, 100, 1000, 10000, 100000, and so on. Powers of 2 are 2, 4, 8, 16, 32, 64, and so on. There is no integral power of 10 that equals some integral power of 2.
But every once in a while they do come close. Yes, 1000 is fairly close to 1024, or to put it more mathematically using an "approximately equal to" sign:

    2¹⁰ ≈ 10³
Nothing is magical about this relationship. All it implies is that a particular power of 2 is approximately equal to a particular power of 10. This little quirk allows people to conveniently refer to a kilobyte of memory when they really mean 1024 bytes.
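Two lines of Python show how close the powers come - and how the gap grows as the exponents climb (my own quick check):

    print(2**10, 10**3)    # prints: 1024 1000
    print(2**20, 10**6)    # prints: 1048576 1000000, almost 5 percent apart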
Kilobyte is abbreviated K or KB. The RAM array shown above can be said to store 1024 bytes or 1 kilobyte or 1K or 1 KB.
What you don’t say is that a 1-KB RAM array stores 1000 bytes, or (in English) “one thousand bytes.” It’s more than a thousand-it’s 1024. To sound like you know what you’re talking about, you say either ” 1 K” or “one kilobyte . ”
One kilobyte of memory has eight Data inputs, eight Data outputs, and ten Address inputs. Because the bytes are accessed by ten Address inputs, the RAM array stores 2¹⁰ bytes. Whenever we add another address input, we

double the amount of memory. Each line of the following sequence represents a doubling of memory:
    1 kilobyte = 1024 bytes = 2¹⁰ bytes ≈ 10³ bytes
    2 kilobytes = 2048 bytes = 2¹¹ bytes
    4 kilobytes = 4096 bytes = 2¹² bytes
    8 kilobytes = 8192 bytes = 2¹³ bytes
    16 kilobytes = 16,384 bytes = 2¹⁴ bytes
    32 kilobytes = 32,768 bytes = 2¹⁵ bytes
    64 kilobytes = 65,536 bytes = 2¹⁶ bytes
    128 kilobytes = 131,072 bytes = 2¹⁷ bytes
    256 kilobytes = 262,144 bytes = 2¹⁸ bytes
    512 kilobytes = 524,288 bytes = 2¹⁹ bytes
    1,024 kilobytes = 1,048,576 bytes = 2²⁰ bytes ≈ 10⁶ bytes
Note that the numbers of kilobytes shown on the left are also powers of 2. With the same logic that lets us call 1024 bytes a kilobyte, we can also refer to 1024 kilobytes as a megabyte. (The Greek word megas means great.) Megabyte is abbreviated MB. And the memory doubling continues:
    1 megabyte = 1,048,576 bytes = 2²⁰ bytes ≈ 10⁶ bytes
    2 megabytes = 2,097,152 bytes = 2²¹ bytes
    4 megabytes = 4,194,304 bytes = 2²² bytes
    8 megabytes = 8,388,608 bytes = 2²³ bytes
    16 megabytes = 16,777,216 bytes = 2²⁴ bytes
    32 megabytes = 33,554,432 bytes = 2²⁵ bytes
    64 megabytes = 67,108,864 bytes = 2²⁶ bytes
    128 megabytes = 134,217,728 bytes = 2²⁷ bytes
    256 megabytes = 268,435,456 bytes = 2²⁸ bytes
    512 megabytes = 536,870,912 bytes = 2²⁹ bytes
    1,024 megabytes = 1,073,741,824 bytes = 2³⁰ bytes ≈ 10⁹ bytes
The Greek word gigas means giant, so 1,024 megabytes are called a gigabyte, which is abbreviated GB.
Similarly, a terabyte (teras means monster) equals 2⁴⁰ bytes (approximately 10¹²) or 1,099,511,627,776 bytes. Terabyte is abbreviated TB.
A kilobyte is approximately a thousand bytes, a megabyte is approximately a million bytes, a gigabyte is approximately a billion bytes, and a terabyte is approximately a trillion bytes.
Ascending into regions that few have traveled, a petabyte equals 2⁵⁰ bytes or 1,125,899,906,842,624 bytes, which is approximately 10¹⁵ or a quadrillion. An exabyte equals 2⁶⁰ bytes or 1,152,921,504,606,846,976 bytes, approximately 10¹⁸ or a quintillion.

Just to provide you with a little grounding here, home computers purchased at the time this book was written (1999) commonly have 32 MB or 64 MB or sometimes 128 MB of random access memory. (And don't get too confused just yet - I haven't mentioned anything about hard drives; I'm talking only about RAM.) That's 33,554,432 bytes or 67,108,864 bytes or 134,217,728 bytes.
People, of course, speak in shorthand. Somebody who has 65,536 bytes of memory will say, "I have 64K (and I'm a visitor from the year 1980)." Somebody who has 33,554,432 bytes will say, "I have 32 megs." That rare person who has 1,073,741,824 bytes of memory will say, "I've got a gig (and I'm not talking music)."
Sometimes people will refer to kilobits or megabits (notice bits rather than bytes), but this is rare. Almost always when people talk about memory, they're talking number of bytes, not bits. (Of course, to convert bytes to bits, multiply by 8.) Usually when kilobits or megabits come up in conversation, it will be in connection with data being transmitted over a wire and will occur in such phrases as "kilobits per second" or "megabits per second." For example, a 56K modem refers to 56 kilobits per second, not kilobytes.
Now that we know how to construct RAM in any array size we want, let’s not get too out of control. For now, let’s simply assume that we have assembled 65,536 bytes of memory:
[Figure: a 64K×8 RAM box with sixteen Address inputs, eight Data In signals, a Write signal, and eight Data Out signals]
Why 64 KB? Why not 32 KB or 128 KB? Because 65,536 is a nice round number. It's 2¹⁶. This RAM array has a 16-bit address. In other words, the address is 2 bytes exactly. In hexadecimal, the address ranges from 0000h through FFFFh.
As I implied earlier, 64 KB was a common amount of memory in personal computers purchased around 1980, although it wasn't constructed from telegraph relays. But could you really build such a thing using relays? I trust you won't consider it. Our design requires nine relays for each bit of memory, so the total 64K × 8 RAM array requires almost 5 million of them!
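(The arithmetic behind that claim: 9 relays per bit × 8 bits per byte × 65,536 bytes comes to 4,718,592 relays.)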
It will be advantageous for us to have a control panel that lets us manage all this memory - to write values into memory or examine them. Such a control panel has 16 switches to indicate an address, 8 switches to define an 8-bit value that we want to write into memory, another switch for the Write signal itself, and 8 lightbulbs to display a particular 8-bit value, as shown below.

[Figure: the 64-KB RAM control panel: sixteen Address switches A₁₅ through A₀, eight Data switches D₇ through D₀, a Write switch, a Takeover switch, and eight lightbulbs]
All the switches are shown in their off (0) positions. I've also included a switch labeled Takeover. The purpose of this switch is to let other circuits use the same memory that the control panel is connected to. When the switch is set to 0 (as shown), the rest of the switches on the control panel don't do anything. When the switch is set to 1, however, the control panel has exclusive control over the memory.
This is a job for a bunch of 2-to-1 Selectors. In fact, we need 25 of them: 16 for the Address signals, 8 for the Data input switches, and another for the Write switch. Here's the circuit:
[Figure: 25 2-to-1 Selectors choosing between the control panel's 25 switches and external Address, Data In, and Write signals; the Takeover switch drives the Select input, and the chosen signals feed the 64K×8 RAM]
When the Takeover switch is open (as shown), the Address, Data input, and Write inputs to the 64K × 8 RAM array come from external signals shown at the top left of the 2-to-1 Selectors. When the Takeover switch is closed,
the Address, Data input, and Write signals to the RAM array come from the switches on the control panel. In either case, the Data Out signals from the RAM array go to the eight lightbulbs and possibly someplace else.
I’ll draw a 64K x 8 RAM array with such a control panel this way: Control Panel
Address
Data In Write
64Kx8 Data Out RAM
When the Takeover switch is closed, you can use the 16 Address switches to select any of 65,536 addresses. The lightbulbs show you the 8-bit value currently stored in memory at that address. You can use the 8 Data switches to define a new value, and you can write that value into memory using the Write switch.
The 64K × 8 RAM array and control panel can certainly help you keep track of any 65,536 8-bit values you may need to have handy. But we have also left open the opportunity for something else - some other circuitry perhaps - to use the values we have stored in memory and to write other ones in as well.
There’s one more thing you have to remember about memory, and it’s very imponant: When I introduced the concept of logic gates in Chapter 1 1, I stopped drawing the individual relays that compose these gates. In panicu­ lar, I no longer indicated that every relay is connected to some kind of sup­ ply of electricity. Whenever a relay is triggered, electricity is flowing through the coils of the electromagnet and holding a metal contact in place.
So if you have a 64K × 8 RAM array filled to the brim with 65,536 of your favorite bytes and you turn off the power to it, what happens? All the electromagnets lose their magnetism and with a loud thunk, all the relay contacts return to their untriggered states. And the contents of this RAM? They all go POOF! Gone forever.
This is why random access memory is also called volatile memory. It requires a constant supply of electricity to retain its contents.

Chapter Seventeen
Automation
The human species is often amazingly inventive and industrious but at the same time profoundly lazy. It's very clear that we humans don't like to work. This aversion to work is so extreme - and our
ingenuity so acute - that we're eager to devote countless hours designing and building devices that might shave a few minutes off our workday. Few fantasies tickle the human pleasure center more than a vision of relaxing in a hammock watching some newfangled contraption we just built mow the lawn.
I’m afraid I won’t be showing plans for an automatic lawn-mowing ma­ chine in these pages . But in this chapter. through a progression of ever more sophisticated machines, I will automate the process of adding and subtracting numbers. This hardly sounds earth-shattering, I know. But the final machine in this chapter will be so versatile that it will be able to solve virtually any problem that makes use of addition and subtraction, and that includes a great many problems indeed.
Of course, with sophistication comes complexity, so some of this might be rough going. No one will blame you if you skim over the excruciating details. At times, you might rebel and promise that you'll never seek electrical or mechanical assistance for a math problem ever again. But stick with me because by the end of this chapter we'll have invented a machine we can legitimately call a computer.
The last adder we looked at was in Chapter 14. That version included an 8-bit latch that accumulated a running total entered on one set of eight switches:

[Figure: the accumulating adder from Chapter 14: a row of switches feeding an 8-Bit Adder, whose output goes to an 8-bit latch and a row of lightbulbs, with Add and Clear switches]
As you’ll recall, an 8-bit latch uses flip-flops to store an 8-bir value. To use this device, you first momentarily press the Clear switch ro set the stored contents of the latch to all zeros. Then you use the switches ro enter your first number. The adder simply adds this number to the zero output of the latch, so the result is the number you entered. Pressing the Add switch stores that number in the larch and turns on some lightbulbs to display it. Now you set up the second number on the switches. The adder adds this one to the number stored in the larch. Pressing the Add button again stores the total in the latch and displays it using the lightbulbs. In this way, you can add a whole string of numbers and display the running total. The limitation, of course, is that the eight lightbulbs can’t display a total greater than 255.
At the time I showed this circuit to you in Chapter 14, the only latches that I had introduced so far were level triggered. In a level-triggered latch, the Clock input has to go to 1 and then back to 0 in order for the latch to store something. During the time the Clock input is 1, the data inputs of the latch can change and these changes will affect the stored output. Later in that chapter, I introduced edge-triggered latches. These latches save their values in the brief moment that the Clock input goes from 0 to 1. Edge-triggered latches are often somewhat easier to use, so I want to assume that all the latches in this chapter are edge triggered.
A latch used to accumulate a running total of numbers is called an accumulator. But we'll see later in this chapter that an accumulator need not simply accumulate. An accumulator is often a latch that holds first one number and then that number plus or minus another number.
The big problem with the adding machine shown above is fairly obvious: Say you have a list of 100 binary numbers you want to add together. You sit down at the adding machine and doggedly enter each and every number and accumulate the sum. But when you're finished, you discover that a couple of the numbers on the list were incorrect. Now you have to do the whole thing over again.

But maybe not. In the preceding chapter, we used almost 5 million relays to build a RAM array containing 64 KB of memory. We also wired a control panel that let us close a switch labeled Takeover and literally take over all the writing and reading of this RAM array using switches.
[Figure: the 64K×8 RAM array with its control panel, showing Address, Data In, Write, and Data Out]
If you had typed all 100 binary numbers into this RAM array rather than directly into the adding machine, making a few corrections would be a lot easier.
So now we face the challenge of connecting the RAM array to the accumu­ lating adder. It’s pretty obvious that the RAM Data Out signals replace the switches to the adder, but it’s perhaps not so obvious that a 16-bit counter (such as we built in Chapter 14) can control the address signals of the RAM array. The Data Input and Write signals to the RAM aren’t needed in this circuit:
[Figure: the automated accumulating adder: a 16-bit counter driven by an oscillator supplies the address to the 64K×8 RAM; the RAM's Data Out feeds the 8-Bit Adder, whose sum is stored in the latch and displayed on the lightbulbs; a Clear switch resets both the counter and the latch]

This is certainly not the easiest piece of calculating equipment ever invented. To use it, you first must close the switch labeled Clear. This clears the contents of the latch and sets the output of the 16-bit counter to 0000h. Then you close the Takeover switch on the RAM control panel. You can then enter a set of 8-bit numbers that you want to add beginning at RAM address 0000h. If you have 100 numbers, you'll store these numbers at addresses 0000h through 0063h. (You should also set all the unused entries in the RAM array to 00h.) You can then open the Takeover switch of the RAM control panel (so that the control panel no longer has control over the RAM array) and open the Clear switch. Then just sit back and watch the flashing lightbulbs.
Here’s how it works: When the Clear switch is first opened, the address of the RAM array is OOOOh. The 8-bit value stored in the RAM array at that address is an input to the adder. The other input to the adder is OOh because the latch is also cleared.
The oscillator provides a clock signal-a signal that alternates between 0 and 1 very quickly. After the Clear switch is opened, whenever the clock changes from a 0 to a 1, two things happen simultaneously: The latch stores the sum from the adder, and the 16-bit counter increments, thus addressing the next value in the RAM array. The first time the clock changes from 0 to 1 after the Clear switch is opened, the latch stores the first value and the counter increments to 0001h. The second time, the latch stores the sum of the first and second values, and the counter increments to 0002h. And so on.
Of course, I’m making some assumptions here. Above all, I’m assuming that the oscillator is slow enough to allow all the rest of the circuitry to work. With each stroke of the clock, a lot of relays must trigger other relays be­ fore a valid sum shows up at the output of the adder.
One problem with this circuit is that we have no way of stopping it! At some point, the lightbulbs will stop flashing because all the rest of the numbers in the RAM array will be 00h. At that time, you can read the binary sum. But when the counter eventually reaches FFFFh, it will roll over (just like a car odometer) to 0000h and this automated adder will begin adding the numbers again to the sum that was already calculated.
This adding machine has other problems as well. All it does is add, and all it adds are 8-bit numbers. Not only is each number in the RAM array limited to 255, but the sum is limited to 255 as well. The adder also has no way to subtract numbers, although it's possible that you're using negative numbers in two's complements, in which case this machine is limited to handling numbers from -128 through 127. One obvious way to make it add larger numbers (for example, 16-bit values) is to double the width of the RAM array, the adder, and the latch, as well as provide eight more lightbulbs. But you might not be willing to make that investment quite yet.
Of course, I wouldn’t even mention these problems unless I knew we were going to solve them eventually. But the problem I want to focus on first is yet an dk
Zero flag
The output of that 8-bit NOR gate is 1 only if all the inputs are 0. Like the Clock input of the Carry latch, the Clock input of the Zero latch latches a value only when an Add, Subtract, Add with Carry, or Subtract with Borrow instruction is being performed. This latched value is known as the Zero flag. Watch out because it could seem as if it's working backward: The Zero flag is 1 if the output of the adder is all zeros, and the Zero flag is 0 if the output of the adder is not all zeros.

With the Carry latch and the Zero latch, we can expand our repertoire of instructions by four:
    Operation                  Code
    Load                       10h
    Store                      11h
    Add                        20h
    Subtract                   21h
    Add with Carry             22h
    Subtract with Borrow       23h
    Jump                       30h
    Jump If Zero               31h
    Jump If Carry              32h
    Jump If Not Zero           33h
    Jump If Not Carry          34h
    Halt                       FFh
For example, the Jump If Not Zero instruction jumps to the specified address only if the output of the Zero latch is 0. In other words, there will be no jump if the last Add, Subtract, Add with Carry, or Subtract with Borrow instruction resulted in 0. Implementing this design is just an add-on to the control signals that implement the regular Jump command: If the instruction is Jump If Not Zero, the Set It signal on the 16-bit counter is triggered only if the Zero flag is 0.
Now all that ‘s nece ssary to make the code shown above mult iply two numbers are the following instructions starting at address 0012h:
    0012h:  Load byte at address 1003h into accumulator
    0015h:  Add byte at address 001Eh to accumulator
    0018h:  Store byte in accumulator at address 1003h
    001Bh:  Jump to 0000h if the Zero flag is not 1
    001Eh:  Halt

The first time through, the 16-bit location at 1004h and 1005h contains A7h times 1, as we've already established. The instructions here load the byte from location 1003h into the accumulator. This is 1Ch. This byte is added to the value at location 001Eh. This happens to be the Halt instruction, but of course it's also a valid number. Adding FFh to 1Ch is the same as subtracting 1 from 1Ch, so the result is 1Bh. This isn't 0, so the Zero flag is 0. The 1Bh byte is stored back at address 1003h. Next is a Jump If Not Zero instruction. The Zero flag isn't set to 1, so the jump occurs. The next instruction is the one located at address 0000h.
Keep in mind that the Store instruction doesn't affect the Zero flag. The Zero flag is affected only by the Add, Subtract, Add with Carry, or Subtract with Borrow instruction, so it will remain the same value that was set the last time one of these instructions occurred.
The second time through, the 16-bit location at 1004h and 1005h will contain the value A7h times 2. The value 1Bh is added to FFh to get the result 1Ah. That's not 0, so back to the top.
On the twenty-eighth time through, the 16-bit location at 1004h and 1005h will contain the value A7h times 1Ch. At location 1003h will be the value 1. This will be added to FFh and the result will be zero. The Zero flag will be set! So the Jump If Not Zero instruction will not jump back to 0000h. Instead, the next instruction is a Halt. We're done.
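The loop is easy to mimic in a few lines of Python - a paraphrase of the program's logic, not of the hardware:

    num1, num2, result = 0x00A7, 0x1C, 0x0000

    while True:
        result = (result + num1) & 0xFFFF   # 16-bit add (low byte, then high
                                            # byte with carry, in the hardware)
        num2 = (num2 + 0xFF) & 0xFF         # adding FFh decrements by 1
        if num2 == 0:                       # the Zero flag is set...
            break                           # ...so JNZ doesn't jump; HLT stops us

    print(hex(result))                      # prints: 0x1244, or A7h times 1Ch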
I now assert that at long last we've assembled a piece of hardware that we can honestly call a computer. To be sure, it's a primitive computer, but it's a computer nonetheless. What makes the difference is the conditional jump. Controlled repetition or looping is what separates computers from calculators. I've just demonstrated how a conditional jump instruction allows this machine to multiply two numbers. In a similar way, it can also divide two numbers. Moreover, it's not limited to 8-bit values. It can add, subtract, multiply, and divide 16-bit, 24-bit, 32-bit, or even larger numbers. And if it can do this, it can calculate square roots, logarithms, and trigonometric functions.
Now that we’ve assembled a computer, we can stan using words that sound like we’re talking about computers.
The particular computer that we've assembled is classified as a digital computer because it works with discrete numbers. At one time, there were also analog computers that are now largely extinct. (Digital data is discrete data - data that has certain specific distinct values. Analog information is continuous and varies throughout an entire range.)
A digital computer has four main parts: a processor, memory, at least one input device, and at least one output device. In our machine, the memory is the 64-KB RAM array. The input and output devices are the rows of switches and lightbulbs on the RAM array control panel. These switches and lightbulbs let us (the human beings in this show) put numbers into memory and examine the results.
The processor is everything else. A processor is also called a central processing unit, or CPU. More casually, the processor is sometimes called

the brain of the computer, but I'd like to avoid using such terminology, mainly because what we designed in this chapter hardly seems anything like a brain to me. (The word microprocessor is very common these days. A microprocessor is just a processor that - through use of technology I'll describe in Chapter 18 - is very small. What we've built out of relays in this chapter could hardly be defined as a micro anything!)
The processor that we’ve built is an 8-bit processor. The accumulator is 8 bits wide and most of the data paths are 8 bits wide. The only 16-bit data path is the address to the RAM array. If we used 8 bits for that, we’d be limited to 256 bytes of memory rather than 65,536 bytes, and that would be quite restrictive.
A processor has several components. I’ve already identified the accumulator, which is simply a latch that holds a number inside the processor. In our computer, the 8-bit inverter and the 8-Bit Adder together can be termed the Arithmetic Logic Unit, or ALU. Our ALU performs only arithmetic, specifically addition and subtraction. In slightly more sophisticated computers (as we’ll see), the ALU can also perform logical functions, such as AND, OR, and XOR. The 16-bit counter is called a Program Counter.
The computer that we’ve built is constructed from relays, wires, switches, and lightbulbs. All of these things are hardware. In contrast, the instructions and other numbers that we enter into memory are called software. It’s “soft” because it can be changed much more easily than the hardware can.
When we speak of computers, the word software is almost synonymous with the term computer program, or, more simply, program. Writing software is known as computer programming. Computer programming is what I was doing when I determined the series of instructions that would allow our computer to multiply two numbers together.
Generally, in computer programs, we can distinguish between code (which refers to the instructions themselves) and data, which are the numbers that the code manipulates. Sometimes the distinction isn't so obvious, as when the Halt instruction served double duty as the number -1.
Computer programming is sometimes also referred to as writing code, or coding, as in, “I spent my vacation coding” or “I was up until seven this morning banging out some code.” Sometimes computer programmers are known as coders, although some might consider this a derogatory term. Such programmers might prefer to be called software engineers.
The operation codes that a processor responds to (such as 10h and 11h for Load and Store) are known as machine codes, or machine language. The term language is used because it's akin to a spoken or written human language in that a machine "understands" it and responds to it.
I’ve been referring to the instructions that our machine carries out by rather long phrases, such as Add with Carry. Commonly, machine codes are assigned short mnemonics that are written with uppercase letters.

These mnemonics can be as short as 2 or 3 letters. Here’s a set of possible mnemonics for the machine codes that our computer recognizes:
    Operation                  Code    Mnemonic
    Load                       10h     LOD
    Store                      11h     STO
    Add                        20h     ADD
    Subtract                   21h     SUB
    Add with Carry             22h     ADC
    Subtract with Borrow       23h     SBB
    Jump                       30h     JMP
    Jump If Zero               31h     JZ
    Jump If Carry              32h     JC
    Jump If Not Zero           33h     JNZ
    Jump If Not Carry          34h     JNC
    Halt                       FFh     HLT
These mnemonics are particularly useful when combined with a couple of other shortcuts. For example, instead of saying something long-winded like, “Load byte at address 1003h into accumulator,” we can instead write the statement:
    LOD A,[1003h]
The A and the [1003h] that appear to the right of the mnemonic are called arguments that indicate what's going on with this particular Load instruction. The arguments are written with a destination on the left (the A stands for accumulator) and a source on the right. The brackets indicate that the accumulator should be loaded not with the value 1003h but with the value stored in memory at address 1003h.
Similarly, the instruction "Add byte at address 001Eh to accumulator" can be shortened to
    ADD A,[001Eh]
and “Store contents of accumulator at address 1003h” is
    STO [1003h],A
Notice that the destination (a memory location for the Store instruction) is still on the left and the source is on the right. The contents of the accumulator must be stored in memory at address 1003h. The wordy "Jump to 0000h if the Zero flag is not 1" is more concisely written as
    JNZ 0000h
The brackets aren’t used in this instruction because the instruction jumps to address OOOOh, not to the value that might be stored at address OOOOh.

It’s convenient to write these instructions in this type of shorthand because the instructions can be listed sequentially in a readable way that doesn’t require us to draw boxes of memory locations. To indicate that a panicular instruction is stored at a particular address, you can use the hexadecimal address followed by a colon, such as
    0000h:  LOD A,[1005h]
And here’s how we can indicate some data stored at a particular address:
    1000h:  00h, A7h
    1002h:  00h, 1Ch
    1004h:  00h, 00h
The 2 bytes separated by commas indicate that the first byte is stored at the address on the left and the second byte is stored at the next address. These three lines are equivalent to
    1000h:  00h, A7h, 00h, 1Ch, 00h, 00h
So the entire multiplication program can be written as a series of statements like this:
    0000h:  LOD A,[1005h]
            ADD A,[1001h]
            STO [1005h],A

            LOD A,[1004h]
            ADC A,[1000h]
            STO [1004h],A

            LOD A,[1003h]
            ADD A,[001Eh]
            STO [1003h],A

            JNZ 0000h
    001Eh:  HLT

    1000h:  00h, A7h
    1002h:  00h, 1Ch
    1004h:  00h, 00h
The judicious use of blank lines and other whitespace is simply to make the whole program more readable for human beings like you and me.
It’s better not to use actual numeric addresses when writing code because they can change. For example, if you decided to store the numbers at memory locations 2000h through 20005h, you’d need to rewrite many of the state­ ments as well. It’s better to use labels to refer to locations in memory. These labels are simply words, or they look almost like w�rds, like this:

    BEGIN:   LOD A,[RESULT + 1]
             ADD A,[NUM1 + 1]
             STO [RESULT + 1],A

             LOD A,[RESULT]
             ADC A,[NUM1]
             STO [RESULT],A

             LOD A,[NUM2 + 1]
             ADD A,[NEG1]
             STO [NUM2 + 1],A

             JNZ BEGIN
    NEG1:    HLT

    NUM1:    00h, A7h
    NUM2:    00h, 1Ch
    RESULT:  00h, 00h
Notice that the labels NUM1, NUM2, and RESULT all refer to memory locations where 2 bytes are stored. In these statements, the labels NUM1 + 1, NUM2 + 1, and RESULT + 1 refer to the second byte after the particular label. Notice the NEG1 (negative one) label on the HLT instruction.
Finally, if there’s a chance that you’U forget what these statements do, you can add linle comments, which are in English and are separated from the actual statements by a semicolon:
    BEGIN:   LOD A,[RESULT + 1]   ; Add low-order byte
             ADD A,[NUM1 + 1]
             STO [RESULT + 1],A

             LOD A,[RESULT]       ; Add high-order byte
             ADC A,[NUM1]
             STO [RESULT],A

             LOD A,[NUM2 + 1]     ; Decrement second number
             ADD A,[NEG1]
             STO [NUM2 + 1],A

             JNZ BEGIN
    NEG1:    HLT

    NUM1:    00h, A7h
    NUM2:    00h, 1Ch
    RESULT:  00h, 00h

I’m showing you here a type of computer programming language known as assembly language. It’s something of a compromise between the naked num­ bers of machine code and the wordiness of our English descriptions of the instructions, coupled with symbolic representations of memory addresses. People are sometimes confused about the difference between machine code and assembly language because they’re really just two different ways of looking at the same thing. Every statement in assembly language corresponds to cenain specific bytes of machine code.
If you were to write a program for the computer that we've built in this chapter, you'd probably want to write it first (on paper) in assembly language. Then, once you were satisfied that it was mostly correct and ready to be tested, you would hand assemble it: This means that you would manually convert each assembly-language statement to machine code, still on paper. At that point, you can use the switches to enter the machine code into the RAM array and run the program, which means to let the machine execute the instructions.
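Hand assembly is so mechanical that it practically begs to be automated. Purely as an illustration - this toy translator is my own sketch, not a tool from the era, and it assumes each instruction is the operation code followed by the 2-byte address, high byte first - here's how the translation might look in Python:

    # A toy hand-assembler for this machine's instruction set.
    OPCODES = {"LOD": 0x10, "STO": 0x11, "ADD": 0x20, "SUB": 0x21,
               "ADC": 0x22, "SBB": 0x23, "JMP": 0x30, "JZ": 0x31,
               "JC": 0x32, "JNZ": 0x33, "JNC": 0x34, "HLT": 0xFF}

    def assemble(mnemonic, address=None):
        """Translate one statement into bytes: the operation code,
        then the 2-byte address if the instruction takes one."""
        code = [OPCODES[mnemonic]]
        if address is not None:
            code += [address >> 8, address & 0xFF]   # high byte, low byte
        return code

    print(assemble("LOD", 0x1003))   # prints: [16, 16, 3], that is, 10h 10h 03h
    print(assemble("HLT"))           # prints: [255], that is, FFh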
When you’re learning the concepts of computer programming, it’s never too early to get acquainted with bugs. When you’re coding-panicularly in machine code-it’s very easy to make mistakes. It’s bad enough to enter a number incorrectly, but what happens when you enter an instruction code incorrectly? If you enter a l l h (the Store instruction) when you really meant to enter a l Oh (the Load instruction), not only will the machine not load in the number it’s supposed to, but that number will be overwritten by what­ ever happens to be in the accumulator.
Some bugs can have unpredictable results. Suppose you use the Jump instruction to jump to a location that doesn't contain a valid instruction code. Or suppose you accidentally use the Store instruction to write over instructions. Anything can happen (and often does).
There’s even a bug in my multiplication program. If you run it twice, the second time through it will multiply A7h by 256 and add that result to the result already calculated. This is because after you run the program once, the number at address 1003h will be 0. When you run it the second time, Ffh will be added to that value. The result won’t be 0, so the program will keep running until it is.
We’ve seen that this machine can do multiplication, and in a similar way it can also do division. I’ve also asserted that this machine can use these primitive functions to do square roots, logarithms, and trigonometric func­ tions. All a machine needs is the hardware to add and subtract and some way to use conditional jump instructions to execute the proper code. As a pro­ grammer might say, “I can do the rest in software.”
Of course, this software might be quite complex. Many whole books have been written that describe the algorithms that programmers use to solve specific problems. We're not yet ready for that. We've been thinking about whole numbers and haven't taken a crack at how to represent decimal fractions in the computer. I'll get to that in Chapter 23.

I’ve mentioned several times that all the hardware to build these devices was available over a hundred years ago. But it’s unlikely that the computer shown in this chapter could have been built at that time. Many of the con­ cepts implicit in its design weren’t apparent when relay computers were first built in the mid-1930s and only started to be understood around 1945 or so. Until that time, for example, people were still trying to build comput­ ers that internally used decimal numbers rather than binary. And computer programs weren’t always stored in memory but instead were sometimes coded on paper tape. In particular, in the early days of computers, memory was expensive and bulky. Building a 64-KB RAM array from five million telegraph relays would have been as absurd one hundred years ago as it is now.
It’s time to put what we’ve done in perspective and to review the history of calculation and computing devices and machines. Perhaps we shall find that we don’t have to build this elaborate relay computer after all. As I mentioned in Chapter 12, relays were eventually replaced with electronic devices such as vacuum tubes and transistors. Perhaps we shall also find that someone else has built something that’s equivalent to the processor and the memory we designed but that can fit in the palm of your hand.

Chapter Eighteen
From Abaci to Chips
Throughout recorded history, people have invented numerous clever gadgets and machines in a universal quest to make mathematical calculations just a little bit easier. While the human species seemingly has an innate numerical ability, we also require frequent assistance. We can often conceive of problems that we can't easily solve ourselves.
The development of number systems can be seen as an early tool to help people keep track of commodities and property. Many cultures, including the ancient Greeks and native Americans, seem to have counted with the assistance of pebbles or kernels of grain. In Europe, this led to counting boards, and in the Middle East to the familiar frame-and-bead abacus:
Although commonly associated with Asian cultures, the abacus seems to have been introduced to China by traders around 1200 CE.
No one has ever really enjoyed multiplication and division, but few people have done anything about it. The Scottish mathematician John Napier (1550-1617) was one of those few. He invented logarithms for the specific purpose of simplifying these operations. The logarithm of the product of two numbers is simply the sum of their logarithms. So if you need to multiply two numbers, you look them up in a table of logarithms, add the numbers from the table, and then use the table in reverse to find the actual product.
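Here's that procedure as a short Python sketch (mine, not Napier's, with a computed dictionary standing in for the printed table):

    import math

    # A tiny stand-in for a printed table of base-10 logarithms.
    table = {n: math.log10(n) for n in range(1, 10000)}

    a, b = 127, 49
    log_sum = table[a] + table[b]        # "add the numbers from the table"
    # "use the table in reverse": find the entry whose log is closest
    product = min(table, key=lambda n: abs(table[n] - log_sum))
    print(product, a * b)                # 6223 6223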
The construction of tables of logarithms occupied some of the greatest minds of the subsequent 400 years while others designed little gadgets to use in place of these tables. The slide rule has a long history beginning with a logarithmic scale made by Edmund Gunter (1581-1626) and refined by William Oughtred (1574-1660). The history of the slide rule effectively ended in 1976, when the Keuffel & Esser Company presented its last manufactured slide rule to the Smithsonian Institution in Washington, D.C. The cause of death was the hand-held calculator.
Napier also invented another multiplication aid, which is composed of strips of numbers usually inscribed on bone, horn, or ivory and hence referred to as Napier's Bones. The earliest mechanical calculator was a somewhat automated version of Napier's Bones built around 1620 by Wilhelm Schickard (1592-1635). Other calculators based on interlocking wheels, gears, and levers are almost as old. Two of the more significant builders of mechanical calculators were the mathematicians and philosophers Blaise Pascal (1623-1662) and Gottfried Wilhelm von Leibniz (1646-1716).
You’ll no doubt recall what a nuisance the carry bit was in both the origi­ nal 8-Bit Adder and the computer that (among other things) automated the addition of numbers wider than 8 bits. The carry seems at first to be just a little quirk of addition, but in adding machines, the carry is really the cen­ tral problem. If you’ve designed an adding machine that does everything except the carry, you’re nowhere close to being finished!
How successfully the carry is dealt with is a key to the evaluation of old calculating machines. For example, Pascal’s design of the carry mechanism prohibited the machine from subtracting. To subtract, the nines’ complement had to be added the way that I demonstrated in Chapter 13. Successful mechanical calculators that real people could use weren’t available until the late nineteenth century.
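As a reminder of the trick Pascal's machine relied on, here's a quick Python sketch of nines' complement subtraction on a four-digit decimal machine (my example numbers; it assumes the result isn't negative):

    # To compute a - b without subtracting: add the nines' complement
    # of b, then bring the overflow back around (the end-around carry).
    def subtract_via_nines(a, b, digits=4):
        nines = 10**digits - 1               # 9999 on a four-digit machine
        total = a + (nines - b)              # complementing needs no borrows
        return total % 10**digits + total // 10**digits

    print(subtract_via_nines(523, 278))      # 245, same as 523 - 278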
One curious invention that was to have a later influence on the history of computing-as well as a profound influence on the textile industry-was an automated loom developed by Joseph Marie Jacquard (1752-1834). The Jacquard loom (circa 1801) used metal cards with holes punched in them (much like those of a player piano) to control the weaving of patterns in fabrics. Jacquard's own tour de force was a self-portrait in black and white silk that required about 10,000 cards.
In the eighteenth century (and indeed up to the 1940s), a computer was a person who calculated numbers for hire. Tables of logarithms were always needed, and trigonometric tables were essential for nautical navigation using the stars and planets. If you wanted to publish a new set of tables, you would hire a bunch of computers, set them to work, and then assemble all the results. Errors could creep in at any stage of this process, of course, from the initial calculation to setting up the type to print the final pages.

The desire to eliminate errors from mathematical tables motivated the work of Charles Babbage (1791-1871), a British mathematician and economist who was almost an exact contemporary of Samuel Morse.
At the time, mathematical tables (of logarithms, for example) were not created by calculating an actual logarithm for each and every entry in the table. This would have taken far too long. Instead, the logarithms were calculated for select numbers, and then numbers in between were calculated by interpolation, using what are called differences in relatively simple calculations.
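The appeal of differences is that once the first value and its differences are known, every subsequent value of a polynomial follows from additions alone, which is exactly what a mechanical adding machine can do. A Python sketch of the idea (my illustration, not Babbage's procedure):

    def tabulate(seed, count):
        # seed = [f(x0), first difference, second difference, ...];
        # produce count values of f using nothing but addition.
        d = list(seed)
        values = []
        for _ in range(count):
            values.append(d[0])
            for i in range(len(d) - 1):
                d[i] += d[i + 1]     # cascade the additions upward
        return values

    # f(x) = x squared: f(0) = 0, first difference 1, second difference
    # a constant 2, so the whole table is built by adding.
    print(tabulate([0, 1, 2], 6))    # [0, 1, 4, 9, 16, 25]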
Beginning about 1820, Babbage believed that he could design and build a machine that would automate the process of constructing a table, even to the point of setting up type for printing. This would eliminate errors. He conceived the Difference Engine, and basically it was a big mechanical adding machine. Multidigit decimal numbers were represented by geared wheels that could be in any of 10 positions. Negatives were handled using the ten's complement. Despite some early models that showed Babbage's design to be sound and some grants from the British government (never enough, of course), the Difference Engine was never completed. Babbage abandoned work on it in 1833.
By that time, however, Babbage had an even better idea. It was called the Analytical Engine, and through repeated design and redesign (with a few small models and parts of it actually built) it consumed Babbage off and on until his death. The Analytical Engine is the closest thing to a computer that the nineteenth century has to offer. In Babbage's design, it had a store (comparable to our concept of memory) and a mill (the arithmetic unit). Multiplication could be handled by repeated addition, and division by repeated subtraction.
What’s most intriguing about the Analytical Engine is that it could be programmed using cards that were adapted from the cards used in the Jac­ quard pattern-weaving loom. As Augusta Ada Byron, Countess of Lovelace (1815-1852), put it (in notes to her translation of an article written by an Italian mathematician about Babbage’s Analytical Engine), “We may say that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.”
Babbage seems to be the first person to understand the importance of a conditional jump in computers. Here's Ada Byron again: "A cycle of operations, then, must be understood to signify any set of operations which is repeated more than once. It is equally a cycle, whether it be repeated twice only, or an indefinite number of times; for it is the fact of a repetition occurring at all that constitutes it such. In many cases of analysis there is a recurring group of one or more cycles; that is, a cycle of a cycle, or a cycle of cycles."

Although a difference engine was eventually built by the father-and-son team Georg and Edvard Scheutz in 1853, Babbage's engines were forgotten for many years, only to be resurrected in the 1930s when people began searching for the roots of twentieth-century computing. By that time, everything Babbage had done had already been surpassed by later technology, and he had little to offer the twentieth-century computer engineer except a precocious vision of automation.
Another milestone in the history of computing resulted from Article I, Section 2, of the Constitution of the United States of America. Among other things, this section calls for a census to be taken every ten years. By the time of the 1880 census, information was accumulated on age, sex, and national origin. The data amassed took about seven years to process.
Fearing that the 1890 census would take longer than a decade to process, the Census Office explored the possibility of automating the system and chose machinery developed by Herman Hollerith (1860-1929), who had worked as a statistician for the 1880 census.

Hollerith's plan involved manila punch cards 6⅝ × 3¼ inches in size. (It's unlikely that Hollerith knew about Charles Babbage's use of cards to program his Analytical Engine, but he was almost certainly familiar with the use of cards in the Jacquard loom.) The holes in these cards were organized into 24 columns of 12 positions each, for a total of 288 positions. These positions represented certain characteristics of a person being tallied in the census. The census taker indicated these characteristics by punching ¼-inch square holes into the appropriate positions on the card.
This book has probably so accustomed you to thinking in terms of binary codes that you might immediately assume that a card with 288 possible punches is capable of storing 288 bits of information. But the cards weren’t used that way.
For example, a census card used in a purely binary system would have one position for sex. It would be either punched for male or unpunched for female (or the other way around). But Hollerith's cards had two positions for sex. One position was punched for male, the other for female. Likewise, the census taker indicated a subject's age by making two punches. The first punch designated a five-year age range: 0 through 4, 5 through 9, 10 through 14, and so forth. The second punch was in one of five positions to indicate the precise age within that range. Coding the age required a total of 28 positions on the card. A pure binary system would require just 7 positions to code any age from 0 through 127.
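The contrast is easy to see in a few lines of Python (my sketch; the split of the 28 positions into 23 range positions plus 5 offset positions is an assumption consistent with the numbers above):

    def hollerith_age(age):
        # Two punches: one of 23 positions picks the five-year range
        # (0-4, 5-9, ...), one of 5 picks the year within that range.
        return {"range_punch": age // 5, "offset_punch": age % 5}

    def binary_age(age):
        # Seven positions, each punched (1) or not (0), cover 0-127.
        return format(age, "07b")

    print(hollerith_age(63))   # {'range_punch': 12, 'offset_punch': 3}
    print(binary_age(63))      # '0111111'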
We should forgive Hollerith for not implementing a binary system for recording census information: Converting an age to binary numbers was a little too much to ask of the 1890 census takers. There's also a practical reason why a system of punched cards can't be entirely binary. A binary system would produce cases in which all the holes (or nearly all) were punched, rendering the card very fragile and structurally unsound.
Census data is collected so that it can be counted, or tabulated. You want to know how many people live in each census district, of course, but it’s also interesting to obtain information about the age distribution of the population. For this, Hollerith created a tabulating machine that combined hand operation and automation. An operator pressed a board containing 288 spring-loaded pins on each card. Pins corresponding to punched holes in the cards came into contact with a pool of mercury that completed an electrical circuit that triggered an electromagnet that incremented a decimal counter.
Hollerith also used electromagnets in a machine that sorted cards. For example, you might want to accumulate separate age statistics for each occupation that you've tallied. You first need to sort the cards by occupation and then accumulate the age statistics separately for each. The sorting machine used the same hand press as the tabulator, but the sorter had electromagnets to open a hatch to one of 26 separate compartments. The operator dropped the card into the compartment and manually closed the hatch.
This experiment in automating the 1890 census was a resounding success. All told, over 62 million cards were processed. They contained twice as much data as was accumulated in the 1880 census, and the data was processed in about one-third the time. Hollerith and his inventions became known around the world. In 1895, he even traveled to Moscow and succeeded in selling his equipment for use in the very first Russian census, which occurred in 1897.

Herman Hollerith also set in motion a long trail of events. In 1896, he founded the Tabulating Machine Company to lease and sell the punch-card equipment. By 1911, with the help of a couple of mergers, it had become the Computing-Tabulating-Recording Company, or C-T-R. By 1915, the president of C-T-R was Thomas J. Watson (1874-1956), who in 1924 changed the name of the company to International Business Machines Corporation, or IBM.
By 1928, the original 1890 census cards had evolved into the famous "do not fold, spindle, or mutilate" IBM cards, with 80 columns and 12 rows. They remained in active use for over 50 years, and even in their later years were sometimes referred to as Hollerith cards. I'll describe the legacy of these cards more in Chapters 20, 21, and 24.
Before we move on to the twentieth century, let’s not leave the nineteenth century with too warped a view about that era. For obvious reasons, in this book I’ve been focusing most closely on inventions that are digital in nature. These include the telegraph, Braille, Babbage’s engines, and the Hollerith card. When working with digital concepts and devices, you might find it easy to think that the whole world must be digital. But the nineteenth century is characterized more by discoveries and inventions that were decidedly not digital. Indeed, very little of the natural world that we experience through our senses is digital. It’s instead mostly a continuum that can’t be so easily quantified.
Although Hollerith used relays in his card tabulators and sorters, people didn't really begin building computers using relays-electromechanical computers, as they were eventually called-until the mid-1930s. The relays used in these machines were generally not telegraph relays, but instead were relays developed for the telephone system to control the routing of calls.
Those early relay computers were not like the relay computer that we built in the last chapter. (As we'll see, I based the design of that computer on microprocessors from the 1970s.) In particular, while it's obvious to us today that computers internally should use binary numbers, that wasn't always the case.
Another difference between our relay computer and the early real ones is that nobody in the 1930s was crazy enough to construct 524,288 bits of memory out of relays! The cost and space and power requirements would have made so much memory impossible. The scant memory available was used only for storing intermediate results. The programs themselves were on a physical medium such as a paper tape with punched holes. Indeed, our process of putting code and data into memory is a more modern concept.
Chronologically, the first relay computer seems to have been constructed by Konrad Zuse (1910-1995), who as an engineering student in 1935 began building a machine in his parents' apartment in Berlin. It used binary numbers but in the early versions used a mechanical memory scheme rather than relays. Zuse punched holes in old 35mm movie film to program his computers.
In 1937, George Stibitz (1904-1995) of Bell Telephone Laboratories took home a couple of telephone relays and wired a 1-bit adder on his kitchen table that his wife later dubbed the K Machine (K for kitchen). This experimentation led to Bell Labs' Complex Number Computer in 1939.
Meanwhile, Harvard graduate student Howard Aiken (1900-1973) needed some way to do lots of repetitive calculations, and that led to a collaboration between Harvard and IBM that resulted in the Automatic Sequence Controlled Calculator (ASCC), eventually known as the Harvard Mark I, completed in 1943. This was the first digital computer that printed tables, thus finally realizing Charles Babbage's dream. The Mark II was the largest relay-based machine, using 13,000 relays. The Harvard Computation Laboratory headed by Aiken taught the first classes in computer science.
Relays weren’t perfect devices for constructing computers. Because they were mechanical and worked by bending pieces of metal, they could break after an extended workout. A relay could also fail because of a piece of dirt or paper stuck between the contacts. In one famous incident in 1 947, a moth was extracted from a relay in the Harvard Mark II computer. Grace Murray Hopper ( 1 906-1 992), who had joined Aiken’s staff in 1 944 and who would later become quite famous in the field of computer programming languages, taped the moth to the computer logbook with the note “first actual case of bug being found.”
A possible replacement for the relay is the vacuum tube, which was developed by John Ambrose Fleming (1849-1945) and Lee de Forest (1873-1961) in connection with radio. By the 1940s, vacuum tubes had long been used to amplify telephones, and virtually every home had a console radio set filled with glowing tubes that amplified radio signals to make them audible. Vacuum tubes can also be wired-much like relays-into AND, OR, NAND, and NOR gates.

It doesn’t matter whether gates are built from relays or vacuum tubes. Gates can always be assembled into adders, selectors, decoders, flip-flops, and counters. Everything I explained about relay-based components in the pre­ ceding chapters remains valid when the relays are replaced by vacuum tubes.
Vacuum tubes had their own problems, though. They were expensive, required a lot of electricity, and generated a lot of heat. The big problem, however, was that they eventually burned out. This was a fact of life that people lived with. Those who owned tube radios were accustomed to replacing tubes periodically. The telephone system was designed with a lot of redundancy, so the loss of a tube now and then was no big deal. (No one expects the telephone system to work flawlessly anyway.) When a tube burns out in a computer, however, it might not be immediately detected. Moreover, a computer uses so many vacuum tubes that statistically they might be burning out every few minutes.
The big advantage of using vacuum tubes over relays is that tubes can switch in about a millionth of a second-one microsecond. A vacuum tube changes state (switches on or off) a thousand times faster than a relay, which at its very best only manages to switch in about 1 millisecond, a thousandth of a second. Interestingly enough, the speed issue wasn't a major consideration in early computer development because overall computing speed was linked to the speed that the machine read the program from the paper or film tape. As long as computers were built in this way, it didn't matter how much faster vacuum tubes were than relays.
But beginning in the early 1940s, vacuum tubes began supplanting relays in new computers. By 1945, the transition was complete. While relay machines were known as electromechanical computers, vacuum tubes were the basis of the first electronic computers.
In Great Britain, the Colossus computer (first operational in 1943) was dedicated to cracking the German "Enigma" code-making machine. Contributing to this project (and to some later British computer projects) was Alan M. Turing (1912-1954), who is most famous these days for writing two influential papers. The first, published in 1937, pioneered the concept of "computability," which is an analysis of what computers can and can't do. He conceived of an abstract model of a computer that's now known as the Turing Machine. The second famous paper Turing wrote was on the subject of artificial intelligence. He introduced a test for machine intelligence that's now known as the Turing Test.
At the Moore School of Electrical Engineering (University of Pennsylvania), J. Presper Eckert (1919-1995) and John Mauchly (1907-1980) designed the ENIAC (Electronic Numerical Integrator and Computer). It used 18,000 vacuum tubes and was completed in late 1945. In sheer tonnage (about 30), the ENIAC was the largest computer that was ever (and probably will ever be) made. By 1977, you could buy a faster computer at Radio Shack. Eckert and Mauchly's attempt to patent the computer was, however, thwarted by a competing claim of John V. Atanasoff (1903-1995), who earlier designed an electronic computer that never worked quite right.

The ENIAC attracted the interest of mathematician John von Neumann (1903-1957). Since 1930, the Hungarian-born von Neumann (whose last name is pronounced noy mahn) had been living in the United States. A flamboyant man who had a reputation for doing complex arithmetic in his head, von Neumann was a mathematics professor at the Princeton Institute for Advanced Study, and he did research in everything from quantum mechanics to the application of game theory to economics.
John von Neumann helped design the successor to the ENIAC, the EDVAC (Electronic Discrete Variable Automatic Computer). Particularly in the 1946 paper "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument," coauthored with Arthur W. Burks and Herman H. Goldstine, he described several features of a computer that made the EDVAC a considerable advance over the ENIAC. The designers of the EDVAC felt that the computer should use binary numbers internally. The ENIAC used decimal numbers. The computer should also have as much memory as possible, and this memory should be used for storing both program code and data as the program was being executed. (Again, this wasn't the case with the ENIAC. Programming the ENIAC was a matter of throwing switches and plugging in cables.) These instructions should be sequential in memory and addressed with a program counter but should also allow conditional jumps. This design came to be known as the stored-program concept.
These design decisions were such an important evolutionary step that today we speak of von Neumann architecture. The computer that we built in the last chapter was a classic von Neumann machine. But with von Neumann architecture comes the von Neumann bottleneck. A von Neumann machine generally spends a significant amount of time just fetching instructions from memory in preparation for executing them. You'll recall that the final design of the Chapter 17 computer required that three-quarters of the time it spent on each instruction be involved in the instruction fetch.
At the time of the EDVAC, it wasn't cost effective to build a lot of memory out of vacuum tubes. Some very odd solutions were proposed instead. One successful one was mercury delay line memory, which used 5-foot tubes of mercury. At one end of the tube, little pulses were sent into the mercury about 1 microsecond apart. These pulses took about a millisecond to reach the other end (where they were detected like sound waves and routed back to the beginning), and hence each tube of mercury could store about 1024 bits of information.

It wasn't until the mid-1950s that magnetic core memory was developed. Such memory consisted of large arrays of little magnetized metal rings strung with wires. Each little ring could store a bit of information. Long after core memory had been replaced by other technologies, it was common to hear older programmers refer to the memory that the processor accessed as core.

John von Neumann wasn’t the only person doing some major conceptual thinking about the nature of computers in the 1 940s.
Claude Shannon (born 1916) was another influential thinker. In Chapter 11, I discussed his 1938 master's thesis, which established the relationship between switches, relays, and Boolean algebra. In 1948, while working for Bell Telephone Laboratories, he published a paper in the Bell System Technical Journal entitled "A Mathematical Theory of Communication" that not only introduced the word bit in print but established a field of study today known as information theory. Information theory is concerned with transmitting digital information in the presence of noise (which usually prevents all the information from getting through) and how to compensate for that. In 1949, he wrote the first article about programming a computer to play chess, and in 1952 he designed a mechanical mouse controlled by relays that could learn its way around a maze. Shannon was also well known at Bell Labs for riding a unicycle and juggling simultaneously.
Norbert Wiener (1894-1964), who earned his Ph.D. in mathematics from Harvard at the age of 18, is most famous for his book Cybernetics, or Control and Communication in the Animal and Machine (1948). He coined the word cybernetics (derived from the Greek for steersman) to identify a theory that related biological processes in humans and animals to the mechanics of computers and robots. In popular culture, the ubiquitous cyber- prefix now denotes anything related to the computer. Most notably, the interconnection of millions of computers through the Internet is known as cyberspace, a word coined by cyberpunk science-fiction novelist William Gibson in his 1984 novel Neuromancer.
In 1948, the Eckert-Mauchly Computer Corporation (later part of Remington Rand) began work on what would become the first commercially available computer-the Universal Automatic Computer, or UNIVAC. It was completed in 1951, and the first one was delivered to the Bureau of the Census. The UNIVAC made its prime-time network debut on CBS, when it was used to predict the results of the 1952 presidential election. Walter Cronkite referred to it as an "electronic brain." Also in 1952, IBM announced the company's first commercial computer system, the 701.
And thus began a long history of corporate and governmental computing. However interesting that history might be, we're going to pursue another historical track-a track that shrank the cost and size of computers and brought them into the home, and which began with an almost unnoticed electronics breakthrough in 1947.
Bell Telephone Laboratories was for many years a place where smart people could work on just about anything that interested them. Some of them, fortunately, were interested in computers. I've already mentioned George Stibitz and Claude Shannon, both of whom made significant contributions to early computing while working at Bell Labs. Later on, in the 1970s, Bell Labs was the birthplace of the influential computer operating system named Unix and a programming language named C, which I'll describe in upcoming chapters.
Bell Labs came about when American Telephone and Telegraph officially separated their scientific and technical research divisions from the rest of their business, creating the subsidiary on January 1, 1925. The primary purpose of Bell Labs was to develop technologies for improving the telephone system. That mandate was fortunately vague enough to encompass all sorts of things, but one obvious perennial goal within the telephone system was the undistorted amplification of voice signals transmitted over wires.
Since 1912, the Bell System had worked with vacuum tube amplification, and a considerable amount of research and engineering went into improving vacuum tubes for use by the telephone system. Despite this work, vacuum tubes still left much to be desired. Tubes were large, consumed a lot of power, and eventually burned out. But they were the only game in town.
All that changed December 16, 1947, when two physicists at Bell Labs named John Bardeen (1908-1991) and Walter Brattain (1902-1987) wired a different type of amplifier. This new amplifier was constructed from a slab of germanium-an element known as a semiconductor-and a strip of gold foil. They demonstrated it to their boss, William Shockley (1910-1989), a week later. It was the first transistor, a device that some people have called the most important invention of the twentieth century.
The transistor didn’t come out of the blue. Eight years earlier, on Decem­ ber 29, 1 939, Shockley had written in his notebook, ” It has today occurred to me that an amplifier using semiconductors rather than vacuum is in prin­ ciple possible.” And after that first transistor was demonstrated, many years followed in perfecting it. It wasn’t until 1 956 that Shockley, Bardeen, and Brattain were awarded the Nobel Prize in physics “for their researches on semiconductors and their discovery of the transistor effect.”
Earlier in this book, I talked about conductors and insulators. Conductors are so called because they're very conducive to the passage of electricity. Copper, silver, and gold are the best conductors, and it's no coincidence that all three are found in the same column of the periodic table of the elements.
As you’ll recall, the electrons in an atom are distributed in shells that surround the nucleus of the atom. What characterizes these three conduc­ tors is a lone electron in the outermost shell. This electron can be easily dis­ lodged from the rest of the atom and hence is free to move as electrical current. The opposites of conductors are insulators-like rubber and plastic­ that barely conduct electricity at all.
The elements germanium and silicon (as well as some compounds) are called semiconductors, not because they conduct half as well as conductors, but because their conductance can be manipulated in various ways. Semiconductors have four electrons in the outermost shell, which is half the maximum number the outer shell can have. In a pure semiconductor, the atoms form very stable bonds with each other and have a crystalline structure similar to the diamond. Such semiconductors aren't good conductors.
But semiconductors can be doped, which means that they're combined with certain impurities. One type of impurity adds extra electrons to those needed for the bond between the atoms. These are called N-type semiconductors (N for negative). Another type of impurity results in a P-type semiconductor.
Semiconductors can be made into amplifiers by sandwiching a P-type semiconductor between two N-type semiconductors. This is known as an NPN transistor, and the three pieces are known as the collector, the base, and the emitter.
Here’s a schematic diagram of an NPN transistor: Collector
&��Emitter
A small voltage on the base can control a much larger voltage passing from the collector to the emitter. If there’s no voltage on the base, it effectively turns off the transistor.
Transistors are usually packaged in little metal cans about a quarter-inch in diameter with three wires poking out:
The transistor inaugurated solid-state electronics, which means that transistors don't require vacuums and are built from solids, specifically semiconductors and most commonly (these days) silicon. Besides being much smaller than vacuum tubes, transistors require much less power, generate much less heat, and last longer. Carrying around a tube radio in your pocket was inconceivable. But a transistor radio could be powered by a small battery, and unlike tubes, it wouldn't get hot. Carrying a transistor radio in your pocket became possible for some lucky people opening presents on Christmas morning in 1954. Those first pocket radios used transistors made by Texas Instruments, an important company of the semiconductor revolution.
The first commercial application of the transistor was, however, a hearing aid. In commemorating the heritage of Alexander Graham Bell in his lifelong work with deaf people, AT&T allowed hearing aid manufacturers to use transistor technology without paying any royalties. The first transistor television debuted in 1960, and today tube appliances have almost disappeared. (Not entirely, however. Some audiophiles and electric guitarists continue to prefer the sound of tube amplifiers to their transistor counterparts.)
In 1956, Shockley left Bell Labs to form Shockley Semiconductor Laboratories. He moved to Palo Alto, California, where he had grown up. His was the first such company to locate in that area. In time, other semiconductor and computer companies set up business there, and the area south of San Francisco is now informally known as Silicon Valley.
Vacuum tubes were originally developed for amplification, but they could also be used for switches in logic gates. The same goes for the transistor. On the next page, you'll see a transistor-based AND gate structured much like the relay version. Only when both the A input is 1 and the B input is 1 will both transistors conduct current and hence make the output 1. The resistor prevents a short circuit when this happens.

Wiring two transistors as you see below in the diagram on the right creates an OR gate. In the AND gate, the emitter of the top transistor is connected to the collector of the bottom transistor. In the OR gate, the collectors of both transistors are connected to the voltage supply. The emitters are connected together.
[Figure: two transistor gate circuits. In the AND gate, inputs A In and B In drive two transistors wired in series between the supply voltage V and the output Out. In the OR gate, inputs A In and B In drive two transistors wired in parallel, their collectors tied to V and their emitters joined at Out.]
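If it helps to see the switching behavior spelled out, here's a little Python model (mine, and highly idealized: each transistor is treated as a switch that conducts from collector to emitter only when its base sees a voltage):

    def transistor(base, collector):
        # Conducts collector -> emitter only when the base is 1.
        return collector if base else 0

    def and_gate(a, b):
        # Series wiring: the voltage must pass through both transistors.
        return transistor(b, transistor(a, 1))

    def or_gate(a, b):
        # Parallel wiring: either transistor can pass the voltage.
        return transistor(a, 1) | transistor(b, 1)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", and_gate(a, b), or_gate(a, b))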
So everything we learned about constructing logic gates and other components from relays is valid for transistors. Relays, tubes, and transistors were all initially developed primarily for purposes of amplification but can be connected in similar ways to make logic gates out of which computers can be built. The first transistor computers were built in 1956, and within a few years tubes had been abandoned for the design of new computers.
Here’s a question: Transistors certainly make computers more reliable, smaller., and less power hungry. But do transistors make computers any sim­ pler to construct?
Not really. The transistor lets you fit more logic gates in a smaller space, of course, but you still have to worry about all the interconnections of these components. It's just as difficult wiring transistors to make logic gates as it is wiring relays and vacuum tubes. In some ways, it's even more difficult because the transistors are smaller and less easy to hold. If you wanted to build the Chapter 17 computer and the 64-KB RAM array out of transistors, a good part of the design work would be devoted to inventing some kind of structure in which to hold all the components. Most of your physical labor would be the tedious wiring of millions of interconnections among millions of transistors.

As we’ve discovered, however, there are cenain combinations of transis­ tors that show up repeatedly. Pairs of transistors are almost always wired as gates. Gates are often wired into flip-flops or adders or selectors or decoders. Flip-flops are combined into mulribit latches or RAM arrays. As­ sembling a computer would be much easier if the transistors were prewired in common configurations.
This idea seems to have been proposed first by British physicist Geoffrey Dummer (born 1909) in a speech in May 1952. "I would like to take a peep into the future," he said.
With the advent of the transistor and the work in semiconductors generally, it seems now possible to envisage electronic equipment in a solid block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying and amplifying materials, the electrical functions being connected directly by cutting out areas of the various layers.
A working product, however, would have to wait a few years.
Without knowing about the Dummer prediction, in July 1958 it occurred to Jack Kilby (born 1923) of Texas Instruments that multiple transistors as well as resistors and other electrical components could be made from a single piece of silicon. Six months later, in January 1959, basically the same idea occurred to Robert Noyce (1927-1990). Noyce had originally worked for Shockley Semiconductor Laboratories, but in 1957 he and seven other scientists had left and started Fairchild Semiconductor Corporation.

In the history of technology, simultaneous invention is more common than one might suspect. Although Kilby had invented the device six months before Noyce, and Texas Instruments had applied for a patent before Fairchild, Noyce was issued a patent first. Legal battles ensued, and only after a decade were they finally settled to everyone's satisfaction. Although they never worked together, Kilby and Noyce are today regarded as the coinventors of the integrated circuit, or IC, commonly called the chip.
Integrated circuits are manufactured through a complex process that involves layering thin wafers of silicon that are precisely doped and etched in different areas to form microscopic components. Although it's expensive to develop a new integrated circuit, they benefit from mass production-the more you make, the cheaper they become.
The actual silicon chip is thin and delicate, so it must be securely packaged, both to protect the chip and to provide some way for the components in the chip to be connected to other chips. Integrated circuits are packaged in a couple of different ways, but the most common is the rectangular plastic dual inline package (or DIP), with 14, 16, or as many as 40 pins protruding from the side:

This is a 16-pin chip. If you hold the chip so the little indentation is at the left (as shown), the pins are numbered 1 through 16 beginning at the lower left and circling around the right side to end with pin 16 at the upper left. The pins on each side are exactly 1/10 inch apart.
Throughout the 1960s, the space program and the arms race fueled the early integrated circuits market. On the civilian side, the first commercial product that contained an integrated circuit was a hearing aid sold by Zenith in 1964. In 1971, Texas Instruments began selling the first pocket calculator, and Pulsar the first digital watch. (Obviously the IC in a digital watch is packaged much differently from the example just shown.) Many other products that incorporated integrated circuits in their design followed.
In 1965, Gordon E. Moore (then at Fairchild and later a cofounder of Intel Corporation) noticed that technology was improving in such a way that the number of transistors that could fit on a single chip had doubled every year since 1959. He predicted that this trend would continue. The actual trend was a little slower, so Moore's Law (as it was eventually called) was modified to predict a doubling of transistors on a chip every 18 months. This is still an astonishingly fast rate of progress and reveals why home computers always seem to become outdated in just a few short years. Some people believe that Moore's Law will continue to be accurate until about 2015.
In the early days, people used to speak of small-scale integration, or SSI, to refer to a chip that had fewer than 10 logic gates; medium-scale integration, or MSI (10 to 100 gates); and large-scale integration, or LSI (100 to 5000). Then the terms ascended to very-large-scale integration, or VLSI (5000 to 50,000); super-large-scale integration, or SLSI (50,000 to 100,000); and ultra-large-scale integration, or ULSI (more than 100,000 gates).
For the remainder of this chapter and the next, I want to pause our time machine in the mid-1970s, an ancient age before the first Star Wars movie was released and with VLSI just on the horizon. At that time, several different technologies were used to fabricate the components that make up integrated circuits. Each of these technologies is sometimes called a family of ICs. By the mid-1970s, two families were prevalent: TTL (pronounced tee tee ell) and CMOS (see moss).
TTL stands for transistor-transistor logic. If in the mid-1970s you were a digital design engineer (which meant that you designed larger circuits from ICs), a 1¼-inch-thick book first published in 1973 by Texas Instruments called The TTL Data Book for Design Engineers would be a permanent fixture on your desk. This is a complete reference to the 7400 (seventy-four hundred) series of TTL integrated circuits sold by Texas Instruments and several other companies, so called because each IC in this family is identified by a number beginning with the digits 74.
Every integrated circuit in the 7400 series consists of logic gates that are prewired in a particular configuration. Some chips provide simple prewired gates that you can use to create larger components; other chips provide common components such as flip-flops, adders, selectors, and decoders.

The first IC in the 7400 series is number 7400 itself, which is described in the TTL Data Book as "Quadruple 2-Input Positive-NAND Gates." What this means is that this particular integrated circuit contains four 2-input NAND gates. They're called positive NAND gates because a voltage corresponds to 1 and no voltage corresponds to 0. This is a 14-pin chip, and a little diagram in the data book shows how the pins correspond to the inputs and outputs:
[Pin diagram of the 7400: pins 1 through 7 along the bottom are 1A, 1B, 1Y, 2A, 2B, 2Y, and GND; pins 8 through 14 along the top are 3Y, 3A, 3B, 4Y, 4A, 4B, and Vcc.]
This diagram is a top view of the chip (pins on the bottom) with the little indentation (shown on page 250) at the left.
Pin 14 is labeled Vcc and is equivalent to the V symbol that I've been using to indicate a voltage. (By convention, any double-letter subscript on a capital V indicates a power supply. The C in this subscript refers to the collector input of a transistor, which is internally where the voltage supply is connected.) Pin 7 is labeled GND for ground. Every integrated circuit that you use in a particular circuit must be connected to a power supply and a common ground.
For 7400 series TTL, Vcc must be between 4.75 and 5.25 volts. Another way of saying this is that the power supply voltage must be 5 volts plus or minus 5 percent. If the power supply is below 4.75 volts, the chip might not work. If it's higher than 5.25, the chip could be damaged. You generally can't use batteries with TTL; even if you were to find a 5-volt battery, the voltage wouldn't be exact enough to be adequate for these chips. TTL usually requires a power supply that you plug into the wall.
Each of the four NAND gates in the 7400 chip has two inputs and one output. They work independently of each other. In past chapters, we've been differentiating between inputs being either 1 (which is a voltage) or 0 (which is no voltage). In reality, an input to one of these NAND gates can range anywhere from 0 volts (ground) to 5 volts (Vcc). In TTL, anything between 0 volts and 0.8 volt is considered to be a logical 0, and anything between 2 volts and 5 volts is considered to be a logical 1. Inputs between 0.8 volt and 2 volts should be avoided.
The output of a TTL gate is typically about 0.2 volt for a logical 0 and 3.4 volts for a logical 1. Because these voltages can vary somewhat, inputs and outputs to integrated circuits are sometimes referred to as low and high rather than 0 and 1. Moreover, sometimes a low voltage can mean a logical 1 and a high voltage can mean a logical 0. This configuration is referred to as negative logic. When the 7400 chip is referred to as "Quadruple 2-Input Positive-NAND Gates," the word positive means positive logic is assumed.
If the output of a TTL gate is typically 0.2 volt for a logical 0 and 3.4 volts for a logical 1, these outputs are safely within the input ranges, which are between 0 and 0.8 volt for a logical 0 and between 2 and 5 volts for a logical 1. This is how TTL is insulated against noise. A 1 output can lose about 1.4 volts and still be high enough to qualify as a 1 input. A 0 output can gain 0.6 volt and still be low enough to qualify as a 0 input.
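Those thresholds and margins are easy to check in a few lines of Python (my sketch of the figures just quoted):

    def ttl_input(volts):
        # TTL input thresholds on a 5-volt supply.
        if volts <= 0.8:
            return 0
        if volts >= 2.0:
            return 1
        return None   # the forbidden zone between 0.8 volt and 2 volts

    # Typical outputs still read correctly after picking up noise:
    print(ttl_input(3.4 - 1.4))   # a 1 output minus 1.4 volts -> still 1
    print(ttl_input(0.2 + 0.6))   # a 0 output plus 0.6 volt -> still 0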
Probably the most important fact to know about a particular integrated circuit is the propagation time. That’s the time it takes for a change in the inputs to be reflected in the output.
Propagation times for chips are generally measured in nanoseconds, abbreviated nsec. A nanosecond is a very short period of time. One thousandth of a second is a millisecond. One millionth of a second is a microsecond. One billionth of a second is a nanosecond. The propagation time for the NAND gates in the 7400 chip is guaranteed to be less than 22 nanoseconds. That's 0.000000022 seconds, or 22 billionths of a second.
If you can’t get the feel of a nanosecond, you’re not alone. Nobody on this planet has anything but an intellectual appreciation of the nanosecond. Nanoseconds are much shorter than anything in human experience, so they’ll forever remain incomprehensible. Every explanation makes the nanosecond more elusive. For example, I can say that if you’re holding this book 1 foot away from your face, a nanosecond is the time it takes the light to travel from the page to your eyes. But do you really have a better feel for the nano­ second now?
Yet the nanosecond is what makes computers possible. As we saw in Chapter 17, a computer processor does moronically simple things-it moves a byte from memory to register, adds a byte to another byte, moves the result back to memory. The only reason anything substantial gets completed (not in the Chapter 17 computer but in real ones) is that these operations occur very quickly. To quote Robert Noyce, "After you become reconciled to the nanosecond, computer operations are conceptually fairly simple."
let’s continue perusing the ITL Data Book for Design Engineers. You will see a lot of familiar little items in this book. The 7402 chip contains four 2-input NOR gates, the 7404 has six inverters, the 7408 has four 2-input
253

254
Chapter Eighteen
AND gates, the 7432 has four 2-input OR gates, and the 7430 has an 8-input NAND gate:
The abbreviation NC means no connection.
The 7474 chip is another that will sound very familiar. It's a "Dual D-Type Positive-Edge-Triggered Flip-Flop with Preset and Clear" and is diagrammed like this:
[Pin diagram of the 7474: pins 1 through 7 along the bottom are 1Clr, 1D, 1Clk, 1Pre, 1Q, 1Q̅, and GND; pins 8 through 14 along the top are 2Q̅, 2Q, 2Pre, 2Clk, 2D, 2Clr, and Vcc.]
The TTL Data Book even includes a logic diagram for each flip-flop in this chip:

You’ll recognize this as being similar to the diagram at the end of Chapter 14, except that I used NOR gates. The logic table in the TTL Data Book is a little different as well:
Inputs                    Outputs
Pre   Clr   Clk   D       Q     Q̅
L     H     X     X       H     L
H     L     X     X       L     H
L     L     X     X       H*    H*
H     H     ↑     H       H     L
H     H     ↑     L       L     H
H     H     L     X       Q0    Q̅0
In this table, the H stands for High and the L stands for Low; you can think of these as 1 and 0 if you wish. An X means the input doesn't matter, the ↑ indicates the rising transition of the Clock input, Q0 means the output retains its previous value, and the asterisk marks an unstable combination. In my flip-flop, the Preset and Clear inputs were normally 0; here they're normally 1.
Moving right along in the TTL Data Book, you'll discover that the 7483 chip is a 4-Bit Binary Full Adder, the 74151 is an 8-Line-to-1-Line Data Selector, the 74154 is a 4-Line-to-16-Line Decoder, the 74161 is a Synchronous 4-Bit Binary Counter, and the 74175 is a Quadruple D-Type Flip-Flop with Clear. You can use two of these chips for making an 8-bit latch.

So now you know how I came up with all the various components I've been using since Chapter 11. I stole them from the TTL Data Book for Design Engineers.
As a digital design engineer, you would spend long hours going through the TTL Data Book familiarizing yourself with the types of TTL chips that were available. Once you knew all your tools, you could actually build the computer I showed in Chapter 17 out of TTL chips. Wiring the chips together is a lot easier than wiring individual transistors together. But you might want to consider not using TTL to make the 64-KB RAM array. In the 1973 TTL Data Book, the heftiest RAM chip listed is a mere 256 × 1 bits. You'd need 2048 of these chips to make 64 KB! TTL was never the best technology for memory. I'll have more to say about memory in Chapter 21.
You’d probably want to use a better oscillator as well. While you can certainly connect the output of a TIL inverter to the input, it’s better to have an oscillator with a more predictable frequency. Such an oscillator can be constructed fairly easily using a quam crystal that comes in a little flat can with two wires sticking out. These crystals vibrate at very specific frequen­ cies, usually at least a million cycles per second. A million cycles per second is called a megahertz and abbreviated MHz. If the Chapter 1 7 computer were constructed out of TIL, it would probably run fine with a clock frequency of 10 MHz. Each instruction would execute in 400 nanoseconds. This, of course, is much faster than anything we conceived when we were working with relays.
The other popular chip family was (and still is) CMOS, which stands for complementary metal-oxide semiconductor. If you were a hobbyist designing circuits from CMOS ICs in the mid-1970s, you might use as a reference source a book published by National Semiconductor and available at your local Radio Shack entitled CMOS Databook. This book contains information about the 4000 (four thousand) series of CMOS ICs.
The power supply requirement for TTL is 4.75 to 5.25 volts. For CMOS, it's anything from 3 volts to 18 volts. That's quite a leeway! Moreover, CMOS requires much less power than TTL, which makes it feasible to run small CMOS circuits from batteries. The drawback of CMOS is lack of speed. For example, the CMOS 4008 4-bit full adder running at 5 volts is only guaranteed to have a propagation time of 750 nanoseconds. It gets faster as the power supply gets higher-250 nsec at 10 volts and 190 nsec at 15 volts. But the CMOS device doesn't come close to the TTL 4-bit adder, which has a propagation time of 24 nsec. (Twenty-five years ago, the trade-off between the speed of TTL and the low power requirements of CMOS was fairly clear cut. Today there are low-power versions of TTL and high-speed versions of CMOS.)
On the practical side, you would probably begin wiring chips together on a plastic breadboard:

Each short row of 5 holes is electrically connected underneath the plastic base. You insert chips into the breadboard so that a chip straddles the long central groove and the pins go into the holes on either side of the groove. Each pin of the IC is then electrically connected to 4 other holes. You connect the chips with pieces of wires pushed into the other holes.
You can wire chips together more permanently using a technique called wire-wrapping. Each chip is inserted into a socket that has long square posts:
Each post corresponds to a pin of the chip. The sockets themselves are inserted into thin perforated boards. From the other side of the board, you use a special wire-wrap gun to tightly wrap thin pieces of insulated wire around the post. The square edges of the post break through the insulation and make an electrical connection with the wire.
If you were actually manufacturing a particular circuit using ICs, you'd probably use a printed circuit board. Back in the old days, this was something a hobbyist could do. Such a board has holes and is covered by a thin layer of copper foil. Basically, you cover all the areas of copper you want to preserve with an acid resist and use acid to etch away the rest. You can then solder IC sockets (or the ICs themselves) directly to the copper on the board. But because of the very many interconnections among ICs, a single area of copper foil is usually inadequate. Commercially manufactured printed circuit boards have multiple layers of interconnections.
By the early 1970s, it became possible to use ICs to create an entire computer processor on a single circuit board. It was really only a matter of time before somebody put the whole processor on a single chip. Although Texas Instruments filed a patent for a single-chip computer in 1971, the honor of actually making one belongs to Intel, a company started in 1968 by former Fairchild employees Robert Noyce and Gordon Moore. Intel's first major product was, in 1970, a memory chip that stored 1024 bits, which was the greatest number of bits on a chip at that time.
Intel was in the process of designing chips for a programmable calculator to be manufactured by the Japanese company Busicom when they decided to take a different approach. As Intel engineer Ted Hoff put it, "Instead of making their device act like a calculator with some programming abilities, I wanted to make it function as a general-purpose computer programmed to be a calculator." This led to the Intel 4004 (pronounced forty oh four), the first "computer on a chip," or microprocessor. The 4004 became available in November 1971 and contained 2300 transistors. (By Moore's Law, microprocessors made 18 years later should contain about 4000 times as many transistors, or about 10 million. That's a fairly accurate prediction.)
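The parenthetical is just the 18-month doubling rule applied over 18 years; here's the arithmetic in Python:

    transistors_1971 = 2300
    doublings = 18 * 12 / 18                  # 18 years, one doubling per 18 months
    print(transistors_1971 * 2 ** doublings)  # 9,420,800: roughly 4000 times 2300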
Having told you the number of its transistors, I'll now describe three other important characteristics of the 4004. These three measures are often used as standards for comparison among microprocessors since the 4004.
First, the 4004 was a 4-bit microprocessor. This means that the data paths in the processor were only 4 bits wide. When adding or subtracting numbers, it handled only 4 bits at a shot. In contrast, the computer developed in Chapter 17 has 8-bit data paths and is thus an 8-bit processor. As we'll soon see, 4-bit microprocessors were surpassed very quickly by 8-bit microprocessors. No one stopped there. In the late 1970s, 16-bit microprocessors became available. When you think back to Chapter 17 and recall the several instruction codes necessary to add two 16-bit numbers on an 8-bit processor, you'll appreciate the advantage that a 16-bit processor gives you. In the mid-1980s, 32-bit microprocessors were introduced and have remained the standard for home computers since then.
Second, the 4004 had a maximum clock speed of 108,000 cycles per second, or 108 kilohertz (kHz). Clock speed is the maximum speed of an oscillator that you can connect to the microprocessor to make it go. Any faster and it might not work right. By 1999, microprocessors intended for home computers had hit the 500-megahertz point-about 5000 times faster than the 4004.
Third, the addressable memory of the 4004 was 640 bytes. This seems like an absurdly low amount; yet it was in line with the capacity of memory chips available at the time. As you'll see in the next chapter, within a couple of years microprocessors could address 64 KB of memory, which is the capability of the Chapter 17 machine. Intel microprocessors in 1999 can address 64 terabytes of memory, although that's overkill considering that most people have fewer than 256 megabytes of RAM in their home computers.
These three numbers don’t affect the capability of a computer. A 4-bit processor can add 32-bit numbers, for example, simply by doing it in 4-bit chunks. In one sense, all digital computers are the same. If the hardware of one processor can do something another can’t, the other processor can do it in software; they all end up doing the same thing. This is one of the im­ plications of Alan Turing’s 1937 paper on computability.

Where processors ultimately do differ, however, is in speed. And speed is a big reason why we're using computers to begin with.
The maximum clock speed is an obvious influence on the overall speed of a processor. That clock speed determines how fast each instruction is being executed. The processor data width affects speed as well. Although a 4-bit processor can add 32-bit numbers, it can't do it nearly as fast as a 32-bit processor. What might be confusing, however, is the effect on speed of the maximum amount of memory that a processor can address. At first, addressable memory seems to have nothing to do with speed and instead reflects a limitation on the processor's ability to perform certain functions that might require a lot of memory. But a processor can always get around the memory limitation by using some memory addresses to control some other medium for saving and retrieving information. (For example, suppose every byte written to a particular memory address is actually punched on a paper tape, and every byte read from that address is read from the tape.) What happens, however, is that this process slows down the whole computer. The issue again is speed.
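That parenthetical trick is essentially memory-mapped I/O. A Python sketch of the idea (the special address and the list standing in for the tape are both invented for illustration):

    TAPE_ADDR = 0xFFFF                    # one address diverted to the "tape"

    class Memory:
        def __init__(self):
            self.ram = [0] * 0x10000
            self.tape = []                # stands in for punched paper tape
            self.read_pos = 0

        def write(self, addr, byte):
            if addr == TAPE_ADDR:
                self.tape.append(byte)    # "punched on a paper tape"
            else:
                self.ram[addr] = byte

        def read(self, addr):
            if addr != TAPE_ADDR:
                return self.ram[addr]
            byte = self.tape[self.read_pos]
            self.read_pos += 1
            return byte                   # far slower than RAM in real life

    m = Memory()
    m.write(TAPE_ADDR, 0xA7)
    print(hex(m.read(TAPE_ADDR)))         # 0xa7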
Of course, these three numbers indicate only roughly how fast the microprocessor operates. These numbers tell you nothing about the internal architecture of the microprocessor or about the efficiency and capability of the machine-code instructions. As processors have become more sophisticated, many common tasks previously done in software have been built into the processor. We'll see examples of this trend in the chapters ahead.
Even though all digital computers have the same capabilities, even though they can do nothing beyond the primitive computing machine devised by Alan Turing, the speed of a processor of course ultimately affects the overall usefulness of a computer system. Any computer that's slower than the human brain in performing a set of calculations is useless, for example. And we can hardly expect to watch a movie on our modern computer screens if the processor needs a minute to draw a single frame.
But back to the mid-1970s. Despite the limitations of the 4004, it was a start. By April 1972, Intel had released the 8008-an 8-bit microprocessor running at 200 kHz that could address 16 KB of memory. (See how easy it is to sum up a processor with just three numbers?) And then, in a five-month period in 1974, both Intel and Motorola came out with microprocessors that were intended to improve on the 8008. These two chips changed the world.

Chapter Nineteen
Two Classic Microprocessors
The microprocessor-a consolidation of all the components of a central processing unit (CPU) of a computer on a single chip of silicon-was born in 1971. It was a modest beginning: The first microprocessor, the Intel 4004, contained about 2300 transistors. Today, nearly three decades later, microprocessors made for home computers are approaching the 10,000,000 transistor mark.
Yet what the microprocessor actually does on a fundamental level has remained unchanged. While those millions of additional transistors in today's chips might be doing interesting things, in an initial exploration of the microprocessor they offer more distraction than enlightenment. To obtain the clearest view of what a microprocessor does, let's look at the first ready-for-prime-time microprocessors.
These microprocessors appeared in 1974, the year in which Intel introduced the 8080 (pronounced eighty eighty) in April and Motorola-a company that had been making semiconductors and transistor-based products since the 1950s-introduced the 6800 (sixty-eight hundred) in August. These weren't the only microprocessors available that year. Also in 1974, Texas Instruments introduced the 4-bit TMS 1000, which was used in many calculators, toys, and appliances; and National Semiconductor introduced the PACE, which was the first 16-bit microprocessor. In retrospect, however, the 8080 and the 6800 were certainly the two most historically significant chips.

Intel set the initial price of the 8080 at $360, a sly dig at IBM's System/360, a large mainframe system used by many large corporations that cost millions. (Today you can buy an 8080 chip for $1.95.) It's not as if the 8080 is comparable to System/360 in any way, but within a few years IBM itself would certainly be taking notice of these very small computers.
The 8080 is an 8-bit microprocessor that contains about 6000 transistors, runs at a 2 MHz clock speed, and addresses 64 kilobytes of memory. The 6800 (also selling these days for $1.95) has about 4000 transistors and also addresses 64 KB of memory. The first 6800 ran at 1 MHz, but by 1977 Motorola introduced later versions running at 1.5 and 2 MHz.
These chips are referred to as single-chip microprocessors and less accurately as computers on a chip. The processor is only one part of the whole computer. In addition to the processor, a computer at the very least requires some random access memory (RAM), some way for a person to get information into the computer (an input device), some way for a person to get information out of the computer (an output device), and several other chips that bind everything together. But I'll describe these other components in greater detail in Chapter 21.
For now, let’s look at the microprocessor itself. Often a description of a microprocessor is accompanied by a block diagram that illustrates the in­ ternal components of the microprocessor and how they’re connected. But we had enough of that in Chapter 1 7. Instead, we’ll get a sense of what’s inside the processor by seeing how it interacts with the outside world. In other words, we can think of the microprocessor as a black box whose in­ ternal operations we don’t need to study minutely in order to understand what it does. We can instead grasp what the microprocessor does by exam­ ining the chip’s input and output signals, and in particular the chip’s instruc­ tion set.
Both the 8080 and 6800 are 40-pin integrated circuits. The most common IC package for these chips is about 2 inches long, about a half inch wide, and about ⅛ inch thick:
Of course, what you see is just the packaging. The actual wafer of silicon inside is much smaller-in the case of the early 8-bit microprocessors, the silicon is less than ¼ inch square. The packaging protects the silicon chip and also provides access to all of the chip's input and output points through the pins. The following diagram shows the function of the 40 pins of the 8080.

[The 8080 pin diagram: address outputs A0 through A15; data pins D0 through D7; power connections +5V, -5V, +12V, and GND; clock inputs φ1 and φ2; and control signals including RESET, HOLD, HLDA, INT, INTE, DBIN, WR, SYNC, WAIT, and READY.]
Every electrical or electronic device that we've built in this book has required some kind of electrical power supply. One of the 8080's quirks is that it requires three power supply voltages. Pin 20 must be connected to a 5-volt power supply, pin 11 to a -5-volt power supply, and pin 28 to a 12-volt power supply. You connect pin 2 to ground. (In 1976, Intel released the 8085 chip, which simplified these power requirements.)
All the remaining pins are drawn as arrows. An arrow from the chip indicates an output signal. This is a signal controlled by the microproces­ sor that other chips in the computer respond to. An arrow into the chip indicates an input signal. This is a signal that comes from another chip in the computer that the 8080 responds to. Some pins are both inputs and outputs.
The processor in Chapter 17 required an oscillator to make it go. The 8080 requires two different synchronized 2-MHz clock inputs labeled φ1 and φ2 on pins 22 and 15. These signals are most conveniently supplied by another chip made by Intel known as the 8224 Clock Signal Generator. You connect an 18-MHz quartz crystal to this chip, and it basically does the rest.
A microprocessor always has multiple output signals that address memory. The number of signals it has for this purpose is directly related to the amount of memory the microprocessor can address. The 8080 has 16 signals labeled A0 through A15, which give it the ability to address 2¹⁶, or 65,536, bytes of memory.
The 8080 is an 8-bit microprocessor that reads data from memory and writes data to memory 8 bits at a time. The chip includes eight signals labeled D0 through D7. These signals are the only ones on the chip that are both
inputs and outputs. When the microprocessor reads a byte of memory, the pins function as inputs; when the microprocessor writes a byte to memory, the pins function as outputs.
The other ten pins of the microprocessor are control signals. The RESET input, for example, is used to reset the microprocessor. The output signal WR indicates that the microprocessor needs to write a byte of memory into RAM. (The WR signal corresponds to the Write input of the RAM array.) In addition, other control signals appear on the D0 through D7 pins at a particular time while the chip reads instructions. Computer systems built around the 8080 generally use the 8228 System Controller chip to latch these additional control signals. I'll describe some control signals later on, but the 8080's control signals are notoriously messy, so unless you're going to actually design a computer based on the chip, it's best not to torture yourself with its control signals.
Let’s assume that the 8080 microprocessor is connected to 64 KB of memory that we have the ability to write bytes into and read bytes from independent of the microprocessor.
After the 8080 chip is reset, it reads the byte located at memory address 0000h into the microprocessor. It does this by outputting 16 zeros on the address signals A0 through A15. The byte it reads should be an 8080 instruction, and the process of reading this byte is known as an instruction fetch.
In the computer we built in Chapter 17, all instructions (except HLT) were 3 bytes in length, consisting of an opcode and a 2-byte address. In the 8080, instructions can be 1 byte, 2 bytes, or 3 bytes in length. Some instructions cause the 8080 to read a byte from a particular location in memory into the microprocessor. Some instructions cause the 8080 to write a byte from the microprocessor into a particular location in memory. Other instructions cause the 8080 to do something internally without using any RAM. After processing the first instruction, the 8080 accesses the second instruction in memory, and so forth. Together, these instructions constitute a computer
program that can do something interesting.
When the 8080 is running at its maximum speed of 2 MHz, each clock
cycle is 500 nanoseconds. (1 ÷ 2,000,000 cycles per second = 0.000000500 seconds.) The instructions in the Chapter 17 computer all required 4 clock cycles. Each 8080 instruction requires anywhere from 4 to 18 clock cycles. This means that each instruction is executed in 2 to 9 microseconds (millionths of a second).
Probably the best way to understand what a particular microprocessor is capable of doing is to examine its complete instruction set in a systematic manner.
The final computer in Chapter 17 had only 12 instructions. An 8-bit microprocessor could easily have as many as 256 instructions, each opcode corresponding to a particular 8-bit value. (It could actually have more if some instructions have 2-byte opcodes.) The 8080 doesn't go quite that far, but it does have 244 opcodes. That might seem like a lot, but all in all, the 8080 doesn't really do all that much more than the computer in Chapter 17. For example, if you need to do multiplication or division using an 8080, you still need to write your own little program to do it.

As you’ll recall from Chapter 17, each opcode in a processor’s instruc­ tion set is usually associated with a particular mnemonic, and some of these mnemonics might have arguments. But these mnemonics are solely for con­ venience in referring to the opcodes. The processor reads only bytes; it knows nothing about the text that makes up the mnemonics. (For purposes ofclar­ ity, I’ve taken some liberty with the mnemonics as they appear in Intel’s docu· mentation of the 8080.)
The Chapter 17 computer had two important instructions that we initially called Load and Store. Each of these instructions occupied 3 bytes of memory. The first byte of a Load instruction was the opcode, and the 2 bytes that followed the opcode indicated a 16-bit address. The processor loaded the byte at that address into the accumulator. Similarly, the Store instruction saved the contents of the accumulator in the address indicated in the instruction.
Later on, we discovered that we could abbreviate these two opcodes us­ ing mnemonics:
LOD A,[aaaa]
STO [aaaa],A
where A stands for the accumulator (the destination in the Load instruction and the source in the Store instruction) and aaaa indicates a 16-bit memory address, usually written as 4 hexadecimal digits.
The 8-bit accumulator in the 8080 is called A, just like the accumulator in Chapter 17. And like the computer in Chapter 17, the 8080 includes two instructions that do exactly the same thing as the Load and Store instructions. The 8080 opcodes for these two instructions are 32h and 3Ah, and each opcode is followed by a 16-bit address. The 8080 mnemonics are STA (standing for Store Accumulator) and LDA (Load Accumulator):
Opcode  Instruction
32      STA [aaaa],A
3A      LDA A,[aaaa]
In addition to the accumulator, the 8080 contains six registers that can also hold 8-bit values inside the microprocessor. These registers are very similar to the accumulator; indeed, the accumulator is considered to be a special type of register. Like the accumulator, the other six registers are latches; the processor can move bytes from memory into registers, and from registers back into memory. The other registers, however, aren't as versatile as the accumulator. When you add two 8-bit numbers, for example, the result always goes into the accumulator rather than into one of the other registers.
The six additional registers in the 8080 are named B, C, D, E, H, and L. The first question people usually ask is, "What happened to F and G?" and the second question is, "And what about I, J, and K?" The answer is that registers H and L are so called because they're special in a certain way. H stands for high and L stands for low. Very often the 8-bit quantities in H and L are treated in tandem as a 16-bit register pair named HL, H being the

high-order byte and L being the low-order byte. This 16-bit value is often used to address memory. We'll see how this works shortly.
Are all these registers necessary? Why didn't we need them in the Chapter 17 computer? In theory, they aren't necessary. But they turn out to be very convenient. Many computer programs juggle several numbers at the same time. It's easiest to do this if all the numbers are stored in microprocessor registers rather than memory. The program is usually faster as well: The fewer times a program needs to access memory, generally the faster it will run.
No fewer than 63 opcodes are devoted to a single 8080 instruction called MOV, which is short for Move. This instruction is just a single byte. The instruction usually moves the contents of one register into another (or the same) register. The large number of MOV instructions is a normal consequence of designing a microprocessor with seven registers (including the accumulator).
Here are the first 32 MOV instructions. Remember that the destination is the argument on the left and the source is the argument on the right:
Opcode  Instruction        Opcode  Instruction
40      MOV B,B            50      MOV D,B
41      MOV B,C            51      MOV D,C
42      MOV B,D            52      MOV D,D
43      MOV B,E            53      MOV D,E
44      MOV B,H            54      MOV D,H
45      MOV B,L            55      MOV D,L
46      MOV B,[HL]         56      MOV D,[HL]
47      MOV B,A            57      MOV D,A
48      MOV C,B            58      MOV E,B
49      MOV C,C            59      MOV E,C
4A      MOV C,D            5A      MOV E,D
4B      MOV C,E            5B      MOV E,E
4C      MOV C,H            5C      MOV E,H
4D      MOV C,L            5D      MOV E,L
4E      MOV C,[HL]         5E      MOV E,[HL]
4F      MOV C,A            5F      MOV E,A
These are handy instructions to have. Whenever you have a value in one register, you know you can move it to another register. Notice also the four instructions that use the HL register pair, such as
MOV B,[HL]
The LDA instruction shown earlier transfers a byte from memory into the accumulator; the 16-bit address of the byte directly follows the LDA opcode. This MOV instruction transfers a byte from memory into register B. But the address of the byte to be loaded into the register is stored in the HL register pair. How did HL come to hold a 16-bit memory address? Well, it could happen in a variety of ways. Maybe the address was calculated in some way.

To summarize, these two instructions
LDA A,[aaaa]
MOV B,[HL]
both load a byte from memory into the microprocessor, but they use two different methods to address memory. The first method is called direct ad­ dressing and the second method is called indexed addressing.
The second batch of 32 MOV instructions shows that the memory loca­ tion addressed by HL can also be a destination:
Opcode  Instruction        Opcode  Instruction
60      MOV H,B            70      MOV [HL],B
61      MOV H,C            71      MOV [HL],C
62      MOV H,D            72      MOV [HL],D
63      MOV H,E            73      MOV [HL],E
64      MOV H,H            74      MOV [HL],H
65      MOV H,L            75      MOV [HL],L
66      MOV H,[HL]         76      HLT
67      MOV H,A            77      MOV [HL],A
68      MOV L,B            78      MOV A,B
69      MOV L,C            79      MOV A,C
6A      MOV L,D            7A      MOV A,D
6B      MOV L,E            7B      MOV A,E
6C      MOV L,H            7C      MOV A,H
6D      MOV L,L            7D      MOV A,L
6E      MOV L,[HL]         7E      MOV A,[HL]
6F      MOV L,A            7F      MOV A,A

Several of these instructions, such as

MOV A,A
don’t do anything useful. But the instruction
MOV [HL],[HL]
doesn’t exist. The opcode that would otherwise correspond to that instruc­ tion is actually a HLT ( Halt) instruction.
A more revealing way to look at all these MOV opcodes is to examine the bit pattern of the opcode. The MOV opcode consists of the 8 bits
01dddsss
in which the letters ddd represent a 3-bit code that refers to a destination, and sss is a 3-bit code that refers to a source. These 3-bit codes are

000 = Register B
001 = Register C
010 = Register D
011 = Register E
100 = Register H
101 = Register L
110 = Contents of memory at address HL
111 = Accumulator

For example, the instruction

MOV L,E

is associated with the opcode

01101011

or 6Bh. You can check the preceding table to verify that.
So probably somewhere inside the 8080, the 3 bits labeled sss are used in an 8-Line-to-1-Line Data Selector, and the 3 bits labeled ddd are used to control a 3-Line-to-8-Line Decoder that determines which register latches a value.
It's also possible to use registers B and C as a 16-bit register pair BC, and registers D and E as a 16-bit register pair DE. If either register pair contains the address of a memory location that you want to use to load or store a byte,
you can use the following instructions:
Opcode  Instruction        Opcode  Instruction
02      STAX [BC],A        0A      LDAX A,[BC]
12      STAX [DE],A        1A      LDAX A,[DE]
Another type of Move instruction is called Move Immediate and is assigned the mnemonic MVI. The Move Immediate instruction is composed of 2 bytes. The first is the opcode, and the second is a byte of data. That byte is transferred from memory into one of the registers or to the memory location addressed by the HL register pair:
Opcode  Instruction
06      MVI B,xx
0E      MVI C,xx
16      MVI D,xx
1E      MVI E,xx
26      MVI H,xx
2E      MVI L,xx
36      MVI [HL],xx
3E      MVI A,xx
For example, after the instruction
MVI E,37h

the register E contains the byte 37h. This is considered to be a third method of addressing memory, called immediate addressing.
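Here's a little recap sketch of my own that contrasts the three methods. The addresses and values are invented for illustration, and it assumes that memory location 2000h happens to contain the byte 37h; all three approaches end with 37h in the accumulator:

LDA A,[2000h]   ; Direct addressing: the 16-bit address follows the opcode

MVI H,20h       ; Indexed addressing: first put 2000h into the HL register pair...
MVI L,00h
MOV A,[HL]      ; ...then load the byte that HL addresses

MVI A,37h       ; Immediate addressing: the data itself follows the opcode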
A collection of 32 opcodes do the four basic arithmetical operations we’re familiar with from the processor we developed in Chapter 17. These are addition (ADD), addition with carry (ADC), subtraction (SUB), and sub­ traction with borrow (SBB). In all cases, the accumulator is one of the two operands and is also the destination for the result:
Opcode  Instruction        Opcode  Instruction
80      ADD A,B            90      SUB A,B
81      ADD A,C            91      SUB A,C
82      ADD A,D            92      SUB A,D
83      ADD A,E            93      SUB A,E
84      ADD A,H            94      SUB A,H
85      ADD A,L            95      SUB A,L
86      ADD A,[HL]         96      SUB A,[HL]
87      ADD A,A            97      SUB A,A
88      ADC A,B            98      SBB A,B
89      ADC A,C            99      SBB A,C
8A      ADC A,D            9A      SBB A,D
8B      ADC A,E            9B      SBB A,E
8C      ADC A,H            9C      SBB A,H
8D      ADC A,L            9D      SBB A,L
8E      ADC A,[HL]         9E      SBB A,[HL]
8F      ADC A,A            9F      SBB A,A
Suppose A contains the byte 35h and register B contains the byte 22h. After executing
SUB A,B
the accumulator contains the byte 13h.
If A contains the byte 35h, and register H contains the byte 10h, and L contains the byte 7Ch, and the memory location 107Ch contains the byte 4Ah, the instruction
ADD A,[HL]
adds the byte in the accumulator (35h) and the byte addressed by the register pair HL (4Ah) and stores the result (7Fh) in the accumulator.
The ADC and SBB instructions allow the 8080 to add and subtract 16-bit, 24-bit, 32-bit, and larger numbers. For example, suppose the register pairs BC and DE both contain 16-bit numbers. You want to add them and put the result in BC. Here's how to do it:

MOV A,C     ; Low-order byte
ADD A,E
MOV C,A
MOV A,B     ; High-order byte
ADC A,D
MOV B,A
The two addition instructions are ADD for the low-order byte and ADC for the high-order byte. Any carry bit that results from the first addition is included in the second addition. But because you can add only with the accumula­ tor, this little snippet of code requires no fewer than 4 MOV instructions. Lots of MOV instructions usually show up in 8080 code.
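Subtraction of multibyte numbers follows the same pattern with SUB and SBB. Here's a sketch of my own that mirrors the addition above, subtracting the 16-bit number in DE from the 16-bit number in BC and leaving the difference in BC:

MOV A,C     ; Low-order byte
SUB A,E
MOV C,A
MOV A,B     ; High-order byte
SBB A,D
MOV B,A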
This is a good time to talk about the 8080 flags. In our processor in Chapter 17, we had a Carry flag and a Zero flag. The 8080 has three more, called Sign, Parity, and Auxiliary Carry. All the flags are stored in yet another 8-bit register called the Program Status Word (PSW). Instructions such as LDA, STA, or MOV don't affect the flags at all. The ADD, SUB, ADC, and SBB instructions do affect the flags, however, in the following way:
• The Sign flag is set to 1 if the most significant bit of the result is 1, meaning that the result is negative.
• The Zero flag is set to 1 if the result is 0.
• The Parity flag is set to 1 if the result has even parity, which
means that the number of 1 bits in the result is even. The Parity flag is 0 if the result has odd parity. Parity is sometimes used as a crude form of error checking. This flag isn't often used in 8080 programming.
• The Carry flag is set to 1 if an ADD or ADC operation results in a carry or if a SUB or SBB operation does not result in a carry. (This is different from the implementation of the Carry flag in the Chapter 17 computer.)
• The Auxiliary Carry flag is 1 if the operation results in a carry from the low nibble into the high nibble. This flag is used only for the DAA (Decimal Adjust Accumulator) instruction.
Two instructions affect the carry flag directly:
Opcode  Instruction  Meaning
37      STC          Set Carry flag to 1
3F      CMC          Complement Carry flag
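The table gives no direct way to clear the Carry flag to 0. One common idiom (my own illustration, not something from the table above) is simply to execute both instructions in a row:

STC     ; Set the Carry flag to 1
CMC     ; Complement it, leaving the Carry flag at 0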
The computer in Chapter 17 performed ADD, ADC, SUB, and SBB instructions (although not with nearly as much flexibility), but the 8080 does Boolean AND, OR, and XOR operations as well. Both arithmetic and logical operations are performed by the processor's Arithmetic Logic Unit (ALU).

Opcode  Instruction        Opcode  Instruction
A0      AND A,B            B0      OR A,B
A1      AND A,C            B1      OR A,C
A2      AND A,D            B2      OR A,D
A3      AND A,E            B3      OR A,E
A4      AND A,H            B4      OR A,H
A5      AND A,L            B5      OR A,L
A6      AND A,[HL]         B6      OR A,[HL]
A7      AND A,A            B7      OR A,A
A8      XOR A,B            B8      CMP A,B
A9      XOR A,C            B9      CMP A,C
AA      XOR A,D            BA      CMP A,D
AB      XOR A,E            BB      CMP A,E
AC      XOR A,H            BC      CMP A,H
AD      XOR A,L            BD      CMP A,L
AE      XOR A,[HL]         BE      CMP A,[HL]
AF      XOR A,A            BF      CMP A,A
The AND, XOR, and OR instructions perform bitwise operations. This means that the logical operation is performed on each pair of bits separately. For example,
MVI A,0Fh
MVI B,55h
AND A,B
The value in the accumulator will be 05h. If the third instruction were an OR, the result would be 5Fh. If the instruction were an XOR, the result would be 5Ah.
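You can verify these results by writing out the bits of the two values (simple binary arithmetic, spelled out here for convenience):

0Fh = 00001111
55h = 01010101

AND = 00000101 = 05h
OR  = 01011111 = 5Fh
XOR = 01011010 = 5Ah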
The CMP (Compare) instruction is just like the SUB instruction except that the result isn’t stored in the accumulator. In other words, the CMP performs a subtraction and then throws away the result. What’s the point?
The flags! The flags tell you the relationship between the 2 bytes that you compared. For example, consider the following instructions:
MVI B,25h
CMP A,B
After this instruction, the contents of A remain unchanged. However, the Zero flag is set if the value in A equals 25h. The Carry flag is set if the value in A is less than 25h.
The eight arithmetic and logic operations also have versions that oper­ ate on an immediate byte:

Opcode  Instruction        Opcode  Instruction
C6      ADI A,xx           E6      ANI A,xx
CE      ACI A,xx           EE      XRI A,xx
D6      SUI A,xx           F6      ORI A,xx
DE      SBI A,xx           FE      CPI A,xx
For example, the two lines shown above can be replaced with
CPI A,25h
Here are two miscellaneous 8080 instructions:
Opcode  Instruction
27      DAA
2F      CMA
CMA stands for Complement Accumulator. It performs a ones' complement of the value in the accumulator. Every 0 becomes a 1 and every 1 becomes a 0. If the accumulator is 01100101, the CMA instruction causes it to be 10011010. You can also complement the accumulator using the instruction
XRI A,FFh
DAA stands for Decimal Adjust Accumulator, as I mentioned earlier, and it's probably the most sophisticated single instruction in the 8080. A whole little section of the microprocessor is dedicated specifically to performing this instruction.
The DAA instruction helps a programmer implement decimal arithmetic using a method of representing numbers known as binary-coded decimal, or BCD. In BCD, each nibble of data may range only from 0000 through 1001, corresponding to decimal digits 0 through 9. The 8 bits of a byte can store two decimal digits in BCD format.
Suppose the accumulator contains the BCD value 27h. Because this is a BCD value, it actually refers to the decimal value 27. (Normally, the hexadecimal value 27h has the decimal equivalent 39.) Suppose also that register B contains the BCD value 94h. If you execute the instructions
MVI A,27h
MVI B,94h
ADD A,B
the accumulator will contain the value BBh, which, of course, isn’t a BCD value because the nibbles of BCD bytes never exceed 9. But now execute the instruction
DAA
Now the accumulator contains 21h, and the Carry flag is set. That's because the decimal sum of 27 and 94 equals 121. This can be handy if you need to do BCD arithmetic.
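The same trick extends to multibyte BCD numbers by pairing DAA with ADC, just as multibyte binary addition paired ADD with ADC earlier in this chapter. Here's a sketch of my own that adds the 4-digit BCD number in register pair DE to the one in BC, leaving the sum in BC:

MOV A,C     ; Low two BCD digits
ADD A,E
DAA         ; Adjust to BCD; a decimal carry sets the Carry flag
MOV C,A
MOV A,B     ; High two BCD digits
ADC A,D     ; Include the decimal carry
DAA
MOV B,A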
Very often it’s necessary to add 1 to a particular value or subtract 1 from a value. In the multiplication program in Chapter 1 7, we needed to subtract

1 from a value, and the way we did it was to add FFh, which is the two's complement value of -1. The 8080 includes special instructions for increasing a register or memory location by 1 (this is known as an increment) or decreasing by 1 (decrement):
Opcode  Instruction        Opcode  Instruction
04      INR B              05      DCR B
0C      INR C              0D      DCR C
14      INR D              15      DCR D
1C      INR E              1D      DCR E
24      INR H              25      DCR H
2C      INR L              2D      DCR L
34      INR [HL]           35      DCR [HL]
3C      INR A              3D      DCR A
The single-byte INR and DCR instructions affect all flags except the Carry flag. The 8080 also includes four Rotate instructions. These instructions shift
the contents of the accumulator 1 bit to the left or right:
Opcode  Instruction  Meaning
07      RLC          Rotate accumulator left
0F      RRC          Rotate accumulator right
17      RAL          Rotate accumulator left through carry
1F      RAR          Rotate accumulator right through carry
Only the Carry flag is affected by these instructions.
Suppose the accumulator contains the value A7h, or 10100111 in binary.
The RLC instruction shifts the bits left. The lowest bit (shifted out of the bottom) becomes the highest bit (shifted into the top) and also determines the state of the Carry flag. The result is 01001111, and the Carry flag is 1. The RRC instruction shifts the bits right in the same way. Beginning with 10100111, the result after an RRC instruction is 11010011, and the Carry flag is 1.
The RAL and RAR instructions work a little differently. The RAL instruction sets the Carry flag to the highest bit of the accumulator when shifting left but sets the lowest bit to the previous contents of the Carry flag. For example, if the accumulator contains 10100111 and the Carry flag is 0, RAL causes the accumulator to become 01001110 and the Carry flag to be 1. Similarly, under the same initial conditions RAR causes the accumulator to become 01010011 and the Carry flag to be set to 1.
The shift instructions come in handy when you’re multiplying a number by 2 (that’s a shift left) or dividing a number by 2 (a shift right).
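For example, here's a two-instruction sketch of my own that doubles the value in the accumulator. It assumes (as is true of the real 8080, though it isn't mentioned above) that the AND instruction always sets the Carry flag to 0:

AND A,A     ; A is unchanged, but the Carry flag becomes 0
RAL         ; Rotate left through carry: a 0 enters at the bottom,
            ;   and the old high bit lands in the Carry flag

Afterward the accumulator holds twice its original value (assuming the original value fit in 7 bits), and the Carry flag holds the bit that was shifted out.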
The memory that the microprocessor addresses is called random access memory (RAM) for a reason: The microprocessor can access any particular memory location simply by supplying an address of that location. RAM is like a book that we can open to any page. It’s not like a week’s worth of a news­ paper on microfilm. Finding something in Saturday’s edition requires us to scan through most of the week. Similarly, playing the last song on a cassette

tape requires us to fast forward through the whole side of the album. The term for microfilm or tape storage isn’t random access but sequential access.
Random access memory is definitely a good thing, particularly for microprocessors, but sometimes it's advantageous to treat memory a little differently. Here's a form of storage that's neither random nor sequential: Suppose you work in an office where people come to your desk to give you jobs to do. Each job involves a file folder of some sort. Often when you're working on one job, you find that before you can continue you must do a related job using another file folder. So you leave the first folder on your desk and put the second one on top of it to work on that. Now someone comes to your desk to give you yet another job that has higher priority than the earlier one. You're handed a new file folder and you work with that one on top of the other two. That job requires yet another file folder, and soon you have a pile of four file folders on your desk.
Notice that this pile is actually a very orderly way to store and keep track of all the jobs you’re doing. The topmost file folder always has the highest­ priority job. After you get rid of that one, the next one on the pile must be attended to, and so on. When you finally get rid of the last file folder on your desk (the first one you started with), you can go home.
The technical term for this form of storage is a stack. You’re stacking things from the bottom up and removing them from the top down. It’s also called last-in-first-out storage, or LIFO. The last thing put on the stack is the first thing taken off the stack. The first thing put on the stack is the last thing taken off the stack.
Computers also can use a stack, not for storing jobs but for storing num­ bers, and it’s something that turns out to be quite convenient. Putting some­ thing on the stack is called a push, and taking something off the stack is called a pop.
Suppose you were writing an assembly-language program that used reg­ isters A, B, and C. But you notice that you’ve reached a point where the program needs to do something else-another little calculation that also needs to use registers A, B, and C. You eventually want to come back to what you were doing before, however, and continue using A, B, and C with the values they previously had.
What you could do, of course, is simply store registers A, B, and C in various locations in memory and later load these locations back into the registers. But that requires keeping track of where you stored them. A much cleaner way to do it is to push the registers on the stack:
PUSH A
PUSH B
PUSH C
I’ll explain what these instructions actually do in a moment. For now, all we need to know is that they somehow save the contents of the registers in last­ in-first-out memory. Once these statements are executed, your program can use these registers for other purposes without worry. To get the earlier val­ ues back, you simply pop them from the stack in the reverse order, as shown at the top of the following page.

274
Chapter Nineteen
POP C
POP B
POP A
Remember: Last in, first out. Accidentally switching around these POP state­ ments would constitute a bug.
What’s particularly nice about the stack mechanism is that lots of differ­ ent sections of a program can use the stack without causing problems. For example, after the program pushes A, 8, and C on the stack, another sec­ tion of the program could decide it needs to do the same thing with regis­ ters C, D, and E:
PUSH C
PUSH D
PUSH E
Then all that’s necessary is for that section of the program to restore the registers this way:
POP E
POP D
POP C
before the first section pops C, B, and A.
How is the stack implemented? The stack is, first of all, just a section of normal RAM that isn't being used for anything else. The 8080 microprocessor contains a special 16-bit register that addresses this section of memory. That 16-bit register is called the Stack Pointer.
My examples of pushing and popping individual registers weren’t quite accurate for the 8080. The 8080 PUSH instruction actually stores 16-bit values on the stack, and the POP instruction retrieves them. So instead of instructions like PUSH C and POP C, we have the following 8 instructions:
Opcode  Instruction        Opcode  Instruction
C5      PUSH BC            C1      POP BC
D5      PUSH DE            D1      POP DE
E5      PUSH HL            E1      POP HL
F5      PUSH PSW           F1      POP PSW
The PUSH BC instruction stores registers B and C on the stack, and POP BC retrieves them. The abbreviation PSW in the last row refers to the Program Status Word, which, as you'll recall, is the 8-bit register that contains the flags. The two instructions in the bottom row actually push and pop both the accumulator and the PSW. If you want to save the contents of all the registers and flags, you can use
PUSH PSW
PUSH BC
PUSH DE
PUSH HL

When you later need to restore the contents of these registers, use the POP
instructions in reverse order:
POP HL
POP DE
POP BC
POP PSW
How does the stack work? Let’s assume the Stack Pointer is 8000h. The PUSH BC instruction causes the following to occur:
• The Stack Pointer is decremented to 7FFFh.
• The contents of register B are stored at the Stack Pointer address,
or 7FFFh.
• The Stack Pointer is decremented to 7FFEh.
• The contents of register C are stored at the Stack Pointer address,
or 7FFEh.
A POP BC instruction executed when the Stack Pointer is still 7FFEh un­ does everything:
• The contents of register C are loaded from the Stack Pointer address, or 7FFEh.
• The Stack Pointer is incremented to 7FFFh.
• The contents of register B are loaded from the Stack Pointer ad­ dress, or 7FFFh.
• The Stack Pointer is incremented to 8000h.
For every PUSH instruction, the stack increases 2 bytes in size. It’s pos­ sible-possibly due to a bug in a program-that the stack will get so big that it will begin to overwrite some code or data needed by a program. This is a problem known as stack overflow. Similarly, too many POP instructions can prematurely exhaust the contents of the stack, a condition known as stack underflow.
If you have 64 KB of memory connected to your 8080, you might want to initially set the Stack Pointer to 0000h. The first PUSH instruction decrements that address to FFFFh. The stack then occupies the area of memory with the very highest addresses, quite a distance from your programs, which will probably be in the area of memory starting at address 0000h.
The instruction to set the value of the stack register is LXI, which stands for Load Extended Immediate. These instructions also load 16-bit register pairs with the two bytes that follow the opcode:
Opcode  Instruction
01      LXI BC,xxxx
11      LXI DE,xxxx
21      LXI HL,xxxx
31      LXI SP,xxxx

The instruction
LXI BC,527Ah
is equivalent to
MVI B,52h
MVI C,7Ah
The LXI instruction saves a byte. In addition, the last LXI instruction in the preceding table is used to set the Stack Pointer to a particular value. It’s not uncommon for this instruction to be one of the first instructions that a microprocessor executes after being restarted:
0000h: LXI SP,0000h
It’s also possible to increment and decrement register pairs and the Stack Pointer as if they were 1 6-bit registers:
Opcode  Instruction        Opcode  Instruction
03      INX BC             0B      DCX BC
13      INX DE             1B      DCX DE
23      INX HL             2B      DCX HL
33      INX SP             3B      DCX SP
While I’m on the subject o f 1 6-bit instructions, let’s look a t a few more. The following instructions add the contents of 1 6-bit register pairs to the register pair HL:
Opcode  Instruction
09      DAD HL,BC
19      DAD HL,DE
29      DAD HL,HL
39      DAD HL,SP
These instructions could save a few bytes. For example, the first of these instructions would normally require 6 bytes:
MOV A,L
ADD A,C
MOV L,A
MOV A,H
ADC A,B
MOV H,A
The DAD instruction is normally used for calculating memory addresses. The only flag that the instruction affects is the Carry flag.
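For example, here's a sketch of my own that uses DAD to find one entry in a table of bytes. The label Table and the index value are invented for illustration:

LXI HL,Table    ; HL addresses the start of the table
LXI BC,0005h    ; BC holds the index of the entry we want
DAD HL,BC       ; Now HL addresses the sixth byte of the table
MOV A,[HL]      ; Load that byte into the accumulator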
Next let’s look at some miscellaneous instructions. These two opcodes arc followed by a 2-byte address and store and load the contents of the register pair HL at that address:
Opcode  Instruction       Meaning
22h     SHLD [aaaa],HL    Store HL Direct
2Ah     LHLD HL,[aaaa]    Load HL Direct

The L register is stored at address aaaa, and the H register is stored at address aaaa + 1.
These two instructions load the Program Counter or the Stack Pointer from the register pair HL:
Opcode  Instruction    Meaning
E9h     PCHL PC,HL     Load Program Counter from HL
F9h     SPHL SP,HL     Load Stack Pointer from HL
The PCHL instruction is actually a type of Jump. The next instruction that the 8080 executes is the one located at the address stored in the HL register pair. SPHL is another method to set the Stack Pointer.
These two instructions exchange the contents of HL first with the two bytes located on top of the stack and second with the register pair DE:
Opcode  Instruction     Meaning
E3h     XTHL HL,[SP]    Exchange top of stack with HL
EBh     XCHG HL,DE      Exchange DE and HL
I haven’t described the 8080Jump instructions yet, except for PCHL. As you’ll recall from Chapter 1 7, a processor includes a register called the Program Counter that contains the memory address the processor uses to retrieve the instructions that it executes. Normally the Program Counter causes the processor to execute instructions that are located sequentially in memory. But some instructions-usually named jump or Branch or Goto-­ cause the processor to deviate from this steady course. Such instructions cause the Program Counter to be loaded with another value. The next instruction that the processor fetches is somewhere else in memory.
While a plain old ordinary Jump instruction is certainly useful, conditional jumps are even better. These instructions cause the processor to jump to another address based on the setting of a particular flag, such as the Carry flag or the Zero flag. The presence of a conditional Jump instruction is what turned the Chapter 17 automated adding machine into a general-purpose digital computer.
The 8080 has five flags, four of which are used for conditional jumps. The 8080 supports nine different Jump instructions, including the unconditional Jump and conditional jumps based on whether the Zero, Carry, Parity, and Sign flags are 1 or 0.
Before I show these instructions to you, however, I want to introduce two other types of instructions that are related to the Jump. The first is the Call instruction. A Call is similar to a Jump except that prior to loading the Pro­ gram Counter with a new address, the processor saves the previous address. Where does it save that address? Why, on the stack, of course!
This strategy means that the Call instruction effectively saves a reminder of where it jumped from. The saved address allows the processor to eventually return to the original location. The returning instruction is called, appropriately, Return. The Return instruction pops 2 bytes from the stack and loads the Program Counter with that value.

The Call and Return instructions are extremely important features of any processor. They allow a programmer to implement subroutines, which are snippets of frequently used code. (By frequently I generally mean more than once.) Subroutines are the primary organizational elements of assembly-language programs.
Let’s look at an example. Suppose you’re writing an assembly-language program and you come to a point where you need to multiply 2 bytes. So you write some code that does precisely that, and you continue with the program. Now you come to another point where you need to multiply 2 bytes. Well, you already know how to multiply rwo numbers, so you can simply use the same instructions all over again. But do you simply enter the instruc­ tions into memory a second rime? I hope not. It’s a waste of rime and memory. What you’d rather do is just jump to the previous code. But the normal Jump doesn’t work either because there’s no way to rerum to the current place in the program. That’s what the Call and Return instruCtions let you do.
A group of instructions that multiply 2 bytes is an ideal candidate for a subroutine. Let's take a look at such a subroutine. In Chapter 17, the bytes to be multiplied (and the result) were stored in particular locations in memory. This 8080 subroutine instead multiplies the byte in register B by the byte in register C and puts the 16-bit product in register HL:
Multiply:   PUSH PSW        ; Save registers being altered
            PUSH BC
            SUB H,H         ; Set HL (result) to 0000h
            SUB L,L
            MOV A,B         ; The multiplier goes in A
            CPI A,00h       ; If it's 0, we're finished
            JZ AllDone
            MVI B,00h       ; Set high byte of BC to 0
Multloop:   DAD HL,BC       ; Add BC to HL
            DCR A           ; Decrement multiplier
            JNZ Multloop    ; Loop if it's not 0
AllDone:    POP BC          ; Restore saved registers
            POP PSW
            RET             ; Return
Notice that the first line of the subroutine begins with a label, which is the word Multiply. This label, of course, actually corresponds to a memory address where the subroutine is located. The subroutine begins with two PUSH instructions. Usually a subroutine should attempt to save (and later restore) any registers that it might need to use.
The subroutine then sets the contents of the H and L registers to 0. It could have used the MVI (Move Immediate) instructions rather than SUB instruc­ tions for this job, but that would have required 4 instruction bytes rather than 2. The register pair HL will hold the result of the multiplication when the subroutine is completed.

Next the subroutine moves the contents of register B (the multiplier) into A and checks if it's 0. If it's 0, the multiplication subroutine is complete because the product is 0. Since registers H and L are already 0, the subroutine can just use the JZ (Jump If Zero) instruction to skip to the two POP instructions at the end.
Otherwise, the subroutine sets register B to 0. Now the register pair BC contains a 16-bit multiplicand and A contains the multiplier. The DAD instruction adds BC (the multiplicand) to HL (the result). The multiplier in A is decremented and, as long as it's not 0, the JNZ (Jump If Not Zero) instruction causes BC to be added to HL again. This little loop will continue until BC is added to HL a number of times equal to the multiplier. (It's possible to write a more efficient multiplication subroutine using the 8080 shift instructions.)
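Here's a sketch of what such a shift-based subroutine might look like. The labels, comments, and register usage are my own invention rather than anything from Intel's documentation; like the version above, it multiplies the byte in register B by the byte in register C and puts the 16-bit product in HL, but it loops exactly 8 times (once for each bit of the multiplier) no matter how large the multiplier is:

Multiply2:  PUSH PSW        ; Save registers being altered
            PUSH BC
            PUSH DE
            MOV E,C         ; DE gets a 16-bit copy of the multiplicand
            MVI D,00h
            MOV A,B         ; The multiplier goes in A
            LXI HL,0000h    ; HL (the result) starts at 0
            MVI B,08h       ; Loop once for each of the 8 multiplier bits
NextBit:    RRC             ; Rotate the next multiplier bit into the Carry flag
            JNC SkipAdd     ; If that bit is 0, skip the addition
            DAD HL,DE       ; If it's 1, add the shifted multiplicand
SkipAdd:    XCHG HL,DE      ; Double the multiplicand: swap it into HL,
            DAD HL,HL       ;   add it to itself,
            XCHG HL,DE      ;   and swap it back into DE
            DCR B           ; Count down the bits
            JNZ NextBit
            POP DE          ; Restore saved registers
            POP BC
            POP PSW
            RET             ; Return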
A program that wishes to make use of this subroutine to multiply (for example) 25h by 12h uses the following code:
MVI B,25h
MVI C,12h
CALL Multiply
The CALL instruction saves the value of the Program Counter on the stack. The value saved on the stack is the address of the next instruction after the CALL instruction. Then the CALL instruction causes a jump to the instruc­ tion identified by the label Multiply. That’s the beginning of the subroutine. When the subroutine has calculated the product, it executes a RET (Return) instruction, which causes the Program Counter to be popped from the stack. The program continues with the next statement after the CALL instruction.
The 8080 instruction set includes conditional Call instructions and con­ ditional Return instructions, but these are used much less than the condi­ tional Jump instructions. The complete array of these instructions is shown in the following table:
Opcode  Instruction    Opcode  Instruction    Opcode  Instruction     Condition
C9      RET            C3      JMP aaaa       CD      CALL aaaa       None
C0      RNZ            C2      JNZ aaaa       C4      CNZ aaaa        Z not set
C8      RZ             CA      JZ aaaa        CC      CZ aaaa         Z set
D0      RNC            D2      JNC aaaa       D4      CNC aaaa        C not set
D8      RC             DA      JC aaaa        DC      CC aaaa         C set
E0      RPO            E2      JPO aaaa       E4      CPO aaaa        Odd parity
E8      RPE            EA      JPE aaaa       EC      CPE aaaa        Even parity
F0      RP             F2      JP aaaa        F4      CP aaaa         S not set
F8      RM             FA      JM aaaa        FC      CM aaaa         S set
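As a quick sketch of how the conditional jumps work together with the CMP instruction described earlier (the label, registers, and values here are my own illustration), this little loop executes exactly 10 times:

        MVI A,00h       ; Start a counter at 0
        MVI B,0Ah       ; The limit: 10
Loop:   INR A           ; Increment the counter
        CMP A,B         ; Compare the counter with the limit (setting the flags)
        JNZ Loop        ; Jump back as long as the Zero flag isn't set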

As you probably know, memory isn't the only thing connected to a microprocessor. A computer system usually requires input and output (I/O) devices that make it easier for humans to communicate with the machine. These devices usually include a keyboard and a video display.
How does the microprocessor communicate with these peripherals (as anything connected to a microprocessor that isn't memory is called)? Peripherals are built so that they have an interface similar to memory. A microprocessor can write into and read from a peripheral by specifying certain addresses that the peripheral responds to. In some microprocessors, peripherals actually replace some addresses that would normally be used to address memory. This configuration is known as memory-mapped I/O. In the 8080, however, 256 additional addresses beyond the normal 65,536 are specifically reserved for input and output devices. These are known as I/O ports. The I/O address signals are A0 through A7, but I/O accesses are distinguished from memory accesses through signals latched by the 8228 System Controller chip.
The OUT instruction writes the contents of the accumulator to a port addressed by the byte that follows the instruction. The IN instruction reads a byte into the accumulator.
Opcode  Instruction
D3      OUT pp
DB      IN pp
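As a little sketch of my own, this pair of instructions echoes a byte from one peripheral to another. The port numbers 02h and 03h are invented for illustration; the actual assignments depend on how a particular computer is wired:

IN 02h      ; Read a byte from input port 02h into the accumulator
OUT 03h     ; Write that byte to output port 03h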
Peripherals sometimes need to get the attention of the microprocessor. For example, when you press a key on a keyboard, it's usually helpful if the microprocessor knows about this event right away. This is accomplished by a mechanism called an interrupt, which is a signal connected from the peripheral to the INT input of the 8080.
When the 8080 is reset, however, it doesn't respond to interrupts. A program must execute the EI (Enable Interrupts) instruction to enable interrupts and can later execute DI (Disable Interrupts) to disable them:
Opcode  Instruction
F3      DI
FB      EI
The INTE output signal from the 8080 indicates when interrupts have been enabled. When a peripheral needs to interrupt the microprocessor, it sets the INT input of the 8080 to 1. The 8080 responds to that by fetching an instruction from memory, but control signals indicate that an interrupt is occurring. The peripheral usually responds by supplying one of the following instructions to the 8080:
Opcode  Instruction        Opcode  Instruction
C7      RST 0              E7      RST 4
CF      RST 1              EF      RST 5
D7      RST 2              F7      RST 6
DF      RST 3              FF      RST 7

These are called Restart instructions, and they're similar to Call instructions in that the current Program Counter is saved on the stack. But the Restart instructions then jump to specific locations: RST 0 jumps to address 0000h, RST 1 to address 0008h, and so forth, up to RST 7, which jumps to address 0038h. Located at these addresses are sections of code that deal with the interrupt. For example, an interrupt from the keyboard might cause a RST 4 instruction to be executed. At address 0020h begins some code to read a byte from the keyboard. (I'll explain this more fully in Chapter 21.)
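A keyboard handler at that address might look something like the following sketch. This is my own illustration: the port number and the storage address are invented, and the EI before the RET reflects the fact that the 8080 disables further interrupts whenever it accepts one:

0020h:  PUSH PSW        ; Save the accumulator and flags
        IN 02h          ; Read the byte from the keyboard port
        STA [3800h],A   ; Store it where the program can find it
        POP PSW
        EI              ; Allow interrupts once again
        RET             ; Resume the interrupted program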
So far I’ve described 243 opcodes. The 12 bytes that aren’t associated with any opcodes are 08h, 1Oh, 18h, 20h, 28h, 30h, 38h, CBh, D9h, DOh, EDh, and FDh. That brings the total to 255. There’s one more opcode I need to mention, and that’s this one:
Opcode  Instruction
00      NOP
NOP stands for (and is pronounced) no op, as in no operation. The NOP causes the processor to do absolutely nothing. What’s it good for? Filling space. The 8080 can usually execute a bunch of NOP instructions without anything bad happening.
I won’t go into nearly as much detail discussing the Motorola 6800 be­ cause many of the aspects of its design and functionality are quite similar to those of the 8080. Here are the 40 pins of the 6800:
[The MC6800 pin diagram: address outputs A0 through A15; data pins D0 through D7; power connections VCC and VSS; clock inputs φ1 and φ2; and control signals including RESET, HALT, IRQ, NMI, TSC, DBE, BA, VMA, and R/W.]
The VSS pins indicate Ground, and VCC is 5 volts. Like the 8080, the 6800 has 16 output Address signals and 8 Data signals used for both input and output. There's a RESET signal and a R/W (read/write) signal. The IRQ signal stands


for interrupt request. The signal timing of the 6800 is considered to be much simpler than that of the 8080. What the 6800 doesn't have is the concept of I/O ports. All input and output devices must be part of the 6800 memory address space.
The 6800 has a 16-bit Program Counter, a 16-bit Stack Pointer, an 8-bit Status Register (for flags), and two 8-bit accumulators called A and B. These are both considered accumulators (rather than B being considered just a register) because there is nothing that you can do with A that you can't also do with B. There are no additional 8-bit registers, however.
The 6800 instead has a 16-bit index register that can be used to hold a 16-bit address, much like the register pair HL is used in the 8080. For many instructions, an address can be formed from the sum of the index register and the byte that follows the opcode.
While the 6800 does just about the same operations as the 8080-loading, storing, adding, subtracting, shifting, jumping, calling-it should be obvious that the opcodes and the mnemonics are completely different. Here, for example, are the 6800 Branch instructions:
Opcode  Instruction  Meaning
20h     BRA          Branch
22h     BHI          Branch If Higher
23h     BLS          Branch If Lower or Same
24h     BCC          Branch If Carry Clear
25h     BCS          Branch If Carry Set
26h     BNE          Branch If Not Equal
27h     BEQ          Branch If Equal
28h     BVC          Branch If Overflow Clear
29h     BVS          Branch If Overflow Set
2Ah     BPL          Branch If Plus
2Bh     BMI          Branch If Minus
2Ch     BGE          Branch If Greater than or Equal to Zero
2Dh     BLT          Branch If Less than Zero
2Eh     BGT          Branch If Greater than Zero
2Fh     BLE          Branch If Less than or Equal to Zero
The 6800 doesn’t have a Parity flag like the 8080, but it does have a flag the 8080 doesn’t have-an Overflow flag. Some of these Branch instructions
depend on combinations of flags.
Of course the 8080 and 6800 instruction sets are different. The two chips
were designed about the same time by two different groups of engineers at two different companies. What this incompatibility means is that neither chip can execute the other chip’s machine codes. Nor can an assembly-language program written for one chip be translated into opcodes that run on the other chip. Writing computer programs that run on more than one processor is the subject of Chapter 24.
Here’s another interesting difference between the 8080 and the 6800: In both microprocessors, the instruction LDA loads the accumulator from a

specified memory address. In the 8080, for example, the following sequence of bytes, which is an 8080 LDA instruction,

3Ah  7Bh  34h

will load the accumulator with the byte stored at memory address 347Bh. Now compare that with the 6800 LDA instruction using the so-called 6800 extended addressing mode:

B6h  7Bh  34h

This sequence of bytes loads accumulator A with the byte stored at memory address 7B34h.
The difference is subtle. You expect the opcode to be different, of course: 3Ah for the 8080 and B6h for the 6800. But the two microprocessors treat the address that follows the opcode differently. The 8080 assumes that the low-order byte comes first, followed by the high-order byte. The 6800 assumes that the high-order byte comes first!
This fundamental difference in how Intel and Motorola microprocessors store multibyte values has never been resolved. To this very day, Intel mi­ croprocessors continue to store multibyte values with the least-significant byte first (that is, at the lowest memory address), and Motorola micropro­ cessors store multibyte values with the most-significant byte first.
These two methods are known as little-endian (the Intel way) and big-endian (the Motorola way). It might be fun to argue over which method is better, but before you do so, be aware that the term Big-Endian comes from Jonathan Swift's Gulliver's Travels and refers to the war between Lilliput and Blefuscu over which end of an egg to break before eating it. Such an argument is probably purposeless. (On the other hand, I feel obliged to confess that the approach I used in the Chapter 17 computer wasn't the one I personally prefer!) Despite neither method being intrinsically "right," the difference does create an additional incompatibility problem when sharing information between systems based on little-endian and big-endian machines.
What became of these two microprocessors? The 8080 was used in what some people have called the first personal computer but which is probably more accurately the first home computer. This is the Altair 8800, which appeared on the cover of the January 1975 issue of Popular Electronics.

When you look at the Altair 8800, the lights and switches on the front panel should seem familiar. This is the same type of primitive "control panel" interface that I proposed for the 64-KB RAM array in Chapter 16.
The 8080 was followed by the Intel 8085 and, more significantly, by the Z-80 chip made by Zilog, a rival of Intel founded by former Intel employee Federico Faggin, who had done important work on the 4004. The Z-80 was entirely compatible with the 8080 but added many more very useful instructions. In 1977, the Z-80 was used in the Radio Shack TRS-80 Model 1.
Also in 1977, the Apple Computer Company, founded by Steven Jobs and Stephen Wozniak, introduced the Apple II. The Apple II, however, used neither the 8080 nor the 6800 but instead used MOS Technology's less expensive 6502 chip, which was an enhancement of the 6800.
In June 1978, Intel introduced the 8086, which was a 16-bit microprocessor that could access 1 megabyte of memory. The 8086 opcodes weren't compatible with the 8080, but I should note that they included instructions to multiply and divide. A year later, Intel introduced the 8088, which internally was identical to the 8086 but externally accessed memory in bytes, thus allowing the microprocessor to use the more-prevalent 8-bit support chips designed for the 8080. IBM used the 8088 chip in its 5150 Personal Computer-commonly called the IBM PC-introduced in the fall of 1981.
IBM's entrance into the personal computer market had a great impact, with many companies releasing machines that were compatible with the PC. (What it means to be compatible will be explored more in subsequent chapters.) Over the years, "IBM PC compatible" has also implied "Intel inside," specifically Intel microprocessors of the so-called x86 family. The x86 family continued in 1982 with the 186 and 286 chips, in 1985 with the 32-bit 386 chip, in 1989 with the 486, and beginning in 1993 with the Intel Pentium line of microprocessors that are currently used in PC compatibles. While these
Intel microprocessors have ever-increasing instruction sets, they continue to support the opcodes of all earlier processors starting with the 8086.
The Apple Macintosh, first introduced in 1984, used the Motorola 68000, a 16-bit microprocessor that's a direct descendant of the 6800. The 68000 and its descendants (often called the 68K series) are some of the most beloved microprocessors ever made.
Since 1994, Macintosh computers have used the PowerPC microprocessor that was developed in a coalition of Motorola, IBM, and Apple. The PowerPC was designed with a type of microprocessor architecture known as RISC (Reduced Instruction Set Computing), which attempts to increase the speed of the processor by simplifying it in some respects. In a RISC computer, generally each instruction is the same length (32 bits on the PowerPC), memory accesses are restricted to just load and store instructions, and instructions do simple operations rather than complex ones. RISC processors usually have plenty of registers to avoid frequent accesses of memory.
The PowerPC can’t execute 68K code because it has a whole different instruction set. But the PowerPC microprocessors currently used in the Apple Macintoshes can emulate the 68K. An emulator program running on the PowerPC examines each opcode of a 68K program, one by one, and performs an appropriate action. It’s not as fast as native PowerPC code, but it works.
According to Moore’s Law, the number of transistors in microprocessors should double every 1 8 months. What are those many additional transistors used for?
Some of the transistors accommodate the increase in processor data width, from 4 bits to 8 bits to 16 bits to 32 bits. Another part of the increase is due to new instructions. Most microprocessors these days have instructions to do floating-point arithmetic (as I'll explain in Chapter 23); new instructions have also been added to microprocessors to do some of the repetitive calculations required to display pictures or movies on computer screens.
Modern processors use several techniques to help improve their speed. One is pipelining. When the processor is executing one instruction, it's reading in the next instructions, even to a certain extent anticipating how Jump instructions will alter the execution flow. Modern processors also include a cache (pronounced cash). This is an array of very fast RAM inside the processor that is used to store recently executed instructions. Because computer programs often execute small loops of instructions, the cache prevents these instructions from being repetitively reloaded. All these speed-improving features require more logic and more transistors in the microprocessor.
As I mentioned earlier, the microprocessor is only one part (although the most important part) of a complete computer system. We'll build such a system in Chapter 21, but first we must learn how to encode something else in memory besides opcodes and numbers. We must go back to first grade and learn again how to read and write text.

Chapter Twenty
ASCII and a Cast of Characters
Digital computer memory stores only bits, so anything that we want to work with on the computer must be stored in the form of bits. We've already seen how bits can represent numbers and machine
code. The next challenge must be text. After all, the great bulk of the accu­ mulated information of this world is in the form of text, and our libraries are full of books and magazines and newspapers. Although we’d eventually like to use our computers to store sounds and pictures and movies, text is a much easier place to begin.
To represent text in digital form, we must develop some kind of system in which each letter corresponds to a unique code. Numbers and punctua­ tion also occur in text, so codes for these must be developed as well. In short, we need codes for all alphanumeric characters. Such a system is sometimes known as a coded character set, and the individual codes are known as character codes.
The first question must be: How many bits do we need for these codes? The answer isn’t an easy one!
When we think about representing text using bits, let’s not get too far ahead of ourselves. We’re accustomed to seeing text nicely formatted on the pages of a book or in the columns of a magazine or newspaper. Paragraphs are neatly separated into lines of a consistent width. Yet this formatting isn’t essential to the text itself. When we read a short story in a magazine and years later encounter that same story in a book, we don’t think the story has changed just because the text column is wider in the book than in the magazine.

In other words, don't think about text as formatted into two-dimensional columns on the printed page. Think of text instead as a one-dimensional stream of letters, numbers, and punctuation marks, with perhaps an additional code to indicate the end of one paragraph and the start of another.
Again, if you read a story in a magazine and later see it in a book and the typeface is a little different, is that a big deal? If the magazine version begins

Call me Ishmael.

and the book version begins

Call me Ishmael.

is that something we really want to be concerned with just yet? Probably not. Yes, the typeface subtly affects the tone of the text, but the story hasn't been lost with the change of typeface. The typeface can always be changed back. There's no harm done.
Here’s another way we’re going to simplify the problem: Let’s stick to plain vanilla text. No italics, no boldface, no underlining, no colors, no outlined letters, no subscripts, no superscripts. And no accent marks. No A or e or Ji or o. Just the naked Larin alphabet as it’s used in 99 percent of English.
In our earlier studies of Morse code and Braille, we've already seen how the letters of the alphabet can be represented in a binary form. Although these systems are fine for their specific purposes, both have their failings when it comes to computers. Morse code, for example, is a variable-width code: It uses shorter codes for frequently used letters and longer codes for less common ones. Such a code is suitable for telegraphy, but it might be awkward for computers. In addition, Morse code doesn't differentiate between uppercase and lowercase versions of letters.
Braille is a fixed-width code, which is much preferable for computers. Every letter is represented by 6 bits. Braille also differentiates between uppercase and lowercase letters, although it does so with the use of a special escape code. This code indicates that the next character is uppercase. What this really means is that every capital letter requires two codes rather than one. Numbers are represented with a shift code: After that special code, the codes that follow are assumed to represent numbers until another shift code signals the return to letters.
Our goal here is to develop a coded character set so that a sentence such as
I have 27 sisters.
can be represented by a series of codes, each of which is a certain number of bits. Some of the codes will represent letters, some will represent punctuation marks, and some will represent numbers. There should even be a code that represents the space between words. There are 18 characters in that sentence (including the spaces between the words). The consecutive character codes for such a sentence are often referred to as a text string.
That we need codes for numbers in a text string such as 27 might seem odd because we’ve been using bits to represent numbers for many chapters

now. We may be tempted to assume that the codes for the 2 and 7 in this sentence are simply the binary numbers 10 and 111. But that's not necessarily the case. In the context of a sentence such as this, the characters 2 and 7 can be treated like any other character found in written English. They can have character codes that are completely unrelated to the actual values of the numbers.
Perhaps the most economical code for text is a 5-bit code that originated in an 1874 printing telegraph developed by Emile Baudot (pronounced baw-DOH), an officer in the French Telegraph Service; his code was adopted by the Service in 1877. This code was later modified by Donald Murray and standardized in 1931 by the Comité Consultatif International Télégraphique et Téléphonique (CCITT), which is now known as the International Telecommunication Union (ITU). The code is formally known as the International Telegraph Alphabet No. 2, or ITA-2, and it's more popularly known in the United States as Baudot, although it's more correctly called the Murray code.
In the twentieth century, Baudot was often used in teletypewriters. A Baudot teletypewriter has a keyboard that looks something like a typewriter, except that it has only 30 keys and a spacebar. Teletypewriter keys are actually switches that cause a binary code to be generated and sent down the teletypewriter's output cable, one bit after the other. A teletypewriter also contains a printing mechanism. Codes coming through the teletypewriter's input cable trigger electromagnets that print characters on paper.
Because Baudot is a 5-bit code, there are only 32 codes. The hexadecimal values of these codes range from 00h through 1Fh. Here's how the 32 available codes correspond to the letters of the alphabet:
Hex Code  Baudot Letter        Hex Code  Baudot Letter
  00                             10       E
  01      T                      11       Z
  02      Carriage Return        12       D
  03      O                      13       B
  04      Space                  14       S
  05      H                      15       Y
  06      N                      16       F
  07      M                      17       X
  08      Line Feed              18       A
  09      L                      19       W
  0A      R                      1A       J
  0B      G                      1B       Figure Shift
  0C      I                      1C       U
  0D      P                      1D       Q
  0E      C                      1E       K
  0F      V                      1F       Letter Shift

Code 00h isn't assigned to anything. Of the remaining 31 codes, 26 are assigned to letters of the alphabet, and the other five are indicated by italicized words or phrases in the table.

Code 04h is the Space code, which is used for the space separating words. Codes 02h and 08h are labeled Carriage Return and Line Feed. This terminology comes from the typewriter. When you're typing on a typewriter and reach the end of a line, you push a lever or button that does two things. First, it causes the carriage to be moved to the right so that the next line begins at the left side of the paper. That's a carriage return. Second, the typewriter rolls the carriage so that the next line is underneath the line you just finished. That's the line feed. In Baudot, separate keyboard keys generate these two codes. A Baudot teletypewriter printer responds to these two codes when printing.
Where are the numbers and punctuation marks in the Baudot system? That's the purpose of code 1Bh, identified in the table as Figure Shift. After the Figure Shift code, all subsequent codes are interpreted as numbers or punctuation marks until the Letter Shift code (1Fh) causes them to revert to the letters. Here are the codes for the numbers and punctuation:
Hex Code  Baudot Figure        Hex Code  Baudot Figure
  00                             10       3
  01      5                      11       +
  02      Carriage Return        12       Who Are You?
  03      9                      13       ?
  04      Space                  14       '
  05      #                      15       6
  06      ,                      16       $
  07      .                      17       /
  08      Line Feed              18       -
  09      )                      19       2
  0A      4                      1A       Bell
  0B      &                      1B       Figure Shift
  0C      8                      1C       7
  0D      0                      1D       1
  0E      :                      1E       (
  0F      =                      1F       Letter Shift
Actually, the code as formalized by the ITU doesn't define codes 05h, 0Bh, and 16h, and instead reserves them "for national use." The table shows how these codes were used in the United States. The same codes were often used for accented letters of some European languages. The Bell code is supposed to ring an audible bell on the teletypewriter. The "Who Are You?" code activates a mechanism whereby a teletypewriter can identify itself.
Like Morse code, this 5-bit code doesn't differentiate between uppercase and lowercase. The sentence
I SPENT $25 TODAY.
is represented by the following stream of hexadecimal data:
0C 04 14 0D 10 06 01 04 1B 16 19 01 1F 04 01 03 12 18 15 1B 07 02 08

Notice the three shift codes: 1Bh right before the number, 1Fh after the number, and 1Bh again before the final period. The line concludes with carriage-return and line feed codes.
Unfortunately, if you sent this stream of data to a teletypewriter printer twice in a row, it would come out like this:
I SPENT $25 TODAY.
8 '03,5 $25 TODAY.
What happened? The last shift code the printer received before the second line was a Figure Shift code, so the codes at the beginning of the second line were interpreted as numbers.
Problems like this are typical nasty results of using shift codes. Although Baudot is certainly an economical code, it’s probably preferable to use unique codes for numbers and punctuation, as well as separate codes for lowercase and uppercase letters.
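To make the shift-state behavior concrete, here's a minimal sketch in Python (mine, not the book's) that decodes a stream of Baudot codes; the tables are abbreviated to just the codes needed for the example sentence:

    # A minimal sketch (not from the book): decoding Baudot with shift state.
    # Only the codes needed for the example sentence are included here.
    LETTERS = {0x0C: "I", 0x04: " ", 0x14: "S", 0x0D: "P", 0x10: "E",
               0x06: "N", 0x01: "T", 0x03: "O", 0x12: "D", 0x18: "A",
               0x15: "Y", 0x02: "\r", 0x08: "\n"}
    FIGURES = {0x16: "$", 0x19: "2", 0x01: "5", 0x07: ".",
               0x04: " ", 0x02: "\r", 0x08: "\n"}
    FIGURE_SHIFT, LETTER_SHIFT = 0x1B, 0x1F

    def decode_baudot(codes):
        table = LETTERS              # the decoder starts out in letter mode
        text = []
        for code in codes:
            if code == FIGURE_SHIFT:
                table = FIGURES      # all following codes are figures...
            elif code == LETTER_SHIFT:
                table = LETTERS      # ...until shifted back to letters
            else:
                text.append(table.get(code, "?"))
        return "".join(text)

    stream = [0x0C, 0x04, 0x14, 0x0D, 0x10, 0x06, 0x01, 0x04, 0x1B, 0x16,
              0x19, 0x01, 0x1F, 0x04, 0x01, 0x03, 0x12, 0x18, 0x15, 0x1B,
              0x07, 0x02, 0x08]
    print(decode_baudot(stream))     # prints: I SPENT $25 TODAY.

A real teletypewriter's shift state persists from one line to the next, which is exactly why sending the stream twice produces the garbled second line shown above.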
So if we want to figure out how many bits we need for a better character encoding system than Baudot, just add them up: We need 52 codes just for the uppercase and lowercase letters and 10 codes for the digits 0 through 9. We're up to 62 already. Throw in a few punctuation marks, and we top 64 codes, which means we need more than 6 bits. But we seem to have lots of leeway before we exceed 128 characters, which would require 8 bits.
So the answer is 7. We need 7 bits to represent the characters of English text if we want uppercase and lowercase with no shifting.
And what are these codes? Well, the actual codes can be anything we want. If we were going to build our own computer, and we were going to build every piece of hardware required by this computer, and we were going to program this computer ourselves and never use the computer to connect to any other computer, we could make up our own codes. All we need do is assign every character we'll be using a unique code.
But since it’s rarely the case that computers are built and used in isola­ tion, it makes more sense for everyone to agree to use the same codes. That way, the computers that we build can be more compatible with one another and maybe even actually exchange textual information.
We also probably shouldn’t assign codes in a haphazard manner. For example, when we work with text on the computer, certain advantages ac­ crue if the letters of the alphabet are assigned to sequential codes. This or­ dering scheme makes alphabetizing and sorting easier, for example.
Fortunately, such a standard has already been developed. It's called the American Standard Code for Information Interchange, abbreviated ASCII, and referred to with the unlikely pronunciation ASS-key. It was formalized in 1967 and remains the single most important standard in the entire computer industry. With one big exception (which I'll describe later), whenever you deal with text on a computer you can be sure that ASCII is involved in some way.
ASCII is a 7-bit code using binary codes 0000000 through 1111111, which are hexadecimal codes 00h through 7Fh. Let's take a look at the ASCII codes, but let's not start at the very beginning, because the first 32 codes are conceptually a bit more difficult than the rest of the codes. I'll begin with
the second batch of 32 codes, which includes punctuation and the ten numeric digits. This table shows the hexadecimal code and the character that corresponds to that code:
Hex Code  ASCII Character      Hex Code  ASCII Character
  20      Space                  30       0
  21      !                      31       1
  22      "                      32       2
  23      #                      33       3
  24      $                      34       4
  25      %                      35       5
  26      &                      36       6
  27      '                      37       7
  28      (                      38       8
  29      )                      39       9
  2A      *                      3A       :
  2B      +                      3B       ;
  2C      ,                      3C       <
  2D      -                      3D       =
  2E      .                      3E       >
  2F      /                      3F       ?
Notice that 20h is the space character that divides words and sentences.
The next 32 codes include the uppercase letters and some additional punctuation. Aside from the @ sign and the underscore, these punctuation symbols aren't normally found on typewriters. They're all now standard on computer keyboards.
Hex Code  ASCII Character      Hex Code  ASCII Character
  40      @                      50       P
  41      A                      51       Q
  42      B                      52       R
  43      C                      53       S
  44      D                      54       T
  45      E                      55       U
  46      F                      56       V
  47      G                      57       W
  48      H                      58       X
  49      I                      59       Y
  4A      J                      5A       Z
  4B      K                      5B       [
  4C      L                      5C       \
  4D      M                      5D       ]
  4E      N                      5E       ^
  4F      O                      5F       _

The next 32 characters include all the lowercase letters and some additional punctuation, again not often found on typewriters:
Hex Code  ASCII Character      Hex Code  ASCII Character
  60      `                      70       p
  61      a                      71       q
  62      b                      72       r
  63      c                      73       s
  64      d                      74       t
  65      e                      75       u
  66      f                      76       v
  67      g                      77       w
  68      h                      78       x
  69      i                      79       y
  6A      j                      7A       z
  6B      k                      7B       {
  6C      l                      7C       |
  6D      m                      7D       }
  6E      n                      7E       ~
Notice that this table is missing the last character, corresponding to code 7Fh. If you're keeping count, the three tables here show a total of 95 characters. Because ASCII is a 7-bit code, 128 codes are possible, so 33 more codes should be available. I'll get to those shortly.
The text string

Hello, you!

can be represented in ASCII using the hexadecimal codes
48 65 6C 6C 6F 2C 20 79 6F 75 21
Notice the comma (code 2C), the space (code 20), and the exclamation point (code 21), as well as the codes for the letters. Here's another short sentence:
I am 12 years old.

and its ASCII representation:
49 20 61 6D 20 31 32 20 79 65 61 72 73 20 6F 6C 64 2E
Notice that the number 12 in this sentence is represented by the hexadecimal numbers 31h and 32h, which are the ASCII codes for the digits 1 and 2. When the number 12 is part of a text stream, it should not be represented by the hexadecimal codes 01h and 02h, or the BCD code 12h, or the hexadecimal code 0Ch. These other codes all mean something else in ASCII.
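If you'd like to see this for yourself, here's a minimal sketch (mine, not the book's) that converts a text string to its ASCII codes in hexadecimal and back again:

    # A minimal sketch (not from the book): a text string as ASCII codes.
    sentence = "I am 12 years old."
    codes = [ord(c) for c in sentence]          # ord() gives the character code
    print(" ".join(f"{c:02X}" for c in codes))  # 49 20 61 6D 20 31 32 ...
    print("".join(chr(c) for c in codes))       # back to the original text
    # The characters "1" and "2" encode as 31h and 32h, not 01h and 02h.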

A particular uppercase letter in ASCII differs from its lowercase counterpart by 20h. This fact makes it fairly easy to write some code that (for example) capitalizes a string of text. Suppose a certain area of memory contains a text string, one character per byte. The following 8080 subroutine assumes that the address of the first character in the text string is stored in register HL. Register C contains the length of that text string, which is the number of characters:
Capitalize: MOV  A,C          C = number of characters left
            CPI  A,00h        Compare with 0
            JZ   AllDone      If C is 0, we're finished
            MOV  A,[HL]       Get the next character
            CPI  A,61h        Check if it's less than 'a'
            JC   SkipIt       If so, ignore it
            CPI  A,7Bh        Check if it's greater than 'z'
            JNC  SkipIt       If so, ignore it
            SBI  A,20h        It's lowercase, so subtract 20h
            MOV  [HL],A       Store the character
SkipIt:     INX  HL           Increment the text address
            DCR  C            Decrement the counter
            JMP  Capitalize   Go back to the top
AllDone:    RET
The statement that subtracts 20h from the lowercase letter to convert it to uppercase can be replaced with this:
ANI A,DFh
The ANI instruction is an AND Immediate. It performs a bitwise AND operation between the value in the accumulator and the value DFh, which is 11011111 in binary. By bitwise, I mean that the instruction performs an AND operation between each pair of corresponding bits that make up the two numbers. This AND operation preserves all the bits in A except the third from the left, which is set to 0. Setting that bit to 0 also effectively converts an ASCII lowercase letter to uppercase.
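Here's the same trick in a short Python sketch (mine, not the book's), showing that subtracting 20h and clearing that bit with an AND give the same result for lowercase letters:

    # A minimal sketch (not from the book): two equivalent ways to capitalize.
    def capitalize(text):
        out = []
        for c in text:
            if "a" <= c <= "z":
                out.append(chr(ord(c) - 0x20))   # subtract 20h, like SBI A,20h
            else:
                out.append(c)                    # leave everything else alone
        return "".join(out)

    print(capitalize("Hello, you!"))             # prints: HELLO, YOU!
    print(chr(ord("q") & 0xDF))                  # AND with DFh also gives Q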
The 95 codes shown above are said to refer to graphic characters because they have a visual representation. ASCII also includes 33 control characters that have no visual representation but instead perform certain functions. For the sake of completeness, here are the 33 ASCII control characters, but don't worry if they seem mostly incomprehensible. At the time ASCII was developed, it was intended mostly for teletypewriters, and many of these codes are currently obscure.

Hex Code  Acronym  Control Character Name
  00      NUL      Null (Nothing)
  01      SOH      Start of Heading
  02      STX      Start of Text
  03      ETX      End of Text
  04      EOT      End of Transmission
  05      ENQ      Enquiry (i.e., Inquiry)
  06      ACK      Acknowledge
  07      BEL      Bell
  08      BS       Backspace
  09      HT       Horizontal Tabulation
  0A      LF       Line Feed
  0B      VT       Vertical Tabulation
  0C      FF       Form Feed
  0D      CR       Carriage Return
  0E      SO       Shift-Out
  0F      SI       Shift-In
  10      DLE      Data Link Escape
  11      DC1      Device Control 1
  12      DC2      Device Control 2
  13      DC3      Device Control 3
  14      DC4      Device Control 4
  15      NAK      Negative Acknowledge
  16      SYN      Synchronous Idle
  17      ETB      End of Transmission Block
  18      CAN      Cancel
  19      EM       End of Medium
  1A      SUB      Substitute Character
  1B      ESC      Escape
  1C      FS       File Separator or Information Separator 4
  1D      GS       Group Separator or Information Separator 3
  1E      RS       Record Separator or Information Separator 2
  1F      US       Unit Separator or Information Separator 1
  7F      DEL      Delete
The idea here is that control characters can be intermixed with graphic characters to do some rudimentary formatting of the text. This is easiest to understand if you think of a device-such as a teletypewriter or a simple printer-that types characters on a page in response to a stream of ASCII codes. The device’s printing head normally responds to character codes by printing a character and moving one space to the right. The most important control characters alter this normal behavior.
For example, consider the hexadecimal character string
41 09 42 09 43 09

The 09 character is a Horizontal Tabulation code, or Tab for short. If you think of all the horizontal character positions on the printer page as being numbered starting with 0, the Tab code usually means to print the next character at the next horizontal position that's a multiple of 8, like this:
A       B       C
This is a handy way to keep text lined up in columns.
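Here's a minimal sketch (mine, not the book's) of how a printing device might expand Tab codes to the next column that's a multiple of 8:

    # A minimal sketch (not from the book): expanding Tab (09h) codes to
    # the next column position that's a multiple of 8.
    def expand_tabs(codes, tab_width=8):
        out, column = [], 0
        for code in codes:
            if code == 0x09:                       # Horizontal Tabulation
                spaces = tab_width - (column % tab_width)
                out.append(" " * spaces)
                column += spaces
            else:
                out.append(chr(code))
                column += 1
        return "".join(out)

    print(expand_tabs([0x41, 0x09, 0x42, 0x09, 0x43, 0x09]))
    # prints: A       B       C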
Even today, many computer printers respond to a Form Feed code (0Ch) by ejecting the current page and starting a new page.
The Backspace code can be used for printing composite characters on some old printers. For example, suppose the computer controlling the teletypewriter wanted to display a lowercase e with a grave accent mark, like so: è. This could be achieved by using the hexadecimal codes 65 08 60.
By far the most important control codes are Carriage Return and Line Feed, which have the same meaning as the similar Baudot codes. On a printer, the Carriage Return code moves the printing head to the left side of the page, and the Line Feed code moves the printing head one line down. Both codes are generally required to go to a new line. A Carriage Return can be used by itself to print over an existing line, and a Line Feed can be used by itself to skip to the next line without moving to the left margin.
Although ASCII is the dominant standard in the computing world, it isn't used on many of IBM's larger computer systems. In connection with the System/360, IBM developed its own 8-bit character code known as the Extended BCD Interchange Code, or EBCDIC (pronounced EBB-see-dick), which was an extension of an earlier 6-bit code known as BCDIC, which was derived from codes used on IBM punch cards. This style of punch card, capable of storing 80 characters of text, was introduced by IBM in 1928 and used for over 50 years.
[Figure: an IBM punch card, with the letters A through Z and the digits 0 through 9 printed along the top and encoded as patterns of rectangular holes punched in the columns below.]

When considering the relationship between punch cards and their associated 8-bit EBCDIC character codes, keep in mind that these codes evolved over many decades under several different types of technologies. For that reason, don't expect to discover too much logic or consistency.
A character is encoded on a punch card by a combination of one or more rectangular holes punched in a single column. The character itself is often
printed near the top of the card. The lower 10 rows are identified by number and are known as the 0-row, the 1-row, and so forth through the 9-row. The unnumbered row above the 0-row is called the 11-row, and the top row is called the 12-row. There is no 10-row.
More IBM punch card terminology: Rows 0 through 9 are known as the digit rows, or digit punches. Rows 11 and 12 are known as the zone rows, or zone punches. And some IBM punch card confusion: Sometimes rows 0 and 9 are considered to be zone rows rather than digit rows.
An 8-bit EBCDIC character code is composed of a high-order nibble (4-bit value) and a low-order nibble. The low-order nibble is the BCD code corresponding to the digit punches of the character. The high-order nibble is a code corresponding (in a fairly arbitrary way) to the zone punches of the character. You'll recall from Chapter 19 that BCD stands for binary-coded decimal, a 4-bit code for digits 0 through 9.
For the digits 0 through 9, there are no zone punches. That lack of punches corresponds to a high-order nibble of 1111. The low-order nibble is the BCD code of the digit punch. Here's a table of EBCDIC codes for the digits 0 through 9:
Hex Code  EBCDIC Character
  F0      0
  F1      1
  F2      2
  F3      3
  F4      4
  F5      5
  F6      6
  F7      7
  F8      8
  F9      9
For the uppercase letters, a zone punch of just the 12-row is indicated by the nibble 1100, a zone punch of just the 11-row is indicated by the nibble 1101, and a zone punch of just the 0-row is indicated by the nibble 1110. The EBCDIC codes for the uppercase letters are
Hex Code  EBCDIC Character     Hex Code  EBCDIC Character     Hex Code  EBCDIC Character
  C1      A                      D1      J                      E2      S
  C2      B                      D2      K                      E3      T
  C3      C                      D3      L                      E4      U
  C4      D                      D4      M                      E5      V
  C5      E                      D5      N                      E6      W
  C6      F                      D6      O                      E7      X
  C7      G                      D7      P                      E8      Y
  C8      H                      D8      Q                      E9      Z
  C9      I                      D9      R

Notice the gaps in the numbering of these codes. In some applications, these gaps can be maddening when you're writing programs using EBCDIC text.

The lowercase letters have the same digit punches as the uppercase letters but different zone punches. For lowercase letters a through i, the 12-row and 0-row are punched, corresponding to the code 1000. For j through r, the 12-row and 11-row are punched; this is the code 1001. For the letters s through z, the 11-row and 0-row are punched, the code 1010. The EBCDIC codes for the lowercase letters are
Hex Code  EBCDIC Character     Hex Code  EBCDIC Character     Hex Code  EBCDIC Character
  81      a                      91      j                      A2      s
  82      b                      92      k                      A3      t
  83      c                      93      l                      A4      u
  84      d                      94      m                      A5      v
  85      e                      95      n                      A6      w
  86      f                      96      o                      A7      x
  87      g                      97      p                      A8      y
  88      h                      98      q                      A9      z
  89      i                      99      r
Of course, there are other EBCDIC codes for punctuation and control characters, but it's hardly necessary to do a full-blown exploration of this system.

It might seem as if each column of an IBM punch card is sufficient to encode 12 bits of information. Each hole is a bit, right? So it should be possible to encode ASCII character codes on a punch card using only 7 of the 12 positions in each column. But in practice, this doesn't work very well. Too many holes get punched, threatening the physical integrity of the card. Many of the 8-bit codes in EBCDIC aren't defined, suggesting that the use of 7 bits in ASCII makes more sense.

At the time ASCII was being developed, memory was very expensive. Some people felt that ASCII should be a 6-bit code using a shift character to differentiate between lowercase and uppercase to conserve memory. Once that idea was rejected, others believed that ASCII should be an 8-bit code because even at that time it was considered more likely that computers would have 8-bit architectures than 7-bit architectures. Of course, 8-bit bytes are now the standard. Although ASCII is technically a 7-bit code, it's almost universally stored as 8-bit values.
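As a quick illustration of why those gaps are a nuisance, here's a sketch (mine, not the book's) that converts EBCDIC uppercase letters to ASCII; a single subtraction won't do, so each range needs its own offset:

    # A minimal sketch (not from the book): converting EBCDIC uppercase
    # letters to ASCII. The gaps after C9h and D9h mean that one simple
    # subtraction won't work; each range needs its own offset.
    def ebcdic_upper_to_ascii(code):
        if 0xC1 <= code <= 0xC9:        # A through I
            return code - 0xC1 + ord("A")
        if 0xD1 <= code <= 0xD9:        # J through R
            return code - 0xD1 + ord("J")
        if 0xE2 <= code <= 0xE9:        # S through Z
            return code - 0xE2 + ord("S")
        raise ValueError("not an EBCDIC uppercase letter")

    codes = [0xC8, 0xC5, 0xD3, 0xD3, 0xD6]
    print("".join(chr(ebcdic_upper_to_ascii(c)) for c in codes))
    # prints: HELLO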
The equivalence of bytes and characters is certainly convenient because we can get a rough sense of how much computer memory a particular text document requires simply by counting the characters. To some, the kilos and megas of computer storage are more comprehensible when expressed in terms of text.
For example, a traditional double-spaced typewritten 8½-by-11-inch page with 1-inch margins has about 27 lines of text. Each line is about 6½ inches wide with 10 characters per inch, for a total of about 1750 bytes. A single-spaced typewritten page has about double that, or 3.5 kilobytes.
A page in The New Yorker magazine has 3 columns of text with 60 lines per column and about 40 characters per line. That’s 7200 characters (or bytes) per page.
The New York Times has six columns of text per page. If the entire page is covered with text without any titles or pictures (which is highly unusual), each column has 155 lines of about 35 characters each. The entire page has 32,550 characters, or 32 kilobytes.
A hardcover book has about 500 words per page. An average word is about 5 letters-actually 6 characters, counting the space between words. So a book has about 3000 characters per page. Let’s say the average book has 333 pages, which may be a made-up figure but nicely implies that the average book is about 1 million bytes, or 1 megabyte.
Of course, books vary all over the place:
F. Scott Fitzgerald’s The Great Gatsby is about 300 kilobytes.
j. D. Salinger’s Catcher in the Rye is about 400 kilobytes.
Mark Twain’s The Adventures of Huckleberry Finn is about 540 kilobytes. John Steinbeck’s The Grapes of Wrath is about a megabyte.
Herman Melville’s Moby Dick is about 1 .3 megabytes.
Henry Fielding’s The History of Tom Jones is about 2.25 megabytes. Margaret Mitchell’s Gone with the Wind is about 2.5 megabytes.
Stephen King’s complete and uncut The Stand is about 2.7 megabytes.
Leo Tolstoy’s War and Peace is about 3.9 megabytes.
Marcel Proust’s Remembrance of Things Past is about 7.7 megabytes.
The United States Library of Congress has about 20 million books for a total of 20 trillion characters, or 20 terabytes, of text data. (It has a bunch of photographs and sound recordings as well.)
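If you'd like to play with estimates like these yourself, here's a trivial sketch (mine, not the book's) that turns pages, words per page, and characters per word into byte counts, assuming one byte per character:

    # A minimal sketch (not from the book): estimating text sizes in bytes,
    # assuming one byte per character, as in 8-bit ASCII storage.
    def book_bytes(pages, words_per_page=500, chars_per_word=6):
        return pages * words_per_page * chars_per_word

    print(book_bytes(333))               # 999000: about 1 megabyte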
Although ASCII is certainly the most important standard in the computer industry, it isn't perfect. The big problem with the American Standard Code for Information Interchange is that it's just too darn American! Indeed, ASCII is hardly suitable even for other nations whose principal language is English. Although ASCII includes a dollar sign, where is the British pound sign? And what about the accented letters used in many Western European languages? To say nothing of the non-Latin alphabets used in Europe, including Greek, Arabic, Hebrew, and Cyrillic. Or the Brahmi scripts of India and Southeast Asia, including Devanagari, Bengali, Thai, and Tibetan. And how can a 7-bit code possibly handle the tens of thousands of ideographs of Chinese, Japanese, and Korean and the ten thousand-odd Hangul syllables of Korean?

Even when ASCII was being developed, the needs of some other nations were kept in mind, although without much consideration for non-Latin alphabets. According to the published ASCII standard, ten ASCII codes (40h, 5Bh, 5Ch, 5Dh, 5Eh, 60h, 7Bh, 7Ch, 7Dh, and 7Eh) are available to be redefined for national uses. In addition, the number sign (#) can be replaced by the British pound sign (£), and the dollar sign ($) can be replaced by a generalized currency sign (¤) if necessary. Obviously, replacing symbols

makes sense only when everyone involved in using a particular text document containing these redefined codes knows about the change.
Because many computer systems store characters as 8-bit values, it's possible to devise an extended ASCII character set that contains 256 characters rather than just 128. In such a character set, codes 00h through 7Fh are defined just as they are in ASCII; codes 80h through FFh can be something else entirely. This technique has been used to define additional character codes to accommodate accented letters and non-Latin alphabets. As an example, here's a 96-character extension of ASCII called the Latin Alphabet No. 1 that defines characters for codes A0h through FFh. In this table, the high-order nibble of the hexadecimal character code is shown in the top row; the low-order nibble is shown in the left column.
      A-                  B-    C-    D-    E-    F-
-0    (no-break space)    °     À     Ð     à     ð
-1    ¡                   ±     Á     Ñ     á     ñ
-2    ¢                   ²     Â     Ò     â     ò
-3    £                   ³     Ã     Ó     ã     ó
-4    ¤                   ´     Ä     Ô     ä     ô
-5    ¥                   µ     Å     Õ     å     õ
-6    ¦                   ¶     Æ     Ö     æ     ö
-7    §                   ·     Ç     ×     ç     ÷
-8    ¨                   ¸     È     Ø     è     ø
-9    ©                   ¹     É     Ù     é     ù
-A    ª                   º     Ê     Ú     ê     ú
-B    «                   »     Ë     Û     ë     û
-C    ¬                   ¼     Ì     Ü     ì     ü
-D    (soft hyphen)       ½     Í     Ý     í     ý
-E    ®                   ¾     Î     Þ     î     þ
-F    ¯                   ¿     Ï     ß     ï     ÿ
The character for code A0h is defined as a no-break space. Usually, when a computer program formats text into lines and paragraphs, it breaks each line at a space character, which is ASCII code 20h. Code A0h is supposed to be displayed as a space but can't be used for breaking a line. A no-break space
might be used in the text "WW II," for example. Code ADh is defined as a soft hyphen. This is a hyphen used to separate syllables in the middle of words. It appears on the printed page only when it's necessary to break a word between two lines.
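To see an extended character set in action, here's a small sketch (mine, not the book's) that decodes bytes above 7Fh using Latin Alphabet No. 1, which survives today as the ISO 8859-1 (or "latin-1") encoding:

    # A minimal sketch (not from the book): bytes 00h-7Fh decode as ASCII,
    # while bytes A0h-FFh take their meanings from Latin Alphabet No. 1.
    data = bytes([0x65, 0xE8, 0x20, 0xA3, 0x32, 0x35])
    print(data.decode("latin-1"))    # prints: eè £25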
Unfortunately, many different extensions of ASCII have been defined over the decades, leading to much confusion and incompatibility. ASCII has been extended in a more radical way to encode the ideographs of Chinese, Japanese, and Korean. In one popular encoding, called Shift-JIS (Japanese Industrial Standard)-

Chapter Twenty-Two
address 0100h. This is how you run programs that are located on the disk. For example, if you type
CALC
following the CP/M prompt, and if a file named CALC.COM exists on the disk, the CCP loads that file into memory starting at address 0100h and then executes the program by jumping to the machine-code instruction located at address 0100h.
Earlier I explained how you can insert machine-code instructions anywhere into memory and execute them, but in CP/M, programs that are stored in disk files must be designed to be loaded into memory beginning at a specific memory location, which is 0100h.
CP/M comes with several useful programs, including PIP, the Peripheral Interchange Program, which allows you to copy files. The ED program is a text editor that allows you to create and modify text files. Programs such as PIP and ED, which are small and designed to do simple chores, are often known as utility programs. If you were running a CP/M system, you would probably purchase larger application programs, such as word processors or computer spreadsheets. Or you might write such programs yourself. All these programs are also stored in files of the COM type.
So far we’ve seen how CP/M ( like most operating systems) provides com· mands and utilities that let you perform rudimentary housekeeping regarding files. We’ve also seen how CP/M loads program files into memory and executes them. An operating system also has a third major function. ·
A program running under CP/M often needs to write some output to the video display. Or the program might need to read something that you've typed on the keyboard. Or the program might need to read a file from the disk or to write a file to the disk. But in most cases, the CP/M program does not write its output directly into video display memory. Likewise, the CP/M program does not access the hardware of the keyboard to see what you've typed. And the CP/M program definitely does not access the disk drive hardware to read and write disk sectors.
Instead, a program running under CP/M makes use of a collection of subroutines built into CP/M for performing these common chores. These subroutines have been specifically designed so that programs can get easy access to all the hardware of the computer, including the video display, keyboard, and disk, without the programmer having to worry about how these peripherals are actually connected. Most important, a program running under CP/M doesn't need to know about disk sectors and tracks. That's CP/M's job. It can instead store whole files on the disk and later read them.
Providing a program with easy access to the hardware of the computer is the third major function of an operating system. The access that the oper­ ating system provides is called the application programming interface, or API.
A program running under CP/M uses the API by setting register C to a particular value (called the function value) and executing the instruction
CALL 5

For example, a program obtains the ASCII code of a key typed on the keyboard by executing
MVI  C,01h
CALL 5
On return, accumulator A contains the ASCII code of the key that was pressed. Similarly,
MVI  C,02h
CALL 5
writes the ASCII character in accumulator A to the video display at the cursor position and then increments the cursor.
If a program needs to create a file, it sets register pair DE to an area of memory that basically contains the name of the file. Then it executes the code:
MVI  C,16h
CALL 5
In this case, the CALL 5 instruction causes CP/M to create an empty file on the disk. The program can then use other functions to write to the file and eventually close the file, which means it has finished using the file for now. The same program or another program can later open the file and read its contents.
What does CALL 5 actually do? The memory location at 0005h is set up by CP/M to contain a JMP (jump) instruction, which jumps to a location in the Basic Disk Operating System (BDOS) of CP/M. This area contains a bunch of subroutines that execute each of the CP/M functions. The BDOS, as its name implies, is primarily responsible for maintaining the file system on the disk. Frequently, the BDOS has to make use of subroutines in the Basic Input/Output System (BIOS) of CP/M, which is the area that actually accesses the hardware of the keyboard, the video display, and the disk drives. In fact, the BIOS is the only section of CP/M that needs to know about the hardware of the computer. The CCP does everything it needs to do using BDOS functions, and so do the utilities that come with CP/M.
The API is a device-independent interface to the hardware of the computer. What this means is that a program written for CP/M doesn't need to know the actual mechanics of how the keyboard works on a particular machine, or how the video display works, or how to read and write disk sectors. It simply uses the CP/M functions to perform tasks that involve the keyboard, display, and disk. The bonus is that a CP/M program can run on many different computers that might use very different hardware to access these peripherals. (All CP/M programs must have an Intel 8080 microprocessor, however, or a processor that executes 8080 instructions, such as the Intel 8085 or the Zilog Z-80.) Just as long as the computer is running CP/M, the program uses the CP/M functions to indirectly access this hardware. Without standard APIs, programs would have to be specifically tailored to run on different types of computers.
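As a rough conceptual model (my sketch, not the book's, and not how CP/M is actually implemented), you can think of CALL 5 as indexing a table of subroutines by the function value in register C:

    # A minimal sketch (not from the book): modeling the CP/M CALL 5
    # dispatch as a table of subroutines indexed by the value in register C.
    def console_input():            # function 01h: read a key, return ASCII
        return ord(input()[0])      # input() is a stand-in for the keyboard

    def console_output(a):          # function 02h: write character in A
        print(chr(a), end="")

    BDOS = {0x01: console_input, 0x02: console_output}

    def call_5(c, a=None):          # register C picks the subroutine
        return BDOS[c]() if a is None else BDOS[c](a)

    call_5(0x02, 0x48)              # prints: H

The program never touches the hardware directly; only the subroutines behind the table do, which is the whole point of the API.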

CP/M was once a very popular operating system for the 8080 and remains historically important. CP/M was the major influence behind a 16-bit operating system named QDOS (Quick and Dirty Operating System) written by Tim Paterson of Seattle Computer Products for Intel's 16-bit 8086 and 8088 chips. QDOS was eventually renamed 86-DOS and licensed by Microsoft Corporation. Under the name MS-DOS (Microsoft Disk Operating System, pronounced em ess dahs, like the German article das), the operating system was licensed to IBM for the first IBM Personal Computer, introduced in 1981. Although a 16-bit version of CP/M (called CP/M-86) was also available for the IBM PC, MS-DOS quickly became the standard. MS-DOS (called PC-DOS on IBM's computers) was also licensed to other manufacturers who created computers compatible with the IBM PC.
MS-DOS didn’t retain CP/M’s file system. The file system in MS-DOS instead used a scheme called the File Allocation Table, or FAT, which had been originally invented at Microsoft in 1 977. The disk space is divided into clusters, which-depending on the size of the disk-
