Reverse-engineering the surprisingly advanced ALU of the 8008 microprocessor

A computer's arithmetic-logic unit (ALU) is the heart of the processor, performing arithmetic and logic operations on data. If you've studied digital logic, you've probably learned how to combine simple binary adder circuits to build an ALU. However, the 8008's ALU uses clever logic circuits that can perform multiple operations efficiently. And unlike most 1970s microprocessors, the 8008 uses a complex carry-lookahead circuit to increase its performance.

The 8008 was Intel's first 8-bit microprocessor, introduced 45 years ago.1 While primitive by today's standards, the 8008 is historically important because it essentially started the microprocessor revolution and is the ancestor of the x86 processor family that you are probably using right now.2 I recently took some die photos of the 8008, which I described earlier. In this article, I reverse-engineer the 8008's ALU circuits from these die photos and explain how the ALU functions.

Inside the 8008 chip

The image below shows the 8008's tiny silicon die, highly magnified. Around the outside of the die, you can see the 18 wires connecting the die to the chip's external pins. The rest of the die contains the chip's circuitry, built from about 3500 tiny transistors (yellow) connected by a metal wiring layer (white).

Die photo of the 8008 microprocessor, showing important functional blocks.

Many parts of the chip work together to perform an arithmetic operation. First, two values are copied from the registers (on the right side of the chip) to the ALU's temporary registers (left side of the chip) via the 8-bit data bus. The ALU computes the result, which is stored back into the accumulator register via the data bus. (Note that the data bus splits and goes around both sides of the ALU to simplify routing.) The carry lookahead circuit generates the carry bits for the sum in parallel for higher performance.3 This is all controlled by the instruction decode logic in the center of the chip that examines each machine instruction and generates signals that control the ALU (and other parts of the chip).

The Arithmetic-Logic Unit

The 8008's ALU implements four functions: Sum, AND, XOR and OR. The Sum operation adds two 8-bit numbers. The remaining three operations are standard Boolean logic operations. The AND operation sets an output bit if the bit is set in the first AND the second number. OR checks if a bit is set in the first OR the second number (or both). XOR (exclusive-or) checks if a bit is set in the first OR the second number (but not both).
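To make these operations concrete, here is a minimal behavioral sketch in Python. It models the four functions described above, not the chip's actual gates; the function name and interface are my own.

    # Behavioral sketch of the 8008 ALU's four basic operations on 8-bit values.
    def alu(op, a, b, carry_in=0):
        if op == "SUM":
            result = a + b + carry_in
        elif op == "AND":
            result = a & b
        elif op == "OR":
            result = a | b
        else:  # "XOR"
            result = a ^ b
        carry_out = result >> 8  # only Sum can overflow into bit 8
        return result & 0xFF, carry_out

    print(alu("SUM", 0xC3, 0x5A))  # (0x1D, 1): 0xC3 + 0x5A = 0x11D
    print(alu("AND", 0xC3, 0x5A))  # (0x42, 0)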

The concept of carries during addition is a key part of the ALU. Binary addition in a processor is similar to grade-school long addition, except with binary numbers instead of decimal. Starting at the right, the bits in each column are added, possibly producing a carry into the next column. Thus, in each column, the ALU adds two bits as well as a carry bit.

In most early microprocessors, addition of each column needs to wait until the column to the right has been added and the carry is available. The carry "ripples" through the bits, right to left, slowing the addition. The 8008, however, uses a fast carry-lookahead circuit3 to generate the carries for all 8 columns in parallel before the addition happens. Then all the columns can be added in parallel without waiting for the carry to "ripple" through the sum. This carry-lookahead circuit is an unusual feature to see in an early microprocessor due to its complexity.
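The principle is easy to express in code. The sketch below illustrates carry lookahead in general, not the 8008's dynamic-logic implementation: every carry is computed directly from per-bit generate and propagate terms rather than from the neighboring carry. Note how the carry into each higher bit needs a longer chain of terms, which is one way to see why the circuit on the die is triangular.

    # Carry lookahead: compute the carry into every bit position directly
    # from generate (g) and propagate (p) terms, with no rippling.
    def lookahead_carries(a, b, carry_in=0):
        g = [(a >> i) & (b >> i) & 1 for i in range(8)]    # both bits set
        p = [((a >> i) | (b >> i)) & 1 for i in range(8)]  # either bit set
        carries = [carry_in]
        for i in range(8):
            # c[i+1] = g[i] OR p[i]g[i-1] OR p[i]p[i-1]g[i-2] OR ... OR p[i]..p[0]c0
            c, chain = g[i], p[i]
            for j in range(i - 1, -1, -1):
                c |= chain & g[j]
                chain &= p[j]
            carries.append(c | (chain & carry_in))
        return carries  # carries[i] is the carry into bit i; carries[8] is carry-out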

Since the 8008 is an 8-bit processor, the ALU operates on two eight-bit arguments. Most 8-bit processors (including the 8008) use a "bit-slice" construction for the ALU, with a one-bit ALU slice repeated eight times. Each one-bit ALU slice takes two input bits and the carry-in bit, and produces the output bit. In most 8-bit processors, the bit-slice ALU is arranged by stacking 8 rectangular ALU slices to form a compact, regular block. However, the 8008 has its eight ALU slices arranged in an irregular fashion—some blocks are even sideways—as shown in the diagram below. The motivation for this is that the carry lookahead circuit takes up a triangular space on the chip. To fit the remaining space better, the 8008's ALU is arranged into its unusual triangular layout.

Arrangement of the eight ALU slices on the 8008 microprocessor die. Unlike most processors, the 8008's ALU slices are arranged in a haphazard triangular arrangement. This fits better with the triangular carry-lookahead circuit above the ALU.

Zooming in on the die photo, we can look at one of the ALU slices and see how the circuitry is constructed. The chip is built from three layers (to simplify slightly). The topmost layer is the metal wiring. It is the most visible feature, and looks metallic (not surprisingly). In the detail below, you can see the horizontal and vertical metal traces. The polysilicon layer is underneath the metal layer and appears yellow/orange under the microscope. Polysilicon can act as wiring, but more importantly it forms the gates of the transistors, switching them on and off. The bottom layer is the grayish silicon die itself, but it is hard to see under the other layers.

Die photo of the 8008 processor, zoomed in on the circuit for one bit of the ALU.

In the diagram above, the carry c and the complemented a and b inputs enter through the metal wires at the top. The ALU output is at the bottom. The control signals are horizontal metal lines. The circuit is powered by the Vcc (+5 volts) and Vdd (-9 volts) metal lines. The brighter yellow polysilicon regions are transistors. Each gate in the circuit requires a "load resistor" connected to Vdd to pull its output low; for improved performance, these are implemented with transistors rather than resistors.

Removing the metal layer with acid makes the silicon and polysilicon layers more visible, as shown below.6 The chip is formed on a silicon wafer, with regions "doped" with impurities to create semiconducting silicon. You can see dark lines along the border between doped silicon and undoped silicon. A transistor is formed where a yellowish polysilicon wire crosses the doped silicon. The transistor forms a switch between the two silicon sides, controlled by the polysilicon gate. Each ALU slice contains 20 transistors; the diagram below points out two of them.5

With the metal layer removed from the 8008 processor die, the underlying silicon is visible. The photo shows bit 1 of the 8008's ALU.

Simulating one slice of the ALU

By examining the die photos carefully, you can map out the ALU slice's 20 transistors and their connections. From this, you can reverse-engineer the gates that make up the circuit. I explained in my previous article how PMOS gates are structured, so I won't go into the details here. The result is the schematic below, showing one bit of the ALU. Each ALU slice takes two inputs (a and b) and the input carry c, and outputs one result bit. There are three mode lines (m1, m2 and m3) that select one of the four ALU operations.7

The schematic below is interactive. First, select an operation and the table will update with results for the eight different inputs. Next, click a row in the table, and the schematic will update, showing how the ALU computes that row. (Note that the a and b inputs to the ALU are inverted, indicated by an overbar.)

While this ALU slice looks like it is made of many gates, physically it is only three gates: two large, multilevel AND-OR-NAND gates and one NAND gate. The AND-OR-NAND logic is implemented on the chip as a single complex gate, rather than by combining simpler gates, since a single large gate provides better performance with less circuitry than multiple small gates. One feature of MOS logic is that it's just as easy to form an AND-OR-NAND gate (for instance) as a plain NAND gate.
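For reference, here is a behavioral model of one slice in Python. It reproduces the slice's input/output behavior using the mode-line encodings from the table in footnote 7; it does not model the actual AND-OR-NAND gate network or the inverted inputs.

    # One ALU slice, modeled behaviorally. Mode-line values are from note 7.
    MODES = {(1, 1, 1): "SUM", (0, 1, 0): "AND", (1, 0, 0): "OR", (1, 1, 0): "XOR"}

    def alu_slice(m1, m2, m3, a, b, c):
        op = MODES[(m1, m2, m3)]
        if op == "SUM":
            return (a + b + c) & 1  # low-order bit of the three-bit sum
        if op == "AND":
            return a & b
        if op == "OR":
            return a | b
        return a ^ b                # XOR: a sum with the carry-in forced to 0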

Understanding the ALU logic

The 8008's ALU circuit above looks like a mysterious collection of gates, but eventually I figured out the structure behind it. The starting point is a full adder that handles the Sum operation. (A full adder adds three input bits (a, b and c) and outputs the (low-order) sum bit and a carry bit.) The full adder is then heavily modified to support the logic operations, yielding the ALU from the previous section. The logic operations are implemented by using the mode lines to block parts of the circuit, yielding XOR, AND or OR, rather than the more complex Sum.

The diagram below strips down the 8008's ALU circuit to reveal the full adder "hidden" inside. The gate in red generates the carry-out from the three inverted inputs, using relatively straightforward logic. (Since the 8008 uses carry-lookahead, this carry-out signal isn't passed to the next ALU slice, but just used to generate the ALU output.) If you examine the possible sum cases, you will see that the sum bit is almost always just the carry-out inverted, except for the 0+0+0 and 1+1+1 cases. Thus, the sum bit can be generated by inverting the carry-out and handling the two exceptional cases.8 The two gates indicated below handle the exceptions by forcing the sum output to the correct value.

Simplified 8008 ALU slice, showing the full adder circuit.
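This relationship between the sum bit and the carry-out is easy to verify exhaustively; a quick check in Python:

    # Verify: the sum bit equals the inverted carry-out, except for the
    # 0+0+0 and 1+1+1 cases, where it equals the carry-out.
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                total = a + b + c
                sum_bit, carry_out = total & 1, total >> 1
                if (a, b, c) in ((0, 0, 0), (1, 1, 1)):
                    assert sum_bit == carry_out       # the two exceptions
                else:
                    assert sum_bit == 1 - carry_out   # sum is inverted carry-out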

Comparing the full adder with the full ALU circuit earlier shows how the mode lines support the logic operations. Once you have a full adder, generating XOR is simply a matter of setting the carry-in to 0, which is done by the m3 control line. For the OR and AND operations, mode lines m3 and m2 respectively disable all of the circuit except the gates labeled in green.9 Thus, if you start with a full-adder and extend it to support XOR, AND and OR, the 8008's ALU circuit is a logical result.

Intel's earlier 4004 microprocessor had a simple ALU that only supported addition and subtraction, not any logic operations.10 Interestingly, the 4004's ALU circuit is almost identical to the full adder circuit shown above. So it's very likely that Intel designed the 8008 ALU by extending the 4004 ALU as described above. This would explain why the 8008's ALU generates carries internally, even though the carry lookahead circuit made this redundant.11

The 8008's ALU logic is very similar to the Z80's ALU,12 although the Z80's ALU is (surprisingly) 4 bits (details). The 8085 uses a different complex gate arrangement. The 6502 on the other hand, uses an entirely different approach: straightforward circuits for addition, AND, OR, XOR and shift-right, using pass-transistor multiplexers to select the operation.

Instruction decoding: how the ALU knows what operation to do

The 8008 executes 8-bit instructions, which move data, perform I/O, branch, call subroutines, and so forth. The instruction decoding logic examines the instruction and determines what operation to perform, generating about 30 control signals.13 Over a quarter of the instructions perform ALU operations, and the instruction set is carefully designed so three bits of the instruction specify which of the eight operations to perform.14 By examining these bits, the instruction decoder generates the ALU's mode control lines m1, m2 and m3.

Looking at AND instructions illustrates how this works. All AND instructions have the bit pattern xx100xxx (where x is either 0 or 1). For instance, the instruction to AND with memory is 10100111 and the instruction to AND with a constant is 00100100. When the instruction decode circuit matches this pattern, it pulls the m1 control line low, which causes the ALU to perform an AND operation.7 Other bit patterns generate the other ALU control signals.15

Part of the 8008's instruction decode PLA. The three indicated transistors match opcode pattern XX100XXX, indicating an AND instruction.

The diagram above shows part of the instruction decode circuit. The instruction bits (and their complements) are on yellow polysilicon wires running vertically through the circuit. Each row matches a bit pattern, with a transistor connected to each instruction bit to be matched. (The doped silicon regions forming transistors are the black outlines. Circles are connections between a transistor and the row's metal line.) For example, the three transistors marked with arrows match bit 3 low, bit 4 low, and bit 5 high, detecting the AND instruction pattern. Thus, the processor uses the grid of transistors in the instruction decoder to determine the meaning of each instruction.
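In software terms, each row's pattern match is a simple mask-and-compare; a sketch (the function name is mine):

    # An AND instruction is any opcode matching xx100xxx:
    # bit 5 set, bits 4 and 3 clear.
    def is_and_instruction(opcode):
        return (opcode & 0b00111000) == 0b00100000

    assert is_and_instruction(0b10100111)      # AND with memory
    assert is_and_instruction(0b00100100)      # AND with a constant
    assert not is_and_instruction(0b10110111)  # an OR instruction (xx110xxx)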

Loose ends: Subtraction and rotating

The ALU implements a Sum operation, so you might wonder how subtraction is implemented. By using two's complement arithmetic, the CPU can perform subtraction by flipping all the bits of the second operand and then adding it, along with an extra 1 to complete the two's complement. The ALU uses two temporary registers to hold the two operands since the ALU can't read the operands from the register file and write the result back simultaneously. One of the temporary registers has the feature that its value can be fed to the ALU directly or inverted. The subtraction instructions generate a signal causing the temporary register to provide the inverted value to the ALU, causing the ALU to perform subtraction.
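A minimal sketch of the arithmetic, assuming the extra 1 arrives on the carry-in:

    # Two's-complement subtraction as an add: a - b = a + NOT(b) + 1.
    def subtract(a, b):
        return (a + (~b & 0xFF) + 1) & 0xFF  # the +1 is the injected carry-in

    assert subtract(0x40, 0x01) == 0x3F
    assert subtract(0x00, 0x01) == 0xFF  # wraps around; real hardware sets borrow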

One important operation in most processors is rotating or shifting the bits in a value, to the left or to the right. In most of the microprocessors I've examined, shifting is performed by the ALU.16 The 8008, on the other hand, implements the rotate logic in the register access circuit, on the opposite side of the chip from the ALU. When reading a register, the bits can be shifted one position left or right by a simple circuit before going onto the data bus.
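A one-bit rotate is cheap to express in software too; a sketch in Python (this ignores the carry flag, which the 8008's rotate instructions also involve):

    # Rotate an 8-bit value one position left or right.
    def rotate_left(value):
        return ((value << 1) | (value >> 7)) & 0xFF

    def rotate_right(value):
        return ((value >> 1) | (value << 7)) & 0xFF

    assert rotate_left(0b10000001) == 0b00000011
    assert rotate_right(0b10000001) == 0b11000000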

History of the 8008

The Intel 8008 is important historically since it is the ancestor of the dominant Intel x86 architecture that you're probably using right now.2 I wrote a detailed article for the IEEE Spectrum on early microprocessor history, so I'll just give the outline of the 8008's complicated history here.

The 8008 copies the instruction set and architecture of the Datapoint 2200, a popular minicomputer introduced in 1970 as a programmable terminal.17 As was typical for minicomputers, the Datapoint 2200 contained a CPU built from individual TTL chips, filling up a circuit board. Datapoint contracted with both Intel and Texas Instruments to build a single-chip CPU that would replace this processor board while keeping the same architecture and instruction set.

The Datapoint 2200 computer. The 8008 microprocessor was built to implement the Datapoint 2200's architecture and instruction set. Photo courtesy of Austin Roche.

Texas Instruments was first to build a 2200-compatible microprocessor, creating the TMC 1795 chip. Intel got their version, the 8008, working a bit later, around the end of 1971. Datapoint rejected both processors, instead updating the Datapoint 2200 to use the 74181 TTL ALU chip. Texas Instruments couldn't find a new customer for the TMC 1795 and abandoned it. Intel, on the other hand, came up with the idea of selling the 8008 as a well-supported general-purpose processor. The 8008 led to the 8080, the 8085, 8086, and Intel's x86 line, which still retains some features of the 8008.

Conclusion

Although the 8008 was a very early microprocessor, its ALU was more advanced than you might expect. In particular, it used a complex carry-lookahead circuit for higher performance. Unfortunately, even with the carry-lookahead circuit, the 8008 was slower than the TTL-based Datapoint 2200 processor it was supposed to replace; addition took 20µs on the 8008, compared to 16µs on the original Datapoint 2200 and just 3.2µs on the upgraded Datapoint 2200. This illustrates the speed advantage that TTL had over MOS in the early 1970s. To us, a microprocessor may seem obviously better than a board of chips, but this wasn't always the case.

If you're interested in the 8008, my previous article has a detailed discussion of the architecture, more die photos and information on how to take them, and information on semiconductor history, so take a look.

I announce my latest blog posts on Twitter, so follow me at kenshirriff. I also have an RSS feed.

Notes and references

  1. The 8008 chip was publicly announced in an article in Electronics on March 13, 1972, entitled "8-bit parallel processor offered on a single chip", offering the chips for $200 each. 

  2. If you're not using an x86 processor right now, you're probably using an ARM processor. Don't feel neglected, though, since I've reverse-engineered the ARM-1 too. (Although there are many more ARM chips out there than x86, analytics show 71% of my readers are on x86.)  

  3. Using a carry-lookahead circuit avoids the delay from a standard ripple-carry adder, where the carries propagate through the sum. The 8008's carry-lookahead is based on the Manchester carry chain, but with a separate carry chain for each carry, yielding the triangular structure you see on the die. For performance, the carry chain is implemented with dynamic logic, depending on wire capacitance, rather than with standard Boolean gates. The 74181 ALU chip, in comparison, uses a different carry-lookahead scheme implemented with standard logic. I plan to write more about the 8008's carry lookahead later.

  4. The 8008 implements eight different arithmetic/logic functions: Add, Add with carry, Subtract, Subtract with borrow, AND, XOR, OR, and Compare.14 These are implemented in terms of the ALU's four basic operations. Subtraction is performed by inverting the second argument. The operations without carry/borrow clear the carry-in bit. Compare is simply a subtraction that doesn't store the result; it just sets the flags with the status. Thus, the four fundamental operations of the ALU are used to implement eight different arithmetic/logic operations. 

  5. Note that the 8008 uses PMOS transistors, rather than the faster NMOS transistors in later microprocessors such as the 8080, 6502 and Z80. If you're familiar with NMOS circuits, PMOS can be confusing since everything is backwards. PMOS transistors turn on if the gate is low, and typically pull the output high. Vdd in PMOS is negative, and "ground" is positive. The "pull-up resistor" in a PMOS gate pulls the output down. A PMOS NAND gate has transistors in parallel (compared to in series for an NMOS NAND gate). A PMOS NOR gate has transistors in series (compared to in parallel for an NMOS NOR gate).

  6. The metal layer of the chip is protected by a silicon dioxide passivation layer. The professional way to remove this layer is with dangerous hydrofluoric acid. Instead, I used Armour Etch glass etching cream, which is slightly safer and can be obtained at craft stores. I applied the etching cream to the die and wiped it for four minutes with a Q-tip. (Since the cream is designed for frosting glass, it only etches in spots. It must be moved around to obtain a uniform etch.) After this, I soaked the die in hydrochloric acid (pool acid from the hardware store) overnight to dissolve the metal. This was probably too long, since the edges of the polysilicon were eaten away in places.

  7. The following values are used for the three mode lines to select the ALU function:

    Operation  m1  m2  m3
    Sum         1   1   1
    And         0   1   0
    Or          1   0   0
    Xor         1   1   0

  8. A more straightforward way of generating the sum bit is by xoring the three inputs: a⊕b⊕c. Unfortunately, an XOR gate is relatively difficult to implement with Boolean logic, so designers will often try to avoid XOR. 

  9. You might wonder why the OR operation is implemented with an AND gate, and vice versa. Since the inputs and the output of the OR gate are inverted, this is equivalent to an AND gate (by De Morgan's laws), and similarly for the AND gate. 

  10. Strictly speaking, the 4004 microprocessor has an AU (arithmetic unit), not an ALU (arithmetic/logic unit), since it doesn't do logical operations. Since the 4004 was designed for a calculator, logical operations weren't required. 

  11. The 8008's full adder generates the carry-out first, and generates the sum from that. In contrast, the typical full adder circuit combines two half adders to generate the sum and carry-out separately. If the typical full adder circuit had been used in the 8008, the carry-out logic could easily be omitted. 

  12. To see the similarity between the Z80's ALU circuit and the 8008's, you need to swap AND and OR gates. (Apply De Morgan's laws since the 8008's ALU inputs are inverted.) In the Z80, the carry-out comes from the ALU rather than a carry-lookahead circuit, so the control lines are somewhat different. But the fundamental ALU circuit is otherwise the same between the 8008 and Z80, which is not surprising since Federico Faggin worked on both chips. 

  13. Instruction decoding is based on a Programmable Logic Array (PLA), an arrangement of transistors that efficiently implements logic gates. These gates match bit patterns and generate the appropriate control signals for the rest of the chip. The 8008's PLA has 16 input lines flowing vertically through it. Each row in the PLA matches a bit pattern and generates a control signal output.

    In more detail, each row output line is pulled low by a load resistor/transistor to Vdd. The transistors are connected between the row line and Vcc (+5V). The bit lines are connected to the transistor's gate. If any bit line is low (indicating a mismatch), the PMOS transistor turns on, pulling the row line high. Thus, if there is no mismatch, the control line is low, and if there is a mismatch, the control line is high. In other words, each row is a NAND gate with instruction bit inputs.

    The input lines are ordered as follows: bit 3, bit 3 complement, 4, 4', 5, 5', 0, 0', 1, 1', 2, 2', 6, 6', 7, 7'. This order may seem strange, but there's a reason for it. In the 8008, the ALU operation is selected by bits 3, 4 and 5 of the instruction. By putting those bits on the left side of the PLA, they are closer to the ALU. Some rows of the PLA actually decode two instructions: bits 3, 4 and 5 are decoded on the left side, generating an ALU control signal, while the remaining bits are decoded on the right side generating a different control signal. This increases the PLA density and saves space on the chip. 

  14. The 8008's instruction set is designed around octal. Among other things, there are 8 ALU operations, 8 registers and 8 conditionals. In octal, the ALU instructions have the value 2ar, where a is the ALU operation to perform (0 through 7) and r is the register to use (0 through 7, where 7 indicates memory). The octal structure originates with the Datapoint 2200, which decoded instructions with TTL 7442 BCD chips that decoded groups of three bits. This octal structure persisted in descendants of the 8008, including the Z80 and x86. Unfortunately, these instruction sets are almost always presented in hexadecimal, which hides the underlying structure. 
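    The octal structure makes ALU opcodes easy to pick apart in code; a sketch (using 8080-style mnemonics for readability; the 8008's original mnemonics differ):

        # An ALU instruction is octal 2ar: a = operation, r = register (7 = memory).
        OPS = ["ADD", "ADC", "SUB", "SBB", "AND", "XOR", "OR", "CMP"]
        REGS = ["A", "B", "C", "D", "E", "H", "L", "M"]  # M = memory

        def describe_alu_opcode(opcode):
            assert opcode >> 6 == 0b10  # leading octal digit 2
            return OPS[(opcode >> 3) & 7], REGS[opcode & 7]

        print(describe_alu_opcode(0o247))  # ('AND', 'M'): 0o247 = 10100111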

  15. The instruction decoder generates all the signals required by the ALU. As described above, AND matches xx100xxx, pulling the m1 control signal low. An OR opcode has the bit pattern xx110xxx, which causes the instruction decode circuit to pull the m2 control line low. An XOR instruction has the bit pattern xx101xxx. The m3 control line is pulled low for patterns xx10xxxx or xx1x0xxx, matching AND, OR or XOR instructions. The subtract (with and without borrow) instructions match xx01xxxx, generating a signal that inverts the second argument. 

  16. Different processors use a variety of techniques for shifting. In the Z80, shifting is performed as data enters the ALU. The 6502 performs a left shift with "A plus A", and has a path inside the ALU for right shifts; the 8085 is similar. The ARM-1 has a barrel shifter next to the ALU that performs arbitrary shifts. 

  17. The instruction set of the Datapoint 2200 is described in the Reference Manual. The 8008 has a couple of minor changes. For instance, the 8008 has increment and decrement instructions that are not present in the 2200.

Die photos and analysis of the revolutionary 8008 microprocessor, 45 years old

Intel's groundbreaking 8008 microprocessor was first produced 45 years ago.1 This chip, Intel's first 8-bit microprocessor, is the ancestor of the x86 processor family that you may be using right now. I couldn't find good die photos of the 8008, so I opened one up and took some detailed photographs. These new die photos are in this article, along with a discussion of the 8008's internal design.

Die photograph of the 8008 microprocessor

The photo above shows the tiny silicon die inside the 8008 package. (Click the image for a higher resolution photo.) You can barely see the wires and transistors that make up the chip. The squares around the outside are the 18 pads that are connected to the external pins by tiny bond wires. You can see the text "8008" on the right edge of the chip and "© Intel 1971" on the lower edge. The initials HF appear on the top right for Hal Feeney, who did the chip's logic design and physical layout. (Other key designers of the 8008 were Ted Hoff, Stan Mazor, and Federico Faggin.)

Inside the chip

The diagram below highlights some of the major functional blocks of the chip. On the left is the 8-bit Arithmetic/Logic Unit (ALU), which performs the actual data computations.3 The ALU uses two temporary registers to hold its input values. These registers take up significant area on the chip, not because they are complex, but because they need large transistors to drive signals through the ALU circuitry.

Die of the 8008 microprocessor showing major components.

Below the registers is the carry-lookahead circuitry. For addition and subtraction, this circuit computes all eight carry values in parallel to improve performance.2 Since the low-order carry depends on just the low-order bits, while the higher-order carries depend on multiple bits, the circuit block has a triangular shape.

The triangular layout of the ALU is unusual. Most processors stack the circuitry for each bit into a regular rectangle (a bit-slice layout). The 8008, however, has eight blocks (one for each bit) arranged haphazardly to fit around the space left by the triangular carry generator. The ALU supports eight simple operations.3

In the center of the chip is the instruction register and the instruction decoding logic that determines the meaning of each 8-bit machine instruction. Decoding is done with a Programmable Logic Array (PLA), an arrangement of gates that matches bit patterns and generates the appropriate control signals for the rest of the chip. On the right are the storage blocks. The 8008's seven registers are in the upper right. In the lower right is the address stack, which consists of eight 14-bit address words. Unlike most processors, the 8008's call stack is stored on the chip instead of in memory. The program counter is just one of these addresses, making subroutine calls and returns very simple. The 8008 uses dynamic memory for this storage.

The physical structure of the chip is very close to the block diagram in the 8008 User's Manual (below), with blocks located on the chip in nearly the same positions as in the block diagram.

Block diagram of the 8008 microprocessor, from the User's Manual.

The structure of the chip

What does the die photo show? For our purposes, the chip can be thought of as three layers. The diagram below shows a closeup of the chip, pointing out these layers. The topmost layer is the metal wiring. It is the most visible feature, and looks metallic (not surprisingly). In the detail below, these wires are mostly horizontal. The polysilicon layer is below the metal and appears orange under the microscope.

A closeup of the 8008 die, showing the metal layer, the polysilicon, and the doped silicon.

The foundation of the chip is the silicon wafer, which appears purplish-gray in the photo. Pure silicon is effectively an insulator. Regions of it are "doped" with impurities to create semiconducting silicon. Being on the bottom, the silicon layer is difficult to distinguish, but you can see black lines along the border between doped silicon and undoped silicon. A few vertical silicon "wires" are visible in the photo.4

Transistors are the key component of the chip, and a transistor is formed where a polysilicon wire crosses doped silicon. In the photo, the polysilicon appears as a brighter orange where it forms a transistor.

Why an 18-pin chip?

One inconvenient feature of the 8008 is that it has only 18 pins, which makes the chip slower and much more difficult to use. The 8008 uses 14 address bits and 8 data bits, so with 18 pins there aren't enough pins for each signal. Instead, the chip has 8 data pins that are reused in three cycles to transmit the low address bits, high address bits, and data bits. A computer using the 8008 requires many support chips to interact with this inconvenient bus architecture.5

There was no good reason to force the chip into 18 pins. Packages with 40 or 48 pins were common with other manufacturers, but 16 pins was "a religion at Intel".6 Only with great reluctance did they move to 18 pins. By the time the 8080 processor came out a few years later, Intel had come to terms with 40-pin chips. The 8080 was much more popular, in part because it had a simpler bus design permitted by the 40-pin package.

Power and data paths in the chip

The data bus provides data flow through the chip. The diagram below shows the 8-bit data bus of the 8008 with rainbow colors for the 8 data lines. The data bus connects to the 8 data pins along the outside of the upper half of the chip. The bus runs between the ALU on the left, the instruction register (upper center), and the registers and stack on the right. The bus is split on the left with half along each side of the ALU.

Die photo of the 8008 microprocessor. The power bus is shown in red and blue. The data bus is shown with 8 rainbow colors.

The red and blue lines show power routing. Power routing is an under-appreciated aspect of microprocessors. Power is routed in the metal layer due to its low resistance. But since there is only one metal layer in early microprocessors, power distribution must be carefully planned so the paths don't cross.7 The diagram above shows Vcc lines in blue and Vdd lines in red. Power is supplied through the Vcc pin on the left and the Vdd pin on the right, then branches out into thin, interlocking wires that supply all parts of the chip.

The register file

To show what the chip looks like in detail, I've zoomed in on the 8008's register file in the photo below. The register file consists of an 8 by 7 grid of dynamic RAM (DRAM) storage cells, each using three transistors to hold one bit.8 (You can see the transistors as the small rectangles where the orange polysilicon takes on a slightly more vivid color.) Each row is one of the 8008's seven 8-bit registers (A, B, C, D, E, H, L). On the left, you can see seven pairs of horizontal wires: the read select and write select lines for each register. At the top, you can see eight vertical wires to read or write the contents of each bit, along with 5 thicker wires to supply Vcc. Using DRAM for registers (rather than the more common static latches) is an interesting choice. Since Intel was primarily a memory company at the time, I expect they chose DRAM due to their expertise in the area.

The register file in the 8008. The chip has seven 8-bit registers: A, B, C, D, E, H, L.

How PMOS works

The 8008 uses PMOS transistors. To simplify slightly, you can think of a PMOS transistor as a switch between two silicon wires, controlled by a gate input (of polysilicon). The switch closes when its gate input is low, allowing it to pull its output high. If you're familiar with the NMOS transistors used in microprocessors like the 6502, PMOS may be a bit confusing because everything is backwards.

A simple PMOS NAND gate can be constructed as shown below. When both inputs are high, the transistors are off and the resistor pulls the output low. When either input is low, the corresponding transistor conducts, connecting the output to +5V. Thus, the circuit implements a NAND gate. For compatibility with 5-volt TTL circuits, the PMOS gate (and thus the 8008) is powered with unusual voltages: -9V and +5V.

A NAND gate implemented with PMOS logic.
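A truth-table model of this gate makes the behavior explicit; a small sketch:

    # PMOS NAND: a transistor conducts when its gate input is low,
    # pulling the output high; otherwise the load pulls the output low.
    def pmos_nand(a, b):
        return 1 if (a == 0 or b == 0) else 0

    for a in (0, 1):
        for b in (0, 1):
            assert pmos_nand(a, b) == (0 if a and b else 1)  # NAND truth table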

For technical reasons, the resistor is actually implemented with a transistor. The diagram below shows how the transistor is wired to act as a pull-down resistor. The detail on the right shows how this circuit appears on the chip. The -9V metal wire is at the top, the transistor is in the middle, and the output is the silicon wire at the bottom.

In PMOS, a pull-down resistor (left) is implemented with a transistor (center). The photo on the right shows an actual pull-down in the 8008 microprocessor.

History of the 8008

The 8008's complicated story starts with the Datapoint 2200, a popular computer introduced in 1970 as a programmable terminal. (Some people consider the Datapoint 2200 to be the first personal computer.) Rather than using a microprocessor, the Datapoint 2200 contained a board-sized CPU built from individual TTL chips. (This was the standard way to build a CPU in the minicomputer era.) Datapoint and Intel decided that it would be possible to replace this board with a single MOS chip, and Intel started the 8008 project to build this chip. A bit later, Texas Instruments also agreed to build a single-chip processor for Datapoint. Both chips were designed to be compatible with the Datapoint 2200's 8-bit instruction set and architecture.

The 8008 processor was first described publicly in "Electronic Design", Oct 25, 1970. Although Intel claimed the chip would be delivered in January 1971, actual delivery was more than a year later in April, 1972.

Around March 1971, Texas Instruments completed their processor chip, calling it the TMC 1795. After delaying the project, Intel finished the 8008 chip later, around the end of 1971. For a variety of reasons, Datapoint rejected both microprocessors and built a faster CPU based on newer TTL chips including the 74181 ALU chip. TI tried unsuccessfully to market the TMC 1795 processor to companies such as Ford, but ended up abandoning the processor, focusing on highly-profitable calculator chips instead. Intel, on the other hand, marketed the 8008 as a general-purpose microprocessor, which eventually led to the x86 architecture you're probably using right now. Although TI was first with the 8-bit processor, it was Intel who made their chip a success, creating the microprocessor industry.

A family tree of the 8008 and some related processors. Black arrows indicate backwards compatibility. Light arrows indicate significant architecture changes.

The diagram above summarizes the "family tree" of the 8008 and some related processors.10 The Datapoint 2200's architecture was used in the TMC 1795, the Intel 8008, and the next version of the Datapoint 2200.11 Thus, four entirely different processors were built using the Datapoint 2200's instruction set and architecture. The Intel 8080 processor was a much-improved version of the 8008. It significantly extended the 8008's instruction set and reordered the machine code instructions for efficiency. The 8080 was used in groundbreaking early microcomputers such as the Altair and the Imsai. After working on the 4004 and 8080, designers Federico Faggin and Masatoshi Shima left Intel to build the Zilog Z-80 microprocessor, which improved on the 8080 and became very popular.

The jump to the 16-bit 8086 processor was much less evolutionary. Most 8080 assembly code could be converted to run on the 8086, but not trivially, as the instruction set and architecture were radically changed. Nonetheless, some characteristics of the Datapoint 2200 still exist in today's x86 processors. For instance, the Datapoint 2200 had a serial processor, processing bytes one bit at a time. Since the lowest bit needs to be processed first, the Datapoint 2200 was little-endian. For compatibility, the 8008 was little-endian, and this is still the case in Intel's processors. Another feature of the Datapoint 2200 was the parity flag, since parity calculation was important for a terminal's communication. The parity flag has carried through to the x86 architecture.
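Little-endian order is easy to demonstrate; a quick illustration in Python, where the "<" format means little-endian:

    # The low-order byte is stored first on a little-endian machine.
    import struct

    assert struct.pack("<H", 0x1234) == b"\x34\x12"  # little-endian, as on x86
    assert struct.pack(">H", 0x1234) == b"\x12\x34"  # big-endian, for contrast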

The 8008 is architecturally unrelated to Intel's 4-bit 4004 processor12. The 8008 is not an 8-bit version of the 4-bit 4004 in any way. The similar names are purely a marketing invention; during its design phase the 8008 had the unexciting name "1201".

If you want more early microprocessor history, I wrote a detailed article for the IEEE Spectrum. I also wrote a post about TI's TMC 1795.

How the 8008 fits into the history of semiconductor technology

The 4004 and 8008 both used silicon-gate enhancement-mode PMOS, a semiconductor technology that was only used briefly. This puts the chips at an interesting point in chip fabrication technology.

The 8008 (and modern processors) uses MOS transistors. These transistors had a long path to acceptance, being slower and less reliable than the bipolar transistors used in most computers of the 1960s. By the late 1960s, MOS integrated circuits were becoming more common; the standard technology was PMOS transistors with metal gates. The metal gates were part of a metal layer that was also used to connect components of the chip. Chips essentially had two layers of functionality: the silicon itself, and the metal wiring on top. This technology was used in many Texas Instruments calculator chips, as well as the TMC 1795 chip (the chip that had the same instruction set as the 8008).

A key innovation that made the 8008 practical was the self-aligned gate—a transistor using a gate of polysilicon rather than metal. Although this technology was invented by Fairchild and Bell Labs, it was Intel that pushed the technology ahead. Polysilicon gate transistors had much better performance than metal gate (for complex semiconductor reasons). In addition, adding a polysilicon layer made routing of signals in the chip much easier, making the chips denser. The diagram below shows the benefit of self-aligned gates: the metal-gate TMC 1795 is bigger than the 4004 and 8008 chips combined.

Intel's 4004 and 8008 processors are much denser than Texas Instruments' TMC 1795 chip, largely due to their use of self-aligned gates. TMC 1795 die photo courtesy of Computer History Museum.

Shortly afterwards, semiconductor technology improved again with the use of NMOS transistors instead of PMOS transistors. Although PMOS transistors were easier to manufacture initially, NMOS transistors are faster, so once NMOS could be fabricated reliably, they were a clear win. NMOS led to more powerful chips such as the Intel 8080 and the Motorola 6800 (both 1974). Another technology improvement of this time was ion-implantation to change the characteristics of transistors. This allowed the creation of "depletion-mode" transistors for use as pull-up resistors. These transistors improved chip performance and reduced power consumption. They also allowed the creation of chips that ran on standard five-volt supplies.13 The combination of NMOS transistors and depletion-mode pull-ups was used for most of the microprocessors of the late 1970s and early 1980s, such as the 6502 (1975), Z-80 (1976), 68000 (1979), and Intel chips from the 8085 (1976) to the 80286 (1982).

In the mid 1980s, CMOS took over, using NMOS and PMOS transistors together to dramatically reduce power consumption, with chips such as the 80386 (1986), 68020 (1984) and ARM1 (1985). Now almost all chips are CMOS.14

As you can see, the 1970s were a time of large changes in semiconductor chip technology. The 4004 and 8008 were created when the technological capability intersected with the right market.

How to take die photos

In this section, I explain how I got the photos of the 8008 die. The first step is to open the chip package to expose the die. Most chips come in epoxy packages, which can be dissolved with dangerous acids.

The 8008 microprocessor in a ceramic package

Since I would rather avoid boiling nitric acid, I took a simpler approach. The 8008 is also available in a ceramic package (above), which I got on eBay. Tapping the chip along the seam with a chisel pops the two ceramic layers apart. The photo below shows the lower half of the ceramic package, with the die exposed. Most of the metal pins have been removed, but their positions in the package are visible. To the right of the die is a small square; this connects ground (Vcc) to the substrate. A couple of the tiny bond wires are still visible, connected to the die.

Inside the package of the 8008 microprocessor, the silicon die is visible.

Once the die is exposed, a microscope can be used to take photographs. A standard microscope shines the light from below, which doesn't work well for die photographs. Instead, I used a metallurgical microscope, which shines the light from above to illuminate the chip.

I took 48 photographs through the microscope and then used the Hugin stitching software to combine them into one high-resolution image (details). Finally, I adjusted the image contrast to make the chip's structures more visible. The original image (which is approximately what you see through the microscope) is below for comparison.

Die photograph of the 8008 microprocessor

Conclusion

I took detailed die photos of the 8008 that reveal the circuitry it used. While the 8008 wasn't the first microprocessor or even the first 8-bit microprocessor, it was truly revolutionary, triggering the microprocessor revolution and leading to the x86 architecture that dominates personal computers today. In future posts, I plan to explain the 8008's circuits in detail to provide a glimpse into the roots of today's computers.

I announce my latest blog posts on Twitter, so follow me at kenshirriff. Or you can use the RSS feed.

Notes and references

  1. According to the oral history of the 8008, photos of the 8008 were obtained in October / November 1971 (page 6). Chip designer Federico Faggin mentions that toward the end of 1971, "everything was working except for a few errors." Faggin then debugged a problem with the dynamic memory losing data, making it ready for production (page 9). 

  2. Using the carry look ahead circuit avoids the delay from a standard ripple-carry adder, where the carries propagate through the sum. 

  3. The 8008's ALU supports eight operations: add, subtract, add with carry, subtract with carry, AND, OR, XOR, and compare. It also implements left and right shift and rotate operations. The 8008 also has increment and decrement instructions, extending the Datapoint 2200's instruction set.

  4. Because silicon has higher resistance than polysilicon, most chips use the polysilicon and metal layers for wiring, not the silicon layer. The 4004 and 8008 chips are unusual in that they prefer to use the silicon layer for wiring rather than polysilicon. I expect this was due to the recent introduction of polysilicon: before polysilicon, routing needed to be done in the silicon layer and perhaps the chip designers were sticking with the older layout techniques. 

  5. The 8008 required 20 support chips according to chip architect Federico Faggin. In contrast, the 4004 and earlier MOS computers such as the Four Phase and CADC were designed with a small number of MOS chips that worked together without extra "glue chips". In this sense, the 8008 was a step backwards architecturally, saying "here's the CPU, you figure out how to make a computer out of it." 

  6. For details on Intel's insistence on 16 pins, see Oral History of Federico Faggin, page 55-56. It was only when the 1103 memory chip required 18 pins that Intel reluctantly moved beyond 16 pins. And that was treated by Intel like "the sky had dropped from heaven," resulting in "so many long faces". 

  7. If two metal lines need to cross, one of them can be routed under the other by using the polysilicon layer. To be low resistance, this cross-under must be relatively wide, so cross-unders are avoided if possible. 

  8. The 8008 registers use the "3T1C" cell: three transistors and one capacitor (details). The circuit doesn't physically contain a separate capacitor, but uses the gate capacitance of the transistor. One unusual feature of the 8008 cell is it uses one wire for both reading and writing the bit, while the typical 3T cell has separate wires for reading and writing. The 4004 had separate wires, but the design changed slightly in the 8008. 

  9. Pull-up resistors in later chips such as the 6502 were implemented using depletion-mode NMOS transistors. These yielded faster, more efficient logic. They were also wired differently, with the gate connected to the output rather than the power rail.

  10. The 8008 architecture and the evolution of Intel's microprocessors are discussed in detail in Intel Microprocessors: 8008 to 8086.

  11. The second version of the Datapoint 2200 had a totally new implementation of the processor, still built from TTL chips. While the first version had a serial ALU (processing one bit at a time), the second version operated in parallel using 74181 ALU chips. As a result, the second version was much faster. 

  12. The extensive 4004 Anniversary Project has reverse-engineered the 4004 processor. The 4004 schematic is here.

  13. The Motorola 6800 microprocessor originally used enhancement-mode transistors. To operate off a single +5V supply, it had a voltage-doubler circuit on the chip. 

  14. Interestingly, in 2007 Intel started using metal gates again in order to scale transistors further (details). In a way, semiconductor technology has gone full circle, back to metal gates, although now unusual metals such as hafnium are used. 

The Texas Instruments TMX 1795: the (almost) first, forgotten microprocessor

The first 8-bit microprocessor, the TMX 1795 had the same architecture as the 8008 but was built months before the 8008. Never sold commercially, this Texas Instruments processor is now almost forgotten even though it had a huge impact on the computer industry. In this article, I present the surprising history of the TMX 1795 in detail, look at other early processors, and explain how the TMX 1795 almost became the first microprocessor. (Originally I thought the TMX 1795 was the first microprocessor, but it appears that the 4004 slightly beat it.)

The Texas Instruments TMX 1795 microprocessor. Courtesy of Computer History Museum.

The story starts with the Datapoint 2200[1], a "programmable terminal" sized to fit on a desktop. While originally sold as a terminal, the Datapoint 2200 was really a minicomputer that could be programmed in BASIC or PL/B. Some people consider the Datapoint 2200 the first personal computer as it came out years before systems such as the Apple II or even the Altair.

The Datapoint 2200 programmable terminal / computer. Photo by Ecksemmess CC BY-SA 3.0 via Wikimedia Commons.

The Datapoint 2200 had an 8-bit processor built out of dozens of TTL chips, which was the normal way of building computers at the time. The photo below shows the processor board. Keep in mind that there's no processor chip—the whole board is the processor, with a chip or two for each register, a few chips for the adder, a few chips to decode instructions, a few chips to increment the program counter, and so forth. [28] Nowadays, we think of MOS chips as high-performance and building a CPU out of TTL chips seems slow and backwards. However, in 1970, TTL logic was much faster than MOS. Even operating one bit at a time as a serial computer, the Datapoint 2200 performed considerably faster than the 8008 chip, unless it needed to wait for the slow serial memory.

The processor board from the Datapoint 2200. The 8008 was built to replace this board. Photo courtesy of zuigadrummer.

While building the Datapoint 2200, its designers were looking for ways to make the processor board smaller and generate less heat. Datapoint met with Intel in December 1969, and what happened next depends on whether you listen to Intel or Datapoint. Intel's story is that Datapoint asked if Intel could build memory chips for the processor stack that had an integrated stack pointer register. Intel engineer Stan Mazor told Datapoint that Intel could not only do that, but could put the whole 2200 processor board on a chip.[2][3] Datapoint's story is that Datapoint founder Gus Roche and designer Jack Frassanito suggested to Intel's co-founder Robert Noyce that Intel build a single-chip CPU with Datapoint's design,[4] but Noyce initially rejected the idea, thinking that a CPU chip wouldn't have a significant market.

In any case, Intel ended up agreeing to build a CPU chip for Datapoint using the architecture of the Datapoint 2200.[5] Intel developed a functional specification for the chip by June 1970 and then put the project on hold for six months. During this time, there was a mention of the future 8008 chip in Electronic Design (below)—I suspect I've found the first public mention of the 8008. You might expect there was a race to build the first microprocessor, so you may be surprised that both the 4004 and 8008 projects were put on hold for months. Meanwhile, Datapoint built a switching power supply for the 2200[6], which eliminated the heating concerns, and was planning to start producing the 2200 with the processor board of TTL chips. Thus, Datapoint wasn't particularly interested in the 8008 any more.

First description of the Intel 8008 processor in print. Electronic Design, Oct 25 1970.

A Texas Instruments salesman learned that Intel was building a processor for Datapoint and asked if Texas Instruments could build them one too. Datapoint gave TI the specifications and told them to go ahead. Texas Instruments initially proposed a three-chip design, but switched to a single-chip CPU after Datapoint pointedly asked, "Can't you build it on one chip like Intel?" Texas Instruments started building a CPU for Datapoint around April 1970 and this chip became the TMX 1795.

There's a lot of debate on just how much information about Intel's design was given to Texas Instruments. The main TI engineer on the project, Gary Boone, says they received hints that Intel was doing better, but didn't improperly receive any proprietary information. According to Intel, though, Texas Instruments received Intel's detailed design documents through Datapoint. For instance, the TI processor copied an error that was in Intel's documentation leaving the TI chip with broken interrupt handling.[7]

The TI chip was first mentioned in March 1971 in Businessweek magazine, in a short paragraph calling the chip a "milestone in LSI [Large-Scale Integration]" for jamming the CPU onto a single chip.[8] A few months later, the chip received a big media launch with an article and multi-page advertising spread in Electronics (below), complete with die photos of the TMX 1795.

Article on the TMX 1795 and two pages from the TI advertising section featuring the chip. Note the die photos of the TMX 1795. Electronics, June 7 1971.

The article, entitled "CPU chip turns terminal into stand-alone machine", described how the chip would make the Datapoint 2200 computer much more powerful. "The 212-by-224 mil chip turns the 2200 into a complete computer that doesn't have to be connected to a time-sharing system." The components of the chip are "similar to units previously available separately, but this is the first time that they've been combined monolithically", consolidated "into a single chip". The chip and 2K of memory would cost about $100. This "central processor on a chip" would make the new Datapoint 2200 "a powerful computer with features the original one couldn't offer."

That didn't happen. Datapoint tested the TMX 1795 chip and rejected it for four reasons. First, the chip and memory didn't tolerate voltage fluctuations of more than 50mV. Second, the TMX 1795 required a lot of support chips (although not as many as the 8008 would), reducing the benefit of a single-chip CPU. Third, Datapoint had solved the heat problem with a switching power supply.[6] Finally, Datapoint had just about completed the 2200 Version II, with a much faster parallel implementation of the CPU. The TMX 1795 (operating in parallel) was slightly faster than the original serial Datapoint 2200, but the 2200 Version II was much faster than the TMX 1795. (This illustrates the speed advantage of TTL chips over MOS at the time.)

Intel engineers provided another reason for the commercial failure of the TMX 1795: the chip was too big to manufacture cost-effectively. I created the diagram below to compare the TMX 1795, 4004, and 8008 at the same scale. The TMX 1795 is larger than the 4004 and 8008 combined! One reason is that Intel had silicon-gate technology, which in effect allowed three layers of circuitry instead of two. But even taking that into account, Texas Instruments didn't seem to put much effort into the layout, which Mazor calls "pretty sloppy techniques" and "throwing some blocks together".[9] While the 4004 and especially the 8008 are densely packed, the TMX 1795 chip has copious unused and wasted space.

Comparative die sizes of the TMX 1795, 4004 and 8008 microprocessors. Note that the 4004 and 8008 are nearly the same size, while the TMX 1795 is more than twice as large. The top third of the TMX 1795 is instruction decoding and control logic, the middle is the 8-bit ALU, and the bottom is storage (stack and registers). TMX 1795 die photo courtesy of Computer History Museum.

As well as rejecting the TMX 1795, Datapoint also decided not to use the 8008 and gave up their exclusive rights to the chip. Intel, of course, commercialized the 8008, announcing it in April 1972. Two years later, Intel released the 8080, a microprocessor based on the 8008 but with many improvements. (Some people claim that the 8080 incorporates improvements suggested by Datapoint, but a close examination shows that later Datapoint architectures and the 8080 went in totally different directions.) The 8080 was followed by the x86 architecture, which was designed to extend the 8080. Thus, if you're using an x86 computer now, you're using a computer based on the Datapoint 2200 architecture.[10]

Some sources dismiss the TMX 1795 as a chip that never really worked. However, the video below shows Gary Boone demonstrating the TMX 1795 in 1996. A TMX 1795 board was installed in a laptop (probably a TI LT286) for the purpose of the demo. It runs a simple text editor, a sort program, a simple budget spreadsheet, and Fibonacci numbers. The demo isn't particularly thrilling, but it shows that the TMX 1795 was a functional chip.

Considering the size of Intel and the microprocessor market, Datapoint's decision to give up exclusive rights to the 8008 seems like a huge blunder, possibly "one of the worst business decisions in history". However, it's unlikely that Datapoint would have sold 8008 chips, given that they were a computer company, not a chip company like Intel.[11] In addition, Intel had plans to produce microprocessors even without the rights to the 4004 or 8008.[12]

After rejecting the TMX 1795 (and the 8008), Datapoint continued to build processors out of TTL chips until the early 1980s. While these processors were faster and more powerful than microprocessors for a surprisingly long time, eventually Moore's law led to processors such as the 80286, which outperformed Datapoint's processors at a lower cost. Under heavy competition from PCs, Datapoint's stock crashed in 1982, followed by a hostile takeover in 1984. The company limped along before going bankrupt in 2000. Given that Datapoint designed the architecture used in the 8008, it's ironic that Datapoint was killed by x86 microprocessors, which were direct descendants of the 8008.

The TMX 1795 microprocessor installed in a circuit board. This board was used in a laptop for the 1996 demo.

Unlike Intel, who commercialized the 8008 chip, Texas Instruments abandoned the TMX 1795 after Datapoint's rejection. The chip would have disappeared without a trace, except for one thing, which had a huge impact on the computer industry.

The "Dallas Legal Firm" and "TI v. Everybody"[13]

Texas Instruments figured out early on that patent litigation and licensing fees could be very profitable. After (co-)inventing the integrated circuit and receiving patents on it, Texas Instruments engaged in bitter patent battles, earning the nickname "the Dallas legal firm" for their "unethical and unprofessional legal tactics".[13] Texas Instruments continued their legal practices with the TMX 1795, receiving multiple patents on it, issued between 1973 and 1985.[14][15]

Needless to say, Intel was not happy that Texas Instruments patented the TMX 1795, since building a single-chip processor for Datapoint was Intel's idea.[16] Intel was even unhappier that Texas Instruments had used parts of Intel's specification when designing and patenting the TMX 1795.[7][17] Intel had wanted to patent the 4004[18], but their patent attorney told them that it wasn't worth it and that the idea of putting a computer on a chip was fairly obvious. Likewise, Datapoint had considered patenting the single-chip microprocessor but was told by their patent attorney that there was nothing patentable in the idea.[3]

In order to extract substantial licensing fees, Texas Instruments sued multiple companies using their microprocessor and microcontroller patents (including the TMX 1795 patent) in a case that Gordon Bell called "TI v. Everybody".[13] Dell decided to fight back in a "bet the company" lawsuit.[14] The lawsuit dragged on for years and was about to go to trial when the case suddenly turned against Texas Instruments.

Lee Boysel of Four-Phase Systems had built a 24-bit MOS-based minicomputer in 1970, as will be discussed in more detail below. The computer had a 9-chip CPU, but in an amazing hack, Boysel took one of the three 8-bit arithmetic/logic chips and was able to build a working microcomputer from it. Since this chip predated the TMX 1795 by a year, it torpedoed Texas Instruments' case, and the case never went to trial. As a result, many people consider the Four-Phase AL1 to be the first microprocessor. However, as I'll explain below, the demo wasn't quite what most people think.

The Four-Phase AL1 running as a single-chip processor in a patent litigation demo. From Boysel's EECS presentation.

Is the TMX 1795 really the first microprocessor?

There's a fair bit of argument about which chip was the first microprocessor. Several candidates were introduced in the short period between 1968 and 1971. These are all interesting chips, but most of them have been forgotten. In this section, I'll discuss the various candidates, but first I'll look at whether it makes sense to consider the microprocessor an invention at all.

Some hardware background will help the following discussion. The transistors you're probably most familiar with are bipolar transistors—they are fast, but bipolar integrated circuits can't contain large numbers of transistors. The TTL chips used in the Datapoint 2200 and other systems are built from bipolar transistors. A later technology produced MOS transistors, which are slower than bipolar, but can now be squeezed onto a chip by the millions or billions. The final term is LSI, or Large-Scale Integration, referring to an integrated circuit containing a large number of components: 100 gates or more. The introduction of MOS/LSI is what made it possible to build a processor with a few chips or a single chip, rather than a board full of chips.

The inevitability of microprocessors

One perspective is that the microprocessor isn't really an invention, but rather something that everyone knew would happen, and it was just a matter of waiting for the technology and market to be correct. This view is convincingly presented in Schaller's thesis,[19] which has some interesting quotes:
The idea of putting the computer on a chip was a fairly obvious thing to do. People had been talking about it in the literature for some time.—Ted Hoff, 4004 designer
At the time in the early 1970s, late 1960s, the industry was ripe for the invention of the microprocessor.—Hal Feeney, 8008 designer
The question of ‘who invented the microprocessor?’ is, in fact, a meaningless one in any non-legal sense.—Microprocessor Report

I largely agree with this perspective. It was obvious in the late 1960s that a CPU would eventually be put on a chip; it was just a matter of time for the density of MOS chips to improve to the point that it was practical. In addition, in the 1960s, MOS chips were slow, expensive, and unreliable[11]—a computer built out of a bunch of bipolar chips was obviously better, and this included everything from the IBM 360 mainframe to the PDP-11 minicomputer to the desktop Datapoint 2200. At first, a MOS-based computer only made sense for low-performance applications (calculators, terminals), or when high density was required (aerospace, calculators).

To summarize this view, the microprocessor wasn't anything to specifically invent, but just something that happened when MOS technology improvements and a marketing need made it worthwhile to build a single chip processor.

Defining "microprocessor"

Picking the first microprocessor is largely a linguistic exercise in how you define "microprocessor". It also depends on how you define "first": this could be first design, first manufactured chips, first sales, or first patent. But I think for reasonable definitions, the TMX 1795 is first.

There's no official definition of a microprocessor. Various sources define a microprocessor as a CPU on a chip, or an arithmetic-logic unit (ALU) on a chip, or on a few chips. One interesting perspective is that "microprocessor" is basically a marketing term driven by the need of companies like Intel and Texas Instruments to give a label to their new products.[11]

In any case, I consider a microprocessor to be a CPU on a single chip, including the ALU, control, and registers. Storage and I/O are generally outside the chip. There will generally be additional support and interface chips such as buffers, latches, and clock generation. I also consider it important that a microprocessor be programmable as a general-purpose computer. This, I think, is a reasonable definition of a microprocessor.

One architecture that I don't consider a microprocessor is a microcoded system, where the control unit is separate and provides micro-instructions to control the ALU and the rest of the system. In such a system, the microcode can be provided by a ROM, with a latch stepping through the micro-instructions. Since the ALU doesn't need to do instruction decoding, it can be a much simpler chip than a full-blown CPU. I don't think it's fair to call it a microprocessor.
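To make the distinction concrete, here's a minimal sketch of such a microcoded controller in Python. The microword format, operations, and program are all invented for illustration; this isn't any particular chip's microcode:

```python
# A microcoded controller: the ROM holds micro-instructions, a latch holds
# the current microcode address, and each microword supplies the address of
# its successor, so no program counter is needed. The ALU does no
# instruction decoding; it just performs whatever operation is selected.
MICROCODE_ROM = {
    # address: (alu_op, operand, next_address)  (hypothetical microword format)
    0: ("load", 5, 1),
    1: ("add",  3, 2),
    2: ("halt", 0, 2),
}

def run(rom):
    latch = 0   # latch holding the current microcode address
    acc = 0     # accumulator in the (separate, simple) ALU
    while True:
        alu_op, operand, next_address = rom[latch]  # ROM lookup
        if alu_op == "load":
            acc = operand
        elif alu_op == "add":
            acc += operand
        elif alu_op == "halt":
            return acc
        latch = next_address  # the next address comes from the microword

print(run(MICROCODE_ROM))  # prints 8
```

Because the sequencing lives entirely in the ROM and latch, the arithmetic chip can be far simpler than a CPU; this is exactly the issue with the AL1 demonstration described below.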

Timeline of early microprocessors

There are several processors that are frequently argued to be the first microprocessor, and they were created in a span of just a few years. I created the timeline below to show when they were developed. In the remainder of this article, I describe the different processors in detail.

Timeline of early MOS/LSI processors.

Four-Phase AL1

If one person could be considered the father of MOS/LSI processors, it would be Lee Boysel. While working at Fairchild, he came up with the idea of a MOS-based computer and methodically designed and built the necessary cutting-edge chips (ROM in 1966, ALU in 1967, DRAM in 1968). Along the way he published several influential articles on MOS chips, as well as a 1967 "manifesto" explaining how a computer comparable to the IBM 360 could be built from MOS.

Four-Phase AL4 arithmetic-logic chip (variant of AL1)

Boysel left Fairchild and started Four-Phase Systems in October 1968 to build his MOS-based system. In 1970, he demoed the System/IV, a powerful 24-bit computer. The processor used 9 MOS chips: three 8-bit AL1 arithmetic/logic chips, three microcode ROMs, and three RL random logic chips. This computer sold very well and Four-Phase became a Fortune 1000 company before being acquired by Motorola in 1981.

Die photo of Four-Phase AL1 arithmetic-logic chip. Courtesy of Computer History Museum.

As described earlier, Boysel used an AL1 chip as a processor in a courtroom demonstration system in 1995 to show prior art against TI's patents. Given this demonstration, why don't I consider the AL1 to be the first microprocessor? It used an AL1 chip as the processor, along with ROM, RAM, I/O, and some address latches, so it seems like a single-chip CPU. But I've investigated this demonstration system closely, and while it was a brilliant hack, there's also some trickery. The ROM and its associated latch are actually set up as a microcode controller, providing 24 control lines to the rest of the system. The ROM controls memory read/write, selects an ALU operation, and provides the address of the next microcode instruction (there's no program counter). After close examination, it's clear that the AL1 chip is acting as an Arithmetic/Logic chip (thus the AL1 name), and not as a CPU.

There are a few other things that show the AL1 wasn't working as a single-chip computer. The die photo published as part of the trial has the components of the AL1 chip labeled, including "Instruction Register 23 bits". However, that label is entirely fictional—if you study the die photo closely, there's no instruction register or 23 bits there, just vias where the ground lines pass under the clock lines. I can only conclude that this label was intended to trick people at the trial. In addition, the AL1 block diagram used at the trial has a few subtle changes from the originally-published diagram, removing the program counter and adding various interconnections. I examined the code (microcode) used for the trial, and it consists of super-bizarre microcode instructions nothing like the AL1's original instruction set.

Detail of AL1 die photo showing fictional 'Instruction Register 23 bits' label.

While the demo was brilliant and wildly successful at derailing the Texas Instruments lawsuits, I don't see it as showing the AL1 was a single-chip microprocessor. It showed that combined with a microcode controller, the AL1 could be used as a barely-functioning processor. In addition, you could probably use a similar approach to build a processor out of an earlier ALU chip such as the 74181 or Fairchild 3800, and nobody is arguing that those are microprocessors.

Looking at the dates, it appears that Viatron (described below) shipped their MOS/LSI computer a bit before Four-Phase, so I can't call Four-Phase the first MOS/LSI computer. However, Four-Phase did produce the first computer with semiconductor memory (instead of magnetic core memory), and thus the first all-semiconductor computer.

Viatron

Viatron is another interesting but mostly forgotten company. It began as a hugely-publicized startup founded in November 1967. About a year later, it announced the System 21, a 16-bit minicomputer with smart terminals, tape drives, and a printer, built from custom MOS chips. The plan was volume: by building a large number of systems, Viatron hoped to produce the chips inexpensively and lease the systems at amazingly low prices—computer rental for $99 a month.[20] Unfortunately, Viatron ran into poor chip yields, delays, and price increases. As a result, the company went spectacularly bankrupt in March 1971.

The Viatron System 21: color display, terminal keyboard, 'robot' printer, and computer. From Viatron brochure, via bitsavers.org.

Viatron literally originated the word "microprocessor"—they were the first to use it, in their October 1968 announcement of the 2101 microprocessor. However, this microprocessor wasn't a chip—it was an entire smart terminal, leasing for the incredibly low price of $20 a month. Viatron used the term microprocessor to describe the whole desktop unit, complete with keyboard and tape drives. Inside the microprocessor cabinet were a bunch of boards—the processor itself consisted of 18 custom MOS chips on 3 boards, with more boards of custom MOS and CMOS chips for the keyboard interface, tape drive, memory, and video display.

The 3-board processor inside the 2101 was specialized for its terminal role. It read and wrote multiple I/O control lines, moved data between I/O devices and memory, updated the display, and provided serial input and output.[20] The processor was very limited, not even providing arithmetic. Nonetheless, I think the Viatron 2101 "microprocessor" can be considered the first (multichip) MOS/LSI processor, shipping before the Four-Phase System/IV.

CPU board #2 of three from the Viatron System 21 terminal. Top row holds two RAR register chips and six ROM chips. Bottom chips are IBR multiplexer, flag chip, and ROM multiplexer. Photo courtesy of UMMR.

Viatron also built an advanced general-purpose 16-bit computer, the 62-pound 2140 minicomputer, which leased for $99 a month and came with a Fortran compiler. It had 4K 16-bit words of core memory and two 16-bit arithmetic units. The microcoded processor had an extensive instruction set including multiply and divide operations, and supported 48-bit arithmetic. Coming on the market slightly before the Four-Phase computer, the Viatron 2140 appears to be the first MOS/LSI general-purpose computer. Unfortunately, sales were poor and the 2140 project ended in 1973.

MP944 / F-14 CADC

The Central Air Data Computer was a flight control system for the F-14 fighter, using the MP944 MOS/LSI chipset developed between 1968 and 1970. This computer processed information from sensors and generated outputs for instrumentation and to control the aircraft. The main operation it performed was computing polynomial functions on the inputs. This chipset was designed by Ray Holt, who argues on his website (firstmicroprocessor.com) that this 20-bit serial computer should be considered the first microprocessor.

Block diagram of the F14A CADC computer. Module 1 performs multiplication, module 2 performs division, and module 3 performs special logic functions. From "Architecture Of A Microprocessor".

The architecture of this computer is pretty unusual; it consists of three functional modules: a multiplier, a divider, and "special logic". Each functional unit has a microcode ROM (including an address register) that provides a 20-bit microinstruction, a data steering unit (SL) that selects between 13 data inputs and performs addition, the arithmetic chip (multiply (PMU), divide (PDU) or special logic (SLF)), and a small RAM chip for storage (RAS). Each data line transfers a 20-bit fixed-point value, shifted serially one bit at a time. The main purpose of the SLF (special logic function) chip is to clamp a value between upper and lower bounds. It also converts Gray code to binary[21] and performs other logic functions.[22]

I don't consider this a microprocessor since the control, arithmetic, and storage are split across four separate chips in each functional unit.[23] Not only is there no CPU chip, there's not even a general-purpose ALU chip. Computer architecture expert David Patterson says, "No way Holt's computer is a microprocessor, using the word as we mean it today."[24] Even if you define a microprocessor as including a multi-chip processor, Viatron beat the CADC by a few months. While the CADC processor is very interesting, I don't see any way that it can be considered the first microprocessor.

Intel 4004

The well-known Intel 4004 is commonly considered the first microprocessor, but I believe the TMX 1795 beat it. I won't go into details of how Busicom contracted with Intel to have the 4004 built for a calculator, since the story is well-known.[25] I did a lot of research into the dates of the 4004 to determine which was first: the 4004 or the TMX 1795. According to the 4004 oral history, the first fully-working 4004 chips arrived at the end of February 1971 and shipped to Busicom in March. TI wrote a draft announcement with photos of the TMX 1795 on February 24, 1971, and it was written up in Businessweek in March. The TMX 1795 was delivered to Datapoint in the summer and TI applied for a patent on August 31. The 4004 wasn't announced until November 15.

To summarize, the dates are very close, but it appears that the TMX 1795 chip was built first (assuming the chip was working for the February 24 writeup) and announced first, while the 4004 was delivered to customers first. On the other hand, Federico Faggin claims that the 4004 was a month or two before the TMX 1795.[17] However, the TMX 1795 was patented; if the 4004 had really beaten the TMX 1795, I assume someone would have raised that during all the patent litigation (rather than building a demo out of the Four-Phase AL1). Based on the evidence, I conclude that the TMX 1795 was slightly before the 4004 as the first microprocessor built, while the 4004 is clearly the first microprocessor sold commercially. Texas Instruments claims on their website: "1971: Single-chip microprocessor invented", and I agree with this claim.

Intel 8008

Many people think of the Intel 8008 as the successor to the 4004, but the two chips are almost entirely independent and were developed roughly in parallel. In fact, some of the engineers on the 4004 worried that the 8008 would come out first, because the 8008 project consisted of one chip versus the four in the 4004 project. The 8008 was originally called the 1201 under Intel's naming scheme because it was the first custom MOS chip Intel was developing. The 4004 would have been the 1202, except that Faggin, a key engineer on the project, convinced management that 4004 was a much better name. The 1201 was renamed the 8008 before release to fit the new naming pattern.

According to my research, the 8008 may be the first microprocessor described in print. I found a reference to it (although without the 8008 name) in a four-paragraph article in Electronic Design, October 25, 1970, discussing Intel's chip under development for the Datapoint 2200. The article briefly describes the chip's instruction set, architecture, and performance. It said the processor would be used in the 2200 "smart terminal" (which of course didn't happen), and said the chip was scheduled for January 1971 delivery (it slipped and was officially announced in March 1972).

Gilbert Hyatt's microcontroller patent

The story of how Gilbert Hyatt obtained a broad patent covering the microcontroller in 1990 and lost it a few years later is complex, but I will try to summarize it here. The story starts with the founding of Micro-Computer Incorporated in 1968. Hyatt built a 16-bit serial computer out of TTL chips and sold it as a numerical control computer. He had plans to build this processor as a single chip, but before that could happen, the company went out of business in 1971. Mr. Hyatt claims that investors Noyce and Moore (of Intel fame) cut off funding because "their motive was to sell the company and take the technology."

The Nu-troller IV CNC machine using Gilbert Hyatt's 16-bit processor built from TTL chips. Photo from Numerical Control Society Proceedings, 1971.

In 1990, seemingly out of nowhere, Gilbert Hyatt received a very general patent (4,942,516) covering a computer with ROM and storage on a single chip. Hyatt had filed a patent on his computer in 1969, and due to multiple continuations, he didn't receive the patent until 1990.[15] This patent caused considerable turmoil in the computer industry, since pretty much every microcontroller was covered by it. Hyatt ended up receiving substantial licensing fees until Texas Instruments challenged the patent a few years later and the patent office canceled Hyatt's key patent claims.[26] In any case, Gilbert Hyatt's microprocessor was never built (except in TTL form), there was no design for it, and the patent didn't provide any information on how to put the computer on a chip. Thus, while this computer built from TTL chips is interesting, it never became a microprocessor.

TMS 0100 calculator-on-a-chip / microcontroller

Texas Instruments created the TMS 1802NC calculator-on-a-chip in 1971; this was the first chip in the TMS 0100 series.[27] This chip included program ROM, storage, control logic and an ALU that performed arithmetic on 11-digit decimal numbers under the control of 11-bit opcodes.

The TMS 1802 calculator chip, first chip in the TMS 0100 series. Photo courtesy of datamath.org.

While the TMS 0100 series was usually called a calculator-on-a-chip, it was also intended for microcontroller tasks. The patent describes "Programming of the calculator system for non-calculator functions", including digital volt meter, tax-fare meter, scale, cash register operations, a controller, arithmetic teaching unit, clock, and other applications. As the first "computer-on-a-chip", the TMS 0100 gave Texas Instruments several important microcontroller patents, which they used in patent litigation (including the Dell case described earlier).[14] (The key difference between a microcontroller and a microprocessor is that a microcontroller includes the storage and program ROM, while a microprocessor has them externally.)

The TMX 1795 (first microprocessor) and TMS 0100 (first microcontroller) were both developed by Gary Boone and team (Mike Cochran, Jerry Vandierendonck, and others) at Texas Instruments almost simultaneously, which is a remarkable accomplishment. The TMS1802NC / TMS 0100 was announced September 17, 1971.

In 1974, Texas Instruments released the successor to the TMS 0100 series, the TMS 1000 series, and marketed it as a microcontroller. Externally, the TMS 1000 series had I/O similar to the TMS 0100 series, but internally it was entirely different. The 11-bit opcodes of the TMS 0100 were replaced by 8-bit opcodes and the 11-digit decimal storage was replaced by 4-bit binary storage. Some sources call the TMS 1000 series the first microcontroller or first microprocessor. This is entirely wrong and based on confusion between the two series. Confusing the TMS 0100 and TMS 1000 is like confusing the 8008 and 8080: the latter is a related, but entirely new chip.

Conclusions

Because the TMX 1795 wasn't commercially successful, the chip is almost forgotten, even though it has an important historical role. I've uncovered some of the history of this chip and taken a detailed technical look at the other chips that are sometimes considered the first microprocessor. The "first microprocessor" title depends on how exactly you define a microprocessor, but the TMX 1795 is first under a reasonable definition—a CPU-on-a-chip. It's interesting, though, how multiple MOS/LSI processor chips were built in a very short span once technology permitted, and how most of them are now almost entirely forgotten. In a future article, I'll look at the implementation and circuitry of the TMX 1795 in detail.

Thanks to Austin Roche for detailed information on Datapoint. Thanks to K. Kroslowitz of the Computer History Museum for obtaining TMX 1795 photos for me; the chip is so obscure, there were no photos of it on the internet up until now.

Notes and references

[1] The Datapoint Corporation was founded in 1968 as CTC (Computer Terminal Corporation); CTC later changed its name to Datapoint because the name of its product was much better known than the company name itself. For simplicity, I'll use Datapoint instead of CTC to refer to the company in this article.

[2] The Computer History Museum's Oral History Panel on the Development and Promotion of the Intel 8008 Microprocessor discusses the history of the 8008 in great detail. The story of the initial idea to build a single chip for Datapoint is on page 2. Texas Instruments' chip development is on page 3-4. The use of little-endian format is discussed on page 5. TI's chip is discussed on page 6. Automated design of TI's chip is on page 25.

[3] The Computer History Museum's Oral History of Victor (Vic) Poor provides a lot of history of Datapoint. Page 34 describes Stan Mazor suggesting that Intel put Datapoint's processor on a single chip. Page 43 describes the TI chip and its noise issues. Page 46 explains how Datapoint's patent attorney told them there was nothing patentable about the single-chip microprocessor.

[4] Much of the information on Datapoint comes from the book Datapoint: The Lost Story of the Texans Who Invented the Personal Computer Revolution. The story of Datapoint suggesting a single-chip CPU to Noyce is on pages 70-72.

[5] The 8008 processor was originally given the number 1201 under Intel's numbering scheme. The first digit indicated the type of circuitry: 1 for p-MOS. The second digit indicated the type of chip: 2 for random logic. The last two digits were a serial number. For some reason, the 4004 was numbered after the 8008 and would have been the 1202. Fortunately, its developers argued that 4004 would be a better name for marketing reasons. The 1201 was later renamed the 8008 to fit this pattern. Thus, the 8008 is often thought of as a successor to the 4004, even though the chips were developed in parallel and have totally different architectures.

[6] A switching power supply is much more efficient than the less complex linear power supplies commonly used at the time, so it generates much less heat. The Datapoint 2200 used a push-pull topology switching power supply. Steve Jobs called the Apple II's power supply "revolutionary", saying "Every computer now uses switching power supplies, and they all rip off Rod Holt's design." Note that the Datapoint 2200 with its switching power supply came out 6 years before the Apple II. I've written a lot more about the history of switching power supplies here. (By the way, don't confuse Ray Holt of the CADC with Rod Holt of Apple.)

[7] According to Ted Hoff[18], Intel had a flaw in the original interrupt handling specification for the 8008 and TI copied that error in the TMX 1795, demonstrating that TI was using Intel specifications. In particular, when the 8008 processor is interrupted, a RESTART instruction can be forced onto the bus, redirecting execution to the interrupt handler. The stack pointer must be updated by the RESTART instruction to save the return address, but Intel didn't include that in the initial specification. (The RESTART instruction is not part of the original Datapoint architecture.)

I've verified from the patent that the RESTART logic in the TMX 1795 doesn't update the stack pointer, so interrupt handling is broken and there's no way to return from an interrupt. (The interrupt handling section of the TMX 1795 patent is kind of a mess. It discusses a "CONTINUE" instruction that doesn't exist.) According to Ted Hoff, this demonstrates that Texas Instruments was using Intel's proprietary specification without entirely understanding it.
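Here's a small Python sketch of the consequence, modeling an 8008-style address stack where the stack pointer selects which register acts as the program counter. The layout is a simplified assumption for illustration, not the actual TMX 1795 circuitry:

```python
# Simplified 8008-style address stack: eight address registers, with the
# stack pointer selecting the one that currently serves as the PC.
class CPU:
    def __init__(self):
        self.stack = [0] * 8   # address stack; stack[sp] is the current PC
        self.sp = 0

    def pc(self):
        return self.stack[self.sp]

    def restart(self, vector, update_sp):
        # RESTART is forced onto the bus by an interrupt. Correct behavior
        # (update_sp=True) advances the stack pointer first, preserving the
        # interrupted PC as a return address.
        if update_sp:
            self.sp = (self.sp + 1) % 8
        self.stack[self.sp] = vector   # without the update, this clobbers
                                       # the saved PC

    def ret(self):
        self.sp = (self.sp - 1) % 8    # pop back to the return address

cpu = CPU()
cpu.stack[0] = 0x123                 # program executing at address 0x123
cpu.restart(0x00, update_sp=False)   # TMX 1795 behavior per the patent
cpu.ret()
print(hex(cpu.pc()))                 # 0x0, not 0x123: return address lost
```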

[8] The text of the TMX 1795 announcement in Businessweek, March 27 1971, p52:
"Computer Terminal Corp., of San Antonio, Tex., has designed a remote cathode-ray computer terminal no bigger than a typewriter that also functions as a powerful minicomputer. In what must rank as a milestone in LSI, Texas Instruments has managed to jam this terminal's entire central processing unit- the equivalent of 3,100 MOS transistors-on a single custom chip roughly 2 in. square."

[9] In the Intel 8080 Oral History, the layout of the TMX 1795 is criticized on page 35.

[10] One enduring legacy of the Datapoint 2200 is the little-endian storage used by Intel x86 processors, which is backwards compared to most systems. Because the Datapoint 2200 had a serial processor, it accessed bits one at a time. For arithmetic, it needed to start with the lowest bit, in order to handle carries (the same as long addition starts at the right). As a consequence of this, Datapoint 2200 instructions had the low-order byte before the high-order byte. There's no need for a processor accessing bits in parallel to be little endian: processors such as the 6800 and 8051 use the more natural big-endian format. But all the microprocessors descended from the 8008 (8080, Z80, x86) kept the little-endian format used by Datapoint. (See also 8008 Oral History, page 5.)
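As a sketch of the arithmetic motivation, here's multi-byte addition in Python that consumes bytes in the order a little-endian machine stores them. The function is purely illustrative, not Datapoint code:

```python
# Adding multi-byte numbers stored low-order byte first: the carry
# propagates naturally as each byte arrives, so a serial machine never
# needs to buffer the whole number before starting the add.
def add_little_endian(a: bytes, b: bytes) -> bytes:
    result, carry = [], 0
    for x, y in zip(a, b):            # bytes arrive low-order first
        total = x + y + carry
        result.append(total & 0xFF)   # keep the low 8 bits
        carry = total >> 8            # carry into the next byte
    return bytes(result)

# 0x1234 + 0x0101 = 0x1335; both values stored little-endian:
print(add_little_endian(b'\x34\x12', b'\x01\x01').hex())  # '3513'
```

A big-endian machine storing the high byte first would have to read to the end of the number before it could begin adding.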

[11] The perspective that Four-Phase and Intel treated the microprocessor differently because Four-Phase was a computer manufacturer and Intel was a chip manufacturer is discussed at length in When is a microprocessor not a microprocessor? in Exposing Electronics. This also goes into the history of Boysel and Four-Phase. It contains the interesting remark that the Texas Instruments litigation turned an old integrated circuit (the Four-Phase AL1) into a new microprocessor. Related discussion is in the book To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS Technology.

[12] While designing the 4004, Intel had a little-known backup plan in case the 4004 turned out to be too complex to build. This backup plan would also allow Intel to sell processors even though Busicom had exclusive rights to the 4004. (The 4004 was built under contract to calculator manufacturer Busicom, which had exclusive rights to the chip, rights it later gave up. Federico Faggin explains (Oral History) that while Busicom had exclusive rights to use the 4004, they didn't own the intellectual property, so Intel was free to build similar processors.) This backup plan was the simpler 4005 chip. While the 4004 had 16 registers and an on-chip stack, the 4005 just had the program counter, a memory address register, and an accumulator, using external RAM for registers. When the 4004 chip succeeded, Intel didn't need the 4005 and licensed it to a Canadian company, MicroSystems International, which released the chip as the MF7114 in the second half of 1972. Sales were poor and the MF7114 was abandoned in 1973, so the chip is almost unknown today. The history of the MF7114 is described in detail in The MIL MF7114 Microprocessor.

[13] The description "TI versus Everybody trial" is from The Evolution to the Computer History Museum by Gordon Bell, p. 28. Texas Instruments was referred to as "The Dallas Legal Firm" by the CEO of Cypress Semiconductor, according to History of Semiconductor Engineering, pp. 194-195.

[14] Texas Instruments received several broad patents on the TMX 1795. 3,757,306: "Computing Systems CPU" covers a CPU on a single chip with external memory. 4,503,511: "Computing system with multifunctional arithmetic logic unit in single integrated circuit" covers an ALU, registers, and logic on a chip. 4,225,934: "Multifunctional arithmetic and logic unit in semiconductor integrated circuit" describes an ALU on a single chip with a parallel bus.

The Texas Instruments v. Dell litigation featured multiple patents. The TMX 1795 patent in the litigation was 4,503,511: "Computing system with multifunctional arithmetic logic unit in single integrated circuit"; the other TMX 1795 patents were not part of the litigation. Several were TMS 0100 calculator/microcontroller patents: 4,326,265: "Variable function programmed calculator", 4,471,460: "Variable function programmed system", 4,471,461: "Variable function programmed system", 4,485,455: "Single-chip semiconductor unit and key input for variable function programmed system". Finally there were some miscellaneous patents: 3,720,920: "Open-ended computer with selectable I/O control", 4,175,284: "Multi-mode process control computer with bit processing", RE31,864: "Self-test feature for appliance or electronic systems operated by microprocessor".

The broader lawsuit Texas Instruments v. Daewoo, et al was against computer manufacturers Cordata (formerly Corona Data Systems), Daewoo, and Samsung. It went on from 1990 to 1993, and ended up with the companies needing to license the patents. The Dell lawsuit, Texas Instruments v. Dell, also went from 1990 to 1993 but ended in a settlement favorable to Dell after Boysel's demonstration of the AL1 chip acting as a single-chip CPU in 1992.

[15] It may seem strange that someone can get a patent a decade or two after their invention. This is accomplished through a "continuation", which lets you file updated patents with additional claims. This process can be dragged out for decades, resulting in a submarine patent.

Patents used to be good for 17 years from the date they were granted, no matter how delayed. This delay can make a patent much more valuable; there are a lot more companies to sue over a microprocessor patent in 1985 than in 1971, for instance. Plus, if you have a similar non-delayed patent too, it's like having a free extension on the patent. US patents are now valid for 20 years from filing, eliminating submarine patents (except for those still in the system).

[16] Ted Hoff's article Impact of LSI on future minicomputers, IEEE International Convention Digest, Mar. 1970, discusses the difficulty of building LSI parts that can be used in large (and thus cost-effective) volumes. He suggests that since a MOS chip can hold 1000 to 6000 devices, a standardized CPU could be built on a single LSI chip and sold for $10 to $20.

[17] The 4004 Oral History has information on the 4004 timeline. Federico Faggin says that the TI chip was a month or two after the 4004 (page 32). Page 33 discusses the interrupt problem on the TMX 1795.

[18] Interview with Marcian (Ted) Hoff (archived) provides a lot of background on the development of the 4004. It describes how by October 1969 Intel was committed to building the 4004 as a computer on a chip. The first silicon for the 4004 was in January 1971, and by February 1971 the chip was working. In May 1971, Busicom ran into financial difficulties and negotiated a lower price for the 4004 in exchange for giving up exclusive rights to the chip. He describes how at the Fall Joint Computer Conference, many customers would argue that the 4004 wasn't a computer but just a bit slice; after looking at the datasheet, they realized that it was a computer. Ted Hoff also describes the origins of the 8008, saying that he and Stan Mazor proposed the single-chip processor to Datapoint, much to Vic Poor's surprise, but later Vic Poor claimed that he had planned a single-chip processor all along.

[19] The thesis Technological Innovation in the Semiconductor Industry by Robert R. Schaller, 2004, has several relevant chapters. Chapter 6 analyzes the history of the integrated circuit in detail. Chapter 7, The Invention of the Microprocessor, Revisited, provided a lot of background for this article. Chapter 8 is a detailed analysis of Moore's Law.

[20] By carefully studying the Viatron terminal schematics, I uncovered details about the multi-chip processor in the Viatron terminal. The processor handled 8-bit characters and was programmed in 12-bit microcode, 512 words stored in ROM chips. It had three data registers (IBR, TEMP, and AUX), and two microcode ROM address registers (RAR and RAAR). Arithmetic operations appear to be entirely lacking from the processor. The memory was built from shift register memory chips and was used for the display. The Viatron price list is in the Viatron System 21 Brochure.

[21] The Gray code is a way of encoding values in binary so only one bit changes at a time. This is useful for mechanical encoding because it avoids errors during transitions. For instance, if you use binary to encode the position of an aircraft control, as it moves from 3 to 4 the binary values are 011 and 100. If the first bit changes before the rest, you get 111 (i.e. 7) and your plane may crash. With Gray code, 3 and 4 are encoded as 010 and 110. Since only one bit changes, it doesn't matter if the bits don't change simultaneously—you either have 3 or 4 and no bad values in between.
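The standard reflected Gray code can be computed in a couple of lines of Python. This is the textbook encoding; I'm not claiming the CADC's sensors used exactly this code:

```python
# Reflected (standard) Gray code: adjacent values differ in exactly one bit.
def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:        # XOR together all right-shifts of g
        n ^= g
        g >>= 1
    return n

print(format(binary_to_gray(3), '03b'))  # 010
print(format(binary_to_gray(4), '03b'))  # 110
print(gray_to_binary(0b110))             # 4
```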

[22] Ray Holt's firstmicroprocessor.com calls the SLF (special logic function) chip the CPU. In the original paper, this chip was not called the CPU and was only described briefly. In the paper, each of the three multi-chip functional units is called a CPU. It's clear that the SLF chip was recently renamed the CPU just to support the claim that the CADC was the first microprocessor.

[23] The MP944 chips had considerably fewer transistors than the 4004: 1063 in the PMU, 1241 in the PDU, 743 in the SLF, and 771 in the SLU, compared to 2300 in the 4004.

[24] David Patterson's analysis of the CADC computer can be found on the firstmicroprocessor.com website.

[25] The inventors of the 4004 wrote a detailed article about the chip: The history of the 4004. Other articles with details on the 4004's creation are The birth of the microprocessor and The Microprocessor.

[26] For more information on Gilbert Hyatt's patent, see "Chip Designer's 20-Year Quest"; "For Texas Instruments, Some Bragging Rights"; "Inventor battling U.S. over patents from '70s"; and "Gilbert Who? An obscure inventor's patent may rewrite microprocessor history".

The specific legal issues and maneuvering over Hyatt's patent are complex, but described in the appeal summary and Berkeley Technology Law Journal. If you try to follow this, note that Boone's '541 application and '541 patent are two totally different things, even though they have the same title and end in 541. The presentation Patent litigations that shaped their industries provides an overview of the litigation over the "Single Chip Computer" and other inventions.

[27] Note that the TMS 0100 is actually a series of chips (TMS 01XX) and likewise the TMS 1000 is also a series. Confusingly, the first chip in the TMS 0100 series was the TMS 1802NC calculator chip, which was renamed the TMS 0102; despite its name, it was not in the TMS 1000 series.

[28] The Datapoint 2200 was a serial processor—while it was an 8-bit processor, it operated on one bit at a time, with a one-bit ALU and a one-bit internal bus. While this seems bizarre from our perspective, serial implementation was a fairly common way to reduce the cost of a processor; the PDP-8/S was another serial minicomputer. (This should not be confused with the Motorola MC14500B, which genuinely is a one-bit processor designed for simple control applications.)
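As a sketch of what a one-bit ALU does, here's bit-serial addition in Python, one full-adder step per simulated clock cycle. This is illustrative, not a model of the actual 2200 circuitry:

```python
# Bit-serial addition: an 8-bit add takes eight clock cycles through a
# single one-bit full adder, starting at the least-significant bit.
def serial_add(a: int, b: int, bits: int = 8) -> int:
    result, carry = 0, 0
    for i in range(bits):                # one bit per "clock cycle"
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        s = abit ^ bbit ^ carry                          # full-adder sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))  # full-adder carry out
        result |= s << i
    return result                        # final carry out is simply dropped,
                                         # like an 8-bit register overflowing

print(serial_add(100, 55))  # 155
```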