
Inside the 8086 processor's instruction prefetch circuitry

The groundbreaking 8086 microprocessor was introduced by Intel in 1978 and led to the x86 architecture that still dominates desktop and server computing. One way that the 8086 increased performance was by prefetching: the processor fetches instructions from memory before they are needed, so the processor can execute them without waiting on the (relatively slow) memory. I've been reverse-engineering the 8086 from die photos and this blog post discusses what I've uncovered about the prefetch circuitry.

The 8086 was introduced at an interesting point in microprocessor history, where memory was becoming slower than the CPU. For the first microprocessors, the speed of the CPU and the speed of memory were comparable.1 However, as processors became faster, the speed of memory failed to keep up. The 8086 was probably the first microprocessor to prefetch instructions to improve performance. While modern microprocessors have megabytes of fast cache2 to act as a buffer between the CPU and much-slower main memory, the 8086 has just 6 bytes of prefetch queue. However, this was enough to increase performance by about 50%.3

The die photo below shows the 8086 microprocessor under a microscope. The metal layer on top of the chip is visible, with the silicon and polysilicon mostly hidden underneath. Around the edges of the die, bond wires connect pads to the chip's 40 external pins. I've labeled the key functional blocks; ones that are important to the prefetch queue are highlighted in red and will be discussed in detail below. Architecturally, the chip is partitioned into a Bus Interface Unit (BIU) at the top and an Execution Unit (EU) below. The BIU handles memory accesses, while the EU executes instructions.

The 8086 die under a microscope, with main functional blocks labeled. This photo shows the chip's single metal layer; the polysilicon and silicon are underneath. Click on this image (or any other) for a larger version.

Prefetching and the architecture of the 8086

Prefetching had a major impact on the design of the 8086. Earlier processors such as the 6502, 8080, or Z80 were deterministic. The processor fetched an instruction, executed the instruction, fetched the next instruction, and so forth. Memory accesses corresponded directly to instruction fetching and execution and instructions took a predictable number of clock cycles. This all changed with the introduction of the prefetch queue. Memory operations became unlinked from instruction execution since prefetches happen as needed and when the memory bus is available.

Since memory operations and instruction execution happen independently, the implementors of the 8086 split the chip into two processing units: the Bus Interface Unit (BIU) that handles memory accesses, and the Execution Unit (EU) that executes instructions, as shown below.4 The Bus Interface Unit contains the 6-byte instruction prefetch queue; it supplies instructions to the Execution Unit via the Q (queue) bus. The adder (Σ) performs address calculation, adding the segment register base to an address offset, among other things. The Execution Unit is what comes to mind when you think of a processor: it has most of the registers, the arithmetic/logic unit (ALU), and the microcode that implements instructions. The address adder and the ALU are independent arithmetic units. The segment registers (CS, DS, SS, ES) and the Instruction Pointer (IP) are in the Bus Interface Unit since they are directly involved in memory accesses, while the general-purpose registers are in the Execution Unit.

Block diagram of the 8086 processor. This diagram differs from most 8086 block diagrams because it shows the actual physical implementation, rather than the programmer's view of the processor. The "Internal Communication Registers" consist of the Indirect Register (IND) and the Operand Register (OPR). These hold a memory address and memory data value respectively. From The 8086 Family User's Manual.

The 8086's segment registers play an important part in this architecture, so I'll review them quickly. One of the challenges of the 8086 was how to support more than 64K of memory with 16-bit registers. The much-reviled solution was to create a 1-megabyte (20-bit) address space consisting of 64K segments, with segment registers indicating the start of each segment. Specifically, a memory address was specified by a 16-bit offset address along with a particular segment register selecting a segment (Code Segment, Data Segment, Stack Segment, or Extra Segment). The segment register's value was shifted by 4 bits to give the segment's 20-bit base address. The 16-bit offset address was added, yielding a 20-bit memory address. This gave the processor a 1-megabyte address space, although only 64K could be accessed without changing a segment register.
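As a quick sanity check of that arithmetic, here is a minimal sketch in Python. The real chip does this with the dedicated address adder in the Bus Interface Unit, not software; the function name is mine.

```python
def physical_address(segment: int, offset: int) -> int:
    """20-bit physical address: segment shifted left 4 bits, plus the 16-bit offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # wraps within the 1-megabyte space

# Example: segment 0x1234 and offset 0x0010 address byte 0x12350.
assert physical_address(0x1234, 0x0010) == 0x12350
```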

It may seem inefficient for the Bus Interface Unit to have its own adder instead of using the ALU, but there are a couple of reasons for the separate adder. First, every memory access uses the adder at least once to add the segment base and offset. The adder is also used to increment the PC or index registers. Since these operations are so frequent, they would create a bottleneck if they used the ALU. Second, since the Execution Unit and the Bus Interface Unit run asynchronously with respect to each other, it would be complicated to share the ALU without causing delays and conflicts.

Prefetching had another major but little-known effect on the 8086 architecture: the designers were considering making the 8086 a two-chip microprocessor. Prefetching, however, required a one-chip design because the number of control signals required to synchronize prefetching across two chips exceeded the package pins available. This became a compelling argument for the one-chip design that was used for the 8086.3 (The unsuccessful Intel iAPX 432, which was under development at the same time, ended up being a two-chip processor: one to fetch and decode instructions, and one to execute them.)

Implementing the queue

The instruction prefetch queue is implemented with three 16-bit queue registers along with two hardware pointers that keep track of the current position in the queue. One two-bit counter keeps track of the current read position from 0 to 2, i.e. the queue register that will provide the next instruction. The second counter keeps track of the current write position, i.e. the queue register that will receive the next instruction from memory. As words are fetched from the queue, the read pointer advances. As words are added to the queue, the write pointer advances. Because the queue registers hold words, while the prefetch circuitry provides bytes, another flip-flop keeps track of whether the high byte or the low byte of the word is being used. I call this the HL flip-flop. This causes the low byte to be provided first and then the high byte (since the 8086 is little-endian).

The diagram below shows an example queue configuration with four bytes. The first two queue registers (Q0 and Q1) hold data. The read pointer and HL pointer indicate that the next prefetched byte will come from the low byte of Q0. The write pointer indicates that the next prefetched word will go into Q2.

A queue configuration with four bytes in the prefetch queue. Bytes in blue hold prefetched data.

The diagram below shows how the queue pointers can wrap around. In this configuration, one byte has been used from Q2 so the next byte will be Q2's high byte. Q0 holds the next prefetched word. The next word to be prefetched will be stored in Q1, as indicated by the write pointer.

A queue configuration with three bytes in the prefetch queue.

The relative positions of the write and read pointers indicate how much data is in the queue. If the write pointer is one position before the read pointer (modulo 3), the queue holds 3 or 4 bytes. If the write pointer is one position after the read pointer (modulo 3), the queue holds 1 or 2 bytes. But what about when the read pointer and write pointer indicate the same register? This can either indicate that the queue is empty or that the queue is full (5 or 6 bytes). To distinguish these cases, a flip-flop is set if the queue enters the empty state. This flip-flop generates a signal that Intel called MT (empty).

Another complication occurs if you jump to an odd address. Because of its 16-bit bus, the 8086 fetches the word starting at the preceding even address, which loads one usable byte and one byte that must be discarded. The 8086 detects this case with a handful of gates and sets the HL flip-flop high, so the unwanted low byte is skipped, as in the diagram above.
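Here is a simplified Python model of the queue bookkeeping described above: three word registers, the read and write pointers, the HL flip-flop, and the MT flag. It models the behavior, not the gate-level circuitry, and the class and method names are my own.

```python
class PrefetchQueue:
    """Behavioral sketch of the 8086's 3-word (6-byte) prefetch queue."""

    def __init__(self):
        self.regs = [0, 0, 0]   # three 16-bit queue registers
        self.read_ptr = 0       # register that supplies the next byte
        self.write_ptr = 0      # register that receives the next prefetched word
        self.hl = 0             # 0 = low byte next, 1 = high byte next
        self.mt = True          # queue-empty flag

    def flush(self, target_address):
        """Jump: discard the queue; an odd target skips the unwanted low byte."""
        self.read_ptr = self.write_ptr = 0
        self.hl = target_address & 1
        self.mt = True

    def write_word(self, word):
        """Store a prefetched word and advance the write pointer (mod 3)."""
        self.regs[self.write_ptr] = word & 0xFFFF
        self.write_ptr = (self.write_ptr + 1) % 3
        self.mt = False

    def read_byte(self):
        """Supply the next instruction byte, low byte first (little-endian).
        The caller (the loader) must block while mt is set."""
        byte = (self.regs[self.read_ptr] >> (8 * self.hl)) & 0xFF
        if self.hl:  # finished this word: advance and check for empty
            self.read_ptr = (self.read_ptr + 1) % 3
            self.mt = (self.read_ptr == self.write_ptr)
        self.hl ^= 1
        return byte
```

Note how the pointers count modulo 3: when they coincide, the MT flag is what distinguishes an empty queue from a full one, as described above.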

The diagram below zooms in on the prefetch and queue control circuitry on the die, with the main flip-flops and circuitry labeled. The lower half manages the queue, keeping track of the read and write positions and computing the queue length. The upper circuitry controls prefetch operations and interacts with the rest of the memory cycle circuitry.

The queue and prefetch circuitry on the die. The metal layer has been removed for the closeup to show the silicon of the underlying transistors.

Even though there is not a lot of circuitry involved (about a dozen flip-flops and associated logic gates), this circuitry occupies a substantial part of the 8086 die. (The relatively small amount of circuitry did not make this easy to reverse-engineer, however!) Compared to modern chips, the density of the 8086 is very low; you can almost see the flip-flops with the naked eye. This diagram only shows the circuitry directly involved in prefetching. Additional circuitry is scattered through the memory cycle control circuitry to deal with prefetching, and the queue registers take up a substantial part of the register file. Thus, prefetching was a moderately expensive feature for the 8086, as far as die area.

The loader

To decode and execute an instruction, the Execution Unit must get instruction bytes from the Bus Interface Unit, but this is not entirely straightforward. The main problem is that the queue can be empty, in which case instruction decoding must block until a byte is available from the queue. The second problem is that instruction decoding is relatively slow, so for maximum performance, the decoder needs a new byte before the current instruction is finished. A circuit called the "loader" solves these problems by providing synchronization between the prefetch queue and the instruction decoder. The loader uses a small state machine to efficiently fetch bytes from the queue at the right time and to provide timing signals to the decoder and microcode engine.

In more detail, as the loader requests the first two instruction bytes from the prefetch queue, it generates two timing signals that control the microcode execution. The FC (First Clock) indicates that the first instruction byte is available, while the SC (Second Clock) indicates the second instruction byte. Note that the First Clock and Second Clock are not necessarily consecutive clock cycles because the prefetch queue could be empty or contain just one byte, in which case the First Clock and/or Second Clock would be delayed. The instruction decoding circuitry and the microcode engine are controlled by the First Clock and Second Clock signals, so they remain synchronized with the bytes supplied by the prefetch queue.

At the end of a microcode sequence, the Run Next Instruction (RNI) micro-operation causes the loader to fetch the next machine instruction. However, fetching and decoding the next instruction is a bit slow so microcode execution would be blocked for a cycle. In many cases, this slowdown can be avoided: if the microcode knows that it is one micro-instruction away from finishing, it issues a Next-to-last (NXT) micro-operation so the loader can start loading the next instruction. This achieves a degree of pipelining in most cases; fetching the next instruction is overlapped with finishing the execution of the previous instruction.

The state machine for the 8086 "loader" circuit.
The 1BL signal indicates a 1-byte instruction implemented in logic rather than microcode.
From patent US4449184.

The state machine for the 8086 "loader" circuit. The 1BL signal indicates a 1-byte instruction implemented in logic rather than microcode. From patent US4449184.

The diagram above shows the state machine for the loader. I won't explain it in detail, but essentially it keeps track of whether it is waiting for a First Clock byte or a Second Clock byte, and if it is performing a fetch in advance (NXT) or at the end of an instruction (RNI). The state machine is implemented with two flip-flops to support its four states.
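As a rough sketch of the idea (the state and signal handling below are simplified from the patent's four-state diagram, and the names are mine), the loader blocks on an empty queue and emits FC and SC as each instruction byte becomes available:

```python
from enum import Enum, auto

class LoaderState(Enum):
    IDLE = auto()         # no instruction fetch in progress
    WAIT_FIRST = auto()   # waiting to pull the first instruction byte
    WAIT_SECOND = auto()  # waiting to pull the second instruction byte

def loader_clock(state, queue_has_byte, start_fetch):
    """One simulated clock of a simplified loader. start_fetch models RNI/NXT;
    returns (new_state, timing_signal_or_None)."""
    if state == LoaderState.IDLE:
        return (LoaderState.WAIT_FIRST, None) if start_fetch else (state, None)
    if state == LoaderState.WAIT_FIRST and queue_has_byte:
        return LoaderState.WAIT_SECOND, "FC"   # first instruction byte available
    if state == LoaderState.WAIT_SECOND and queue_has_byte:
        return LoaderState.IDLE, "SC"          # second instruction byte available
    return state, None                          # queue empty: stall
```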

Other memory accesses

The loader takes care of fetching an instruction that consists of an opcode byte and a Mod R/M (addressing mode) byte. However, many instructions have additional bytes or don't follow this format. For example, an opcode such as "ADD AX" can be followed by an 8- or 16-bit immediate value, adding that value to the AX register. Or a "move memory to AX" instruction can be followed by a 16-bit memory address. The microcode uses a separate mechanism for fetching these instruction bytes from the queue. Specifically, each micro-instruction contains a source register and a destination register that specify a data move. By specifying "Q" (the queue) as the source, a byte is fetched from the prefetch queue.

A third path is used for arbitrary memory reads and writes, such as when an instruction stores a register's contents to memory. In this case, the microcode puts the memory address in the IND (indirect) register. The microcode then issues a read or write micro-operation which causes the memory contents to be read into the OPR (operand register) or written from the OPR. (The IND and OPR registers are internal 8086 registers that are not visible to the programmer.) In the 8086, a memory cycle takes at least four clock cycles (called T1 through T4), including adding the segment register to compute the memory address. An "unaligned" memory access takes twice as long, though, because the 8086 has a word-based 16-bit data bus. Thus, if you try to access a word from an odd address, two memory accesses are required, one for the first byte and one for the second byte.5
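A back-of-the-envelope sketch of that bus-cycle cost, assuming the minimum four clocks (T1 through T4) per bus cycle:

```python
def word_access_clocks(address: int, clocks_per_bus_cycle: int = 4) -> int:
    """Minimum clocks for a 16-bit read or write on the 8086's word-wide bus."""
    if address & 1:
        # Unaligned: the two bytes fall in different bus words, so the
        # Bus Interface Unit runs two bus cycles, one per byte.
        return 2 * clocks_per_bus_cycle
    return clocks_per_bus_cycle

assert word_access_clocks(0x1000) == 4   # aligned word
assert word_access_clocks(0x1001) == 8   # word at an odd address
```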

As you can see, a memory access is a fairly complex operation. In the 8086, all these steps are done by hardware in the Bus Interface Unit, rather than being performed by microcode. (I'll discuss the complex memory control circuitry in detail in a future post.) After issuing a memory read or write, the microcode engine is blocked until the memory request completes.

Microcode instructions and the correction circuitry

The microcode interacts with prefetching in several ways. In addition to requesting a byte from the queue (as discussed above), microcode can perform three micro-instructions that involve prefetching: SUSP, FLUSH, and CORR. The SUSP (suspend) micro-instruction stops prefetching, typically before a change to execution flow. The FLUSH micro-instruction flushes the prefetch queue and resumes prefetching. To implement these, the prefetching circuitry has a flip-flop to keep track of the suspended state, and logic to reset the queue pointer counters to flush the queue.

The CORR (correct) micro-instruction corrects the Instruction Pointer to point to the next execution position. This is an interesting and more complicated micro-instruction. Like most processors, the 8086 has a program counter (PC) to keep track of what instruction to execute; the 8086 calls this the Instruction Pointer (IP). In the programmer's view, the Instruction Pointer points to the memory address of the next instruction to execute. However, in the hardware, the Instruction Pointer points to the next instruction to be fetched, which is generally several bytes after the next instruction to be executed.6

For the most part, this doesn't matter; the queue provides instructions in the order they were fetched and it doesn't matter if the Instruction Pointer runs ahead. However, there are a few cases where the "real" Instruction Pointer address is needed. For example, a relative jump instruction causes execution to jump to an address relative to the current instruction. When performing a subroutine call, the return address must be pushed on the stack. The correct Instruction Pointer value is also needed for an interrupt. Thus, the 8086 needs a mechanism to compute the real Instruction Pointer value from the value in the Instruction Pointer register.

The solution is the CORR micro-instruction, which corrects the Instruction Pointer value by subtracting the prefetch queue length, so the Instruction Pointer holds the "true" value. For instance, if there are 4 bytes in the queue, then the address in the Instruction Pointer register is four more than the desired Instruction Pointer address. The Bus Interface Unit performs this subtraction by using the addressing adder and a small table of constants called the Constant ROM.7

The diagram below zooms in on the Constant ROM, located next to the adder. The Constant ROM is implemented as a PLA (programmable logic array), a two-level structured arrangement of gates. The first level (bottom) selects the desired correction constant, while the second level (middle) generates the bits of the constant: three bits plus a sign bit. The necessary correction constant is selected based on the length of the queue in words, the HL pointer, and the empty (MT) flag.

The Constant ROM, highlighted on the die.

The Constant ROM is used for more than just address correction. For example, it is also used to increment the Instruction Pointer by 2 after a prefetch. Other constants are used for the 8086's string operations, which act on a block of memory. The index registers are incremented or decremented by 1 for bytes or 2 for words. When pushing a value onto the stack, the stack pointer is decremented, which uses the constant -2. Additional constants are required to increment and decrement the IND register when accessing words from unaligned (odd) addresses. These increment/decrement values are selected in the upper part of the Constant ROM. In total, the Constant ROM holds values from -6 to +2.
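In effect, CORR applies a small negative constant selected by the queue state. The toy version below works directly from the byte count; the real hardware derives the constant from the queue length in words, the HL flip-flop, and the MT flag, and applies it with the address adder.

```python
CORRECTION = {n: -n for n in range(7)}  # stand-in for the Constant ROM's correction entries

def corrected_ip(prefetch_ip: int, queue_bytes: int) -> int:
    """The programmer-visible IP: the prefetch address plus a negative constant
    selected by how many unused bytes (0-6) sit in the queue."""
    return (prefetch_ip + CORRECTION[queue_bytes]) & 0xFFFF

# With 4 bytes queued, the IP register runs 4 bytes ahead of the "true" IP.
assert corrected_ip(0x0104, 4) == 0x0100
```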

Policy

There are some "policy" decisions on prefetching, and it's interesting to see how the 8086 implements them. Prefetching is not free: there is a tradeoff when performing a prefetch between saving time later versus delaying memory accesses from an executing instruction. Moreover, if a jump operation takes place, the prefetch queue is discarded and the memory cycles were wasted. Thus, the length of the queue is an "extremely tricky design problem, because performance can deteriorate if the queue is too long as well as if it is too short."3

Intel performed simulations to determine the best queue length. A 4-byte queue provided a large benefit, while a 6-byte queue (which they chose) was slightly better. The designers were surprised to find that performance flattened out after that; they expected a much longer queue would be necessary. The 8088 processor has only a 4-byte prefetch queue because its 8-bit bus changes the tradeoffs.8

The basic prefetch policy is that if a memory access and a prefetch are requested at the same time, the memory access "wins", since it is guaranteed to be useful while the prefetch is just speculative. If the queue holds 0 to 2 bytes, prefetch happens during the next free memory cycle. If the queue holds 5 or 6 bytes, no prefetch can happen, since prefetch happens a word at a time. However, if the queue holds 3 or 4 bytes, prefetch is delayed for two clock cycles, which is an interesting choice. This gives an instruction more opportunities to perform a memory operation without being delayed by a prefetch. There is a tradeoff: delaying the prefetch may waste two cycles of memory bandwidth, while performing the prefetch might waste four cycles. The motivation for this delay is that the last two bytes in the queue are less valuable because they are more likely to be discarded.
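That scheduling rule can be summarized in a few lines; this is a behavioral restatement of the policy above, not the actual control logic:

```python
def prefetch_decision(queue_bytes: int, eu_needs_bus: bool, suspended: bool) -> str:
    """Behavioral sketch of the 8086's prefetch scheduling policy."""
    if suspended or eu_needs_bus:
        return "no prefetch"       # a real memory access (or SUSP) always wins
    if queue_bytes >= 5:
        return "no prefetch"       # no room: prefetch loads a whole word
    if queue_bytes >= 3:
        return "delay two clocks"  # the last two bytes are the least valuable
    return "prefetch now"          # 0-2 bytes: use the next free memory cycle
```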

Another policy decision is how to handle a change in execution flow, such as a jump or subroutine call. The 8086 simply discards the prefetch queue and starts fetching from the new address. The 8086 designers considered better ways of handling jumps, but they weren't practical to implement at the time. The queue is discarded even if the target instructions are already in it (e.g. when jumping forward a couple of bytes). There is also no branch prediction; prefetching proceeds linearly regardless of branch instructions.

The 8086 does nothing to ensure consistency between the prefetch queue and memory if a prefetched instruction is modified in memory.9 In this case, the "stale" instruction in the queue is executed. This situation may seem contrived, but self-modifying code, where a program changes its own instructions, used to be fairly popular.10

Prefetching and the 8087 coprocessor

One feature of the 8086 microprocessor is that it supports coprocessors such as the 8087 floating point chip.11 The 8087 implements high-performance floating-point computation, performing arithmetic and transcendental computations up to 100 times faster than the 8086. The 8087 gets instructions in an interesting fashion, executing floating-point instructions from the 8086's instruction stream. Specifically, an "ESCAPE" opcode indicates an instruction that is performed by the 8087 rather than the 8086. However, prefetching adds a lot of complexity to the coprocessor because the 8087 monitors the bus to determine when it should execute an instruction. With prefetching, the instruction on the bus doesn't match the instruction being executed. An instruction may be executed many cycles after it was fetched over the bus. A prefetched instruction may even be discarded and never executed.

To solve this problem, the 8087 manages its own copy of the prefetch queue to determine when the 8086 would be executing a floating-point instruction. The 8087 watches the bus to see when instructions are prefetched. The 8086 provides queue status signals (QS0 and QS1) to indicate when it takes bytes from the queue or flushes the queue. These signals allow the 8087 coprocessor to keep track of the 8086's queue state so it can tell what instruction the 8086 is executing. In other words, the 8086 doesn't tell the 8087 coprocessor what to do; instead, the two chips process the instruction stream in parallel. Another complication is the 8087 coprocessor can be used with the 8088 processor chip, which has a smaller 4-byte queue. Thus, the 8087 coprocessor must detect whether it is connected to an 8086 or an 8088 and maintain its queue appropriately.
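Conceptually, the 8087 keeps a shadow copy of the queue by watching prefetched words on the bus and the queue-status pins. The sketch below shows only the bookkeeping idea; the event names are mine rather than the actual QS1/QS0 encodings.

```python
class ShadowQueue:
    """Conceptual model of a coprocessor mirroring the CPU's prefetch queue."""

    def __init__(self):
        self.queue = []              # bytes prefetched by the CPU but not yet used

    def on_bus_prefetch(self, word):
        """A prefetch seen on the data bus: capture both bytes."""
        self.queue += [word & 0xFF, (word >> 8) & 0xFF]

    def on_queue_status(self, event):
        """Queue-status event (decoded from QS1/QS0): mirror the CPU's action."""
        if event == "flush":          # the CPU jumped and discarded its queue
            self.queue.clear()
            return None
        if event == "byte_taken":     # the CPU consumed a byte from its queue
            return self.queue.pop(0)  # the 8087 checks it for an ESCAPE opcode
        return None
```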

Brief history

Caching and prefetching were used in mainframe computers dating back to the 1960s. For instance, the IBM System/360 Model 91 (1966) had a cache with prefetching. Minicomputers such as the VAX 11/780 (1977) later used caching and prefetching. However, these features took a while to trickle down to microprocessors. The Motorola 68000 (1980) had a 4-byte prefetch queue. As far as I can tell, the 8086 was the first microprocessor with a prefetch queue.

We can view the 8086 as a stepping-stone towards the large caches first used externally in the 80386 and internally in the 486. The 80186 and 80286 kept the 6-byte prefetch buffer size of the 8086. The 80386 has a 16-byte prefetch buffer, although apparently due to a bug it was shrunk to 12 bytes in later revisions. As well as the prefetch queue, the 80386 supported an external cache.

Early microprocessors such as the 6502 or Z80 could fetch the next instruction while they were finishing the previous instruction. This minimal two-stage pipelining improved performance, but was much more limited than 8086-style prefetching. An Intel study found that this simple overlap provides a 35% performance increase with 15% more hardware, while implementing prefetching provided an additional 11% gain with 14% more hardware.3 This illustrates how the increasing transistor counts from Moore's law opened up new opportunities to improve performance. But it also shows diminishing returns as performance increases become smaller and more expensive.

Conclusions

Well, this was supposed to be a quick post about the prefetch queue, but the topic turned out to have a lot more complexity than I expected. A six-byte prefetch queue may seem like a simple feature to add to a processor, but it affects many parts of the system. Prefetching is tied closely to the memory access circuitry, of course, but it also required a Constant ROM to handle the difference between the execution address and the prefetch address. Prefetching also impacted the microcode, with three micro-instructions to support prefetching.

Prefetching also illustrates some of the ways that each feature and corner case of a processor like the 8086 leads to more complexity. For instance, byte-aligned (rather than word-aligned) instructions require a mechanism to fetch bytes as well as words. Supporting multiple instruction formats (1-byte opcodes, an opcode byte followed by a Mod R/M byte, multi-byte instructions) resulted in the loader state machine. The segment registers required an adder to compute the memory address for every access. Looking at the 8086 internals makes it easier to understand the motivation behind RISC processors, discarding the complexity and corner cases to create a simpler but faster processor.

I plan to continue reverse-engineering the 8086 die so follow me on Twitter @kenshirriff or RSS for updates. I've also started experimenting with Mastodon recently as @oldbytes.space@kenshirriff. If you're interested in the 8086, I wrote about the 8086 die, its die shrink process and the 8086 registers earlier.

Notes and references

  1. Steve Furber, co-creator of the ARM chip, mentions that "The first integrated CPUs were coincidentally quite well matched to semiconductor memory speeds, and were therefore built without caches. This can now be seen as a temporary aberration." See VLSI Risc Architecture and Organization p77. To make this concrete, the Apple II (1977) used a MOS 6502 processor running at about 1 megahertz while its 4116 DRAM chips could perform an access in 250 nanoseconds (4 times the clock speed). The 8086 processor ran at 5-10 MHz which meant that 250 ns DRAM chips were slower than the clock speed. Nowadays, processors run at 4 GHz but DRAM access speed is about 50 nanoseconds (1/200 the clock speed). 

  2. Modern processors use caches to improve memory performance; caches are often megabytes in size. Accessing data from a cache is faster than accessing it from main memory, but the tradeoff is that caches are smaller. The 8086's prefetch queue is similar to a cache in some ways, but there are some key differences. First, the prefetch queue is strictly sequential. If you jump ahead two bytes, even if the prefetch queue has those instruction bytes, the processor can't use them. Second, the prefetch queue can't reuse bytes. If you have a 6-byte loop, even though all the code fits in the prefetch queue, it will be reloaded every time. Third, the prefetch queue doesn't provide any consistency. If you modify an instruction in memory a couple of bytes ahead of the PC, the 8086 will run the old instruction if it's in the queue. 

  3. The design decisions for the 8086 prefetch cache (and many other aspects of the chip) are described in: J. McKevitt and J. Bayliss, "New options from big chips," in IEEE Spectrum, vol. 16, no. 3, pp. 28-34, March 1979, doi: 10.1109/MSPEC.1979.6367944. 

  4. A detailed block diagram of the 8086 is provided in the patent. Conveniently, the layout of the diagram is close to the physical layout of the chip.

    Detailed block diagram of the 8086, based on patent US4449184. I have modified the register names to match the common naming.

    I won't discuss this block diagram in detail here, but I'll point out the Q (queue) control logic in the upper center, with its associated read and write pointers. The Q bus connects the queue to various parts of the instruction decoding circuitry.

     

  5. Supporting misaligned memory accesses (i.e. accessing a word at an odd address) adds complexity to the 8086. It's not surprising that many RISC chips prohibit unaligned accesses. On SPARC, for instance, an unaligned access fails with a "bus error", which the Sun programmers out there probably recognize. ARM processors before ARMv7 didn't support unaligned accesses. RISC-V supports misaligned data accesses but not misaligned instructions. 

  6. The 8086 patent describes how the program counter in the 8086 does not hold the "real" value:

    PC is not a real or true program counter in that it does not, nor does any other register within CPU, maintain the actual execution point at any time. PC actually points to the next byte to be input into queue. The real program counter is calculated by instruction whenever a relative jump or call is required by subtracting the number of accessed instructions still remaining unused in queue from PC.
     

  7. The CORR correction operation adds more complexity to the system than you might expect, with synchronization between the Bus Interface Unit and the Execution Unit. Because the correction computation uses the addressing adder, the correction operation must be synchronized with memory accesses that also use the adder. To accomplish this, the Bus Interface Unit waits until any memory operation is finished and then generates two clock cycles of "fake" memory operation, keeping the adder free for the CORR instruction. As a result, the memory control circuitry needs logic to implement this memory cycle. Meanwhile, the microcode engine is stopped until the CORR instruction completes, requiring synchronization circuitry. 

  8. The 8088 is famous as the processor in the original IBM PC. The 8088 processor is essentially the same as the 8086 except that it has an 8-bit data bus instead of a 16-bit data bus, so it performs memory accesses a byte at a time instead of a word at a time. Internally, the 8088 is nearly identical to the 8086 but there are a few differences in microcode and in the bus circuitry. The most visible difference is that the 8088 has a 4-byte prefetch queue instead of a 6-byte prefetch queue. Simulations showed that a 4-byte queue was sufficient for the 8088. Because it fetches one byte at a time instead of two bytes, the 8088 fills the prefetch queue more slowly and wouldn't get much benefit from the larger queue. I haven't looked at the 8088's prefetch circuitry in detail, so I can't describe it exactly. 

  9. At some point, Intel implemented consistency between cached instructions and memory. This ensures that self-modifying code will run the latest version of an instruction rather than a stale instruction in the cache. I couldn't determine exactly when this was implemented; various sources say the 486, the Pentium, or the Pentium Pro. (If you have a definitive answer, please let me know.) 

  10. Self-modifying code can be used as a way to distinguish between the 8086 and 8088 chips in software. Since the 8086 has a 6-byte queue and the 8088 has a 4-byte queue, you can create self-modifying code that will run a prefetched instruction on the 8086 but run the modified instruction on the 8088. 

  11. Although the 8087 is the most well-known coprocessor for the 8086, it was not the only coprocessor. The Intel 8089 input/output coprocessor provided mainframe-style I/O channels, offloading I/O processing from the 8086. More than just a DMA engine, the 8089 was a separate processor with its own instruction set. Unlike the 8087, the 8089 didn't take instructions from the 8086's instruction stream so it didn't interact with prefetching; instead, the 8086 sent a Channel Attention signal to the 8089 and the 8089 read instructions from shared memory. The 8089 was complex and expensive and wasn't very popular. The Intel 82586 Ethernet coprocessor used a similar Channel Attention scheme. 

Simulating the IBM 360/50 mainframe from its microcode

The IBM System/360 was a groundbreaking family of mainframe computers announced on April 7, 1964. System/360 was an extremely risky "bet-the-company" project for IBM, costing over $5 billion, but the System/360 ended up as a huge success, setting the direction of the computer industry for decades. The S/360 architecture was so successful that it is still supported by IBM's latest mainframes, almost 60 years later. I'm developing a microcode-level simulator1 for the IBM System/360 Model 50 (link to the simulator); this blog post provides background to understand the Model 50 and the simulator.

Screenshot of the simulator running in a browser.

The radical decision behind System/360 was to use a single architecture for the entire product line of computers.3 The name symbolized “360 degrees to cover the entire circle of possible uses.” Using a common architecture seems obvious now (e.g. x86), but prior to the System/360, IBM (like other computer manufacturers) produced multiple computers with entirely incompatible architectures.

Internally, the different System/360 models had completely different implementations to support a wide range of cost and performance levels: the fastest model was over 1000 times as powerful as the slowest. Low-end models used simple hardware and an 8-bit datapath while advanced models used wide datapaths, fast semiconductor registers, out-of-order instruction execution, and caches.2 Despite these internal differences, the models all looked the same to the programmer.

Architecture of System/3604

You might expect a computer architecture from the 1960s to be simple, but System/360 is remarkably complex, partly because it merged six computer families into one architecture. It is a 32-bit architecture that supports many datatypes. As well as 32-bit integers and half words, it supports decimal arithmetic on numbers up to 31 digits long. Floating-point arithmetic supports short (32 bit), long (64 bit), or extended (128 bit) values. The processor also supports character strings up to 256 bytes long.

The System/360 instruction set has about 100 different instructions and several addressing modes. Some of these instructions are straightforward arithmetic, logic, or control operations. Other instructions are more complex, such as the "character move" that copies up to 256 characters in memory, or the floating-point instructions.

One of the most complex instructions is "edit", which formats a sequence of decimal digits for printing, for example inserting commas, a minus sign, or decimal point; removing leading zeroes, or filling leading spaces with characters. The number 1234567 could be "edited" into the string "$***12,345.67" for printing on a check. Keep in mind that this is a single instruction, not a library function like printf.
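For a sense of what the instruction accomplishes, here is a few-line Python imitation of the result. It only mimics the output shown above; the real edit instruction is driven by a pattern of fill characters and digit selectors, which this sketch does not model.

```python
def edit_dollars(cents: int, width: int = 13, fill: str = "*") -> str:
    """Mimic the result of 'editing' a decimal amount (in cents) for printing."""
    body = f"{cents // 100:,}.{cents % 100:02d}"   # insert commas and a decimal point
    return "$" + body.rjust(width - 1, fill)       # fill the leading positions

assert edit_dollars(1234567) == "$***12,345.67"
```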

IBM System/360 Model 50 control panel. The dataflow diagram in the upper right illustrates the system's internal design. Photo by Sandstein, CC BY-SA 3.0.

The System/360 architecture also included I/O, defining IBM's "channel" architecture. A channel is a programmable I/O subsystem with its own instruction set. On larger systems, the channel was an independent unit connected to the computer. But smaller systems such as the Model 50 used the same microcode engine to run CPU programs and channel programs.

The point is that System/360 has a large and complex instruction set. A single instruction could result in hundreds of memory accesses and processing steps. The dense instruction set helped programmers to cram programs into the extremely limited core memory of the 1960s. However, the complex instruction set was a problem for the computer designer, who had to implement the complex circuitry to carry out these instructions. The solution was microcode.

The System/360 Model 50 in a datacenter. The console and processor are at the left. An IBM 1442 card reader/punch is behind the IBM 1052 printer-keyboard that the operator is using. At the back, another operator is loading a tape onto an IBM 2401 tape drive. Photo from IBM.

Microcode

One of the hardest parts of computer design is creating the control logic that tells each part of the processor how to carry out each instruction. In 1951, Maurice Wilkes came up with the idea of microcode: instead of building the control circuitry from complex logic gates, the control logic could be replaced with code (i.e., microcode) stored in a special memory called a control store. To execute an instruction, the computer internally executes several simpler microinstructions, specified by the microcode. Microcode turns the processor's control logic into a programming task instead of a logic design task.5

Microcode played a key role in the success of the System/360, helping IBM produce a line of computers with the same instruction set architecture but widely different implementations. It also allowed a processor to support different instruction sets; System/360 machines could be backward compatible with customers' older machines6 so customers could keep their existing software. For these reasons, the System/360 computers used microcode unless there was a compelling reason not to.7

Another advantage of microcode is that it provides an easy way to fix design flaws and bugs in the field. Instead of modifying the hardware, a service engineer could replace the microcode with a new version. The photo below shows a copper sheet with microcode etched into it for the Model 50.

A replaceable BCROS sheet, holding 17,600 bits. Dimensions are 505mm × 211mm. Click for a high-resolution version. Photo courtesy of Chris Bigos.

Microcode can be implemented in a variety of ways. Many computers use "vertical microcode", where a microcode instruction is similar to a machine instruction, just less complicated. The System/360 designs, on the other hand, used "horizontal microcode", with complex, wide instructions of up to 100 bits, depending on the model. These microinstructions were more like a collection of fields, each controlling low-level signals. This improved performance since multiple parts of the processor could be controlled in parallel.

Hardware of the Model 508

The Model 50 was roughly in the middle of the System/360 lineup, providing a powerful mainframe that could be used by a medium-sized business or university department. The Model 50 typically rented for about $18,000 - $32,000 per month (equivalent to $120,000-$200,000 a month in current dollars).

IBM S/360 Model 50. The console was attached to the main frame, about 5 feet deep. The storage frame and power frame are the black cabinets at the back. Photo from Pinterest.

The Model 50 occupied three large cabinets, each about 5 feet long, 2 feet wide, and 6 feet tall, and weighing nearly a ton.9 The main frame, behind the console, contained the CPU, I/O channel circuitry, and the microcode storage. Behind this, the power cabinet contained the computer's power supplies. To the left, the cabinet at the back contained the main storage: one or two core memory modules, each with 128 kilobytes of memory. (I wrote in detail about the Model 50's core memory earlier.) The computer's cables ran under a raised floor to the I/O devices, which typically included tape drives, a card reader, printers, disk drives, I/O controllers, and so forth.

This diagram shows the three frames that made up the basic S/360 Model 50. Source: Model 50 Maintenance Manual page 138.

The System/360 processors weren't implemented with integrated circuits, but with SLT (Solid Logic Technology) modules, hybrid modules that contain a few transistors, diodes, and resistors. A typical module implemented a logic gate, so it takes many circuit boards full of modules to construct the processor.

A logic board using SLT modules. Each square metal can is a module.

Like most computers of the 1960s, the Model 50 used magnetic core memory, with a tiny ferrite ring to store each bit. The photo below shows a core plane that stores 32768 bits (along with 512 bits for I/O). A stack of 18 planes formed a 64-kilobyte memory module, with two parity bits.10

A Model 50 core plane is arranged as a grid of cores. The Y lines run horizontally. X and sense/inhibit lines run vertically. The sense/inhibit lines form loops at the top and bottom. Each of the four vertical pairs of blocks has separate sense/inhibit lines. Each core plane was about 10¾ × 6¾ × ⅛ inches.

The Model 50's internal architecture

To the programmer, all processors within System/360 look the same; internal circuitry, however, may be entirely different.

It's important to keep in mind that the internal architecture of the Model 50 is very different from the architecture that the programmer sees.11 In particular, the processor's internal registers are invisible to the programmer. The programmer instead sees 16 general-purpose registers and 4 floating-point registers, but to the processor these are part of the 64-word local store, a small high-speed core memory.

The diagram below shows the complex data flow through the computer.12 The black boxes are internal registers; the processor has a surprisingly large number of registers, used for a variety of purposes. The internal components are connected by buses. Most of the internal communication is over the 32-bit buses, shown in black. The 8-bit "mover" bus is shown in gray.

This diagram shows the data flow through the IBM 360/50 and appears in the upper-right corner of the console. I drew this version since I couldn't find a clear photo of it.

The heart of the computer is the 32-bit adder, which performs addition. For subtraction, the argument is complemented by the True/Complement circuit (TC). The adder has an associated shifter to perform bit-shifts; this is especially important for multiplication, division, and floating-point calculations. Operating in parallel with the adder is the "mover", which operates on bytes. It can extract a byte from a 32-bit word, as well as manipulating 4-bit pieces of the byte. The mover also performs Boolean operations (AND, OR, XOR). (Unlike most processors, the Model 50 separates arithmetic and logical operations, instead of having an ALU perform both.)
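The split between the word-wide arithmetic path and the byte-wide mover can be sketched as two separate helpers. This is purely illustrative; in the real machine these units are driven by microinstruction fields, and the shifter and true/complement details are simplified.

```python
MASK32 = 0xFFFFFFFF

def adder_path(a: int, b: int, complement: bool = False, shift_left: int = 0) -> int:
    """32-bit adder with a true/complement input and an associated shifter."""
    if complement:
        b = (~b + 1) & MASK32        # two's complement, so a + (-b) subtracts
    return ((a + b) << shift_left) & MASK32

def mover_path(word: int, byte_index: int, op: str = "pass", other: int = 0) -> int:
    """8-bit mover: pick one byte out of a 32-bit word, optionally AND/OR/XOR it."""
    byte = (word >> (8 * byte_index)) & 0xFF
    return {"pass": byte, "and": byte & other,
            "or": byte | other, "xor": byte ^ other}[op]

assert adder_path(7, 3, complement=True) == 4    # 7 - 3
assert mover_path(0x11223344, 2) == 0x22         # extract one byte
```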

The computer's main core-memory storage is on the left. To access memory, an address is put in the Storage Address Register (SAR). Data is then read or written through the Storage Data Register (SDR). To the left of main storage is the Instruction Address Register (the Program Counter or PC in modern terms). At the top is the Local Store, 64 words of high-speed core memory that holds the programmer's registers as well as some internal storage. The local store is accessed through the Local Store Address Register (LSAR).

At the right are the I/O channels: the low-speed Multiplexor Channel and the high-speed Selector Channel. You can think of these as DMA (direct memory access) paths for I/O. The multiplexor channel communicates over an 8-bit bus through the mover, while the selector channel communicates over a 32-bit bus. Although the channels are conceptually separate from the processor, the channels use the same buses, circuitry, and microcode engine as the processor. This limits I/O performance compared to more advanced System/360 models that have independent circuitry for the channels.

An example of the microcode

As you can see, the processor has many registers and functional units. The microcode needs to control these components to carry out program instructions. The microcode architecture is very complex and takes over 100 pages to explain thoroughly,15 so I'm only able to scratch the surface here. Each microinstruction is 90 bits long and performs multiple tasks. In the documentation, IBM used an 11-line block to represent each microinstruction, showing all the activities that are taking place in parallel.

A sample microinstruction is shown below, part of the microcode that implements an add instruction. At this point, earlier microinstructions have fetched and decoded the instruction and put the arguments into the R and L registers. This microinstruction performs the actual 32-bit addition, but there's a lot more happening than just the addition.

One microinstruction, part of the integer addition code. This microinstruction is at micro-address 0220.

Starting with the line "R+L→R" (red), this indicates that the ALU is taking inputs from registers R and L, and the result is going into the R register. In other words, the two arguments are added. The result R is stored into the desired programmer-visible register in local storage (blue). The processor registers FN and J select the address in local storage. Meanwhile, the SETCRALG line sets the Condition code register based on the sign (i.e. "algebraic" value) of the result, indicating if the result is positive, negative, or zero.

The line "BC⩝C" indicates that signed overflow is detected and used as the carry flag14 while CAR (yellow) indicates the microcode branches on this carry (overflow) value. Thus, the microcode will take one path if the addition was valid and a second error path if overflow occurred. A microinstruction can "emit" an arbitrary 4-bit value (green) which can be used in a variety of ways. In this case, the binary value 1000 is emitted, fed into the W register, and then the M register, for use by the next microinstruction. As you can see, the CPU performs many activities in parallel for one microinstruction, which increases the computer's performance.
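Expressed as code, the parallel activities of this one microinstruction look roughly like the sketch below. The field interpretation follows the description above, but the cpu object, the local-store indexing by (FN, J), and the branch targets are simplifications of mine, and the real hardware does all of this within a single machine cycle.

```python
from types import SimpleNamespace

def micro_add_step(cpu, next_addr, overflow_addr):
    """R+L→R, write back to local store, SETCRALG, emit 1000, branch on carry."""
    result = (cpu.R + cpu.L) & 0xFFFFFFFF
    # Signed overflow: both inputs share a sign bit that the result lacks.
    overflow = bool((cpu.R ^ result) & (cpu.L ^ result) & 0x80000000)
    cpu.R = result                                    # R+L -> R
    cpu.local_store[(cpu.FN, cpu.J)] = result         # programmer-visible register
    # SETCRALG: condition code from the "algebraic" value of the result.
    cpu.condition_code = 0 if result == 0 else (1 if result & 0x80000000 else 2)
    cpu.W = 0b1000                                    # emitted value, headed for M
    return overflow_addr if overflow else next_addr   # CAR: branch on carry/overflow

cpu = SimpleNamespace(R=2, L=3, FN=0, J=5, local_store={}, condition_code=0, W=0)
assert micro_add_step(cpu, "continue", "overflow-path") == "continue" and cpu.R == 5
```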

All the activities of a microinstruction are encoded into a 90-bit word consisting of 28 fields.13 The microinstruction discussed above (micro-address 0220) is highlighted in the documentation below. A single microinstruction is very complex, which is why it takes an 11-line block of text to represent it.

Part of the microcode listing. The previously-discussed microinstruction is highlighted. Note that the micro-address 0220 matches the address in the upper-left corner of the microinstruction diagram.

The processor documentation contains hundreds of pages of microcode;16 one page of the floating-point multiply code is below. Each box is one microinstruction, and the lines between them indicate the complex control paths. I'm not going to explain this microcode,17 but I wanted to show its complexity.

Part of the floating-point multiply microcode. (Click for a larger view.) From ALD vol 18.

The console

The discussion above has shown the complex internal architecture of the Model 50. The numerous lights and controls on the console19 provide a view into this internal state. There were three main uses for the console. The first use was basic "operator control" tasks such as turning the system on, booting it, or powering it off, using the controls in the lower section of the console. These controls were consistent across the S/360 line and were usually the only controls the operator needed. The three hexadecimal dials in the lower right selected the I/O unit that held the boot software. Once the system had booted, the operator generally typed commands into the system rather than using the console.

Control panel of the IBM System/360 Model 50. This panel has marginal check controls for auxiliary storage in the upper right, replacing the dataflow diagram.

The second console function was "operator intervention": program debugging tasks such as examining and modifying memory or registers and setting breakpoints. The lights and toggle switches in the lower half of the console were used for operator intervention. The operator could enter a 24-bit address using the row of 24 toggle switches, and enter a 32-bit data value using the row of 32 toggle switches above. The lights allowed the contents of memory to be examined. With other switches, the operator could set a breakpoint, single-step through a program, and perform other debugging operations.

The third console function was system maintenance and repair performed by an IBM customer engineer. The customer engineering displays took up the top half of the console and provided detailed access to the computer's complex internal state. To save space, the Model 50 had four roller knobs on the right side, with 8 positions for each knob. Each knob position selected a different function for the row of 36 lights (32 bits plus parity). The legends above the lights rotate with the knobs, showing the meaning of each light. For example, one position would display the L register, while another position would display the current microinstruction. In the photo below, the upper roller and lights are displaying part of the microcode currently being executed (ROS = Read Only Store). The roller below shows some of the internal registers and counters.

Closeup of two rollers and the associated lights.

Finally, the voltmeter and voltage control knobs in the upper left of the console were used by an IBM customer engineer for "marginal checking". By raising and lowering the voltage levels, borderline components could be detected and replaced before they caused problems.

The simulator

The simulator is at righto.com/360 and the code is on Github. I implemented the simulator in JavaScript so it can run in a browser. It runs a sample program by executing the Model 50's microcode, simulating each microinstruction and the hardware. Each microinstruction is displayed graphically, along with the current instruction, the registers, the local storage, and core memory. It displays the console lights accurately based on the internal state, on a zoomable virtual console. Each row of lights can display 8 different elements, which you can change by clicking on a roller. You can also step through the microcode, one microinstruction at a time.

This simulator is still under development, so don't expect it to work perfectly. I haven't implemented the toggle switches yet, so you can't enter a program from the console. I also need to implement the I/O system, which has its own registers and a different microcode format.

To build the simulator, I extracted the binary microcode from the listings using a custom OCR tool. I implemented the hundreds of micro-operations, which were tricky to get correct. While most micro-operations are simple operations such as moving a register to the bus, some microinstructions are much more complex, especially for floating-point operations.20 Another complication is that a microinstruction performs many tasks in parallel and it was hard to determine the exact order in which to perform them.

My eventual goal with the simulator is to move it into the physical world. Specifically, I plan to drive the lights on CuriousMarc's Model 50 control panel to make the panel operate accurately. We also plan to hook up his IBM tape drives and card reader so we can have all the pieces of a Model 50 mainframe working together, except for the processor itself. I plan to port the simulator to C so I can run it in a microcontroller to drive the physical console. An FPGA implementation is another possibility; this would provide the maximum speed, but would be harder to implement.

I announce my latest blog posts on Twitter, so follow me @kenshirriff for updates and future articles. I also have an RSS feed. Thanks to Richard Cornwell for discussion and data.

Notes and references

  1. My simulator is not particularly useful unless you really care about the microcode in the Model 50. If you want to run software on a simulated System/360, you probably want to use the Hercules system

  2. I'll briefly summarize some of the different implementations used in System/360 computers.

    The low-end Model 30 uses an 8-bit bus and ALU, so 32-bit operations take four steps. It uses 60-bit microcode.

    The Model 40 also has an 8-bit bus and ALU, but it has 16-bit registers and a 16-bit bus to memory, improving the performance. It has 60-bit microcode.

    The Model 50 (discussed in this blog post) has 32-bit registers, memory bus, and adder. It also has the 8-bit mover that can operate in parallel with the adder.

    The Model 65 has a 64-bit bus and multiple adders (60-bit and 8-bit) that allow a floating-point fraction and exponent to be processed in parallel. It also has an 8-byte instruction buffer and external channels. It uses 100-bit microcode.

    The Model 75 has a 64-bit main adder, 8-bit exponent adder, 8-bit decimal adder, and a 24-bit addressing adder. It overlaps instruction fetching and execution, with 16 bytes of instruction prefetching and 8 bytes of data prefetching.

    The high-end Model 91 has an advanced superscalar architecture with out-of-order execution, instruction pipelining, and multiple arithmetic execution units. Higher models support memory interleaving for faster access: 2-way on the Model 65 up to 16-way on the Model 195.

    The models 44, 75, 91 and above used hardwired control instead of microcode to squeeze out more performance.

    As you can see, the System/360 line has a wide variety of implementations. At the low end, the hardware is kept to a minimum to reduce costs, while at the high end, more hardware boosts performance, with wider datapaths and multiple functional units providing parallelism. 

  3. The System/360 line didn't completely meet the goal of a compatible architecture. IBM split out the business and scientific markets on the low-end machines by marketing subsets of the instruction set. The basic instructions were provided in the "standard" instruction set. On top of this, decimal instructions (for business) were in the "commercial" instruction set and floating-point was in the "scientific" instruction set. The "universal" instruction set provided all these instructions plus storage protection (i.e. memory protection between programs). Additionally, cost-cutting on the low-end Model 20 made it incompatible with the S/360 architecture, and the Model 44 was somewhat incompatible to improve performance on scientific applications. 

  4. IBM defined the System/360 architecture in great detail in a document called the IBM System/360 Principles of Operation. It describes not only the instruction set but also the datatypes, the input/output model, the interrupt model, and even the basic structure of the system control panel. To learn more about System/360, see A Programmer's Introduction to the IBM System/360 Architecture, Instructions, and Assembler Language. A bunch of assembly examples are at rosettacode.

  5. The primary benefit of microcode for IBM was economic. As described in Microprogram Control for System/360, the cost of a non-microcoded processor is roughly linear in the size of the instruction set. However, a microcoded system has a roughly fixed cost, with a small overhead for additional instructions. Thus, as instruction sets get more complex (as in System/360), there is a crossover point where microcode is more efficient. This is especially the case for smaller systems where the base cost is lower. The lower marginal cost also makes emulating other systems more feasible. The IBM System/360 was one of the first commercial computers to make extensive use of microcode. 
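
    A toy cost model makes the crossover concrete; the numbers below are invented purely for illustration and are not IBM's figures.

        # Hardwired control: cost grows roughly linearly with the number of instructions.
        # Microcode: a large fixed cost for the control store, plus a small per-instruction cost.
        def hardwired_cost(n_instructions, per_instr=100):
            return n_instructions * per_instr

        def microcoded_cost(n_instructions, control_store=5000, per_instr=20):
            return control_store + n_instructions * per_instr

        # With these made-up numbers, microcode wins once the instruction set exceeds
        # 5000 / (100 - 20) = 62.5 instructions, so a rich instruction set favors microcode.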

  6. Various System/360 machines supported compatibility features with earlier IBM computers, including the 1401, 1440, 1620, 7070, 7074, 7080, 709, 7090, and 7094. Generally, a smaller System/360 machine could replace a smaller IBM computer such as the 1401, while a larger mainframe such as the 7090 needed to be replaced by a larger System/360 computer such as the Model 65.

  7. A few System/360 models did not use microcode. The Model 44 was designed as a high-performance computer for scientific applications, so it used hardwired control. The Model 85 was partially microcoded, while the Models 75 and 91 were completely hardwired. 

  8. The book IBM's 360 and Early 370 Systems describes the history of the S/360 in great detail. IBM lists data on each model, including dates, data flow width, cycle time, storage, and microcode size. Another list with model details is here. The article System/360 and Beyond has lots of info. A list of 360 models and brief descriptions is here. For information on the Model 50 specifically, see the Functional Characteristics manual, Field Engineering manuals, Wikipedia, photos here and here, and a CuriousMarc video.

  9. For detailed dimensions of the System/360 components, see the Physical Planning Manual. For more memory, another 1500-pound frame could be added to the Model 50, boosting it from 256 kilobytes of memory to 512 kilobytes. Up to four Large Capacity Storage units (IBM 2361) could be added, each providing two more megabytes.

  10. I wrote in detail about the Model 50's core memory system here.

  11. The quote is from the System/360 Model 40 comprehensive introduction.

  12. The Model 50 Field Engineering Diagram Manual contains the detailed data flow diagram below. This diagram corresponds to the diagram discussed earlier, but provides much more detail. In particular, it shows the exact bit widths of the various data paths and registers.

    The detailed data flow diagram. Click for a larger version.


  13. The table below shows how a microinstruction is encoded into a 90-bit word.

    Bits   Name  Meaning
    0      P     Parity
    1-3    LU    Mover input left side
    4-5    MV    Mover input right side
    6-11   ZP    ROAR address (Read Only storage Address Register)
    12-15  ZF    ROAR branch control
    16-18  ZN    Address control field
    19-23  TR    Adder control
    24           Unused
    25-27  WS    Local store address control
    28-30  SF    Local store functions
    31     P     Parity
    32-34  IV    Invalid digit test
    35-39  AL    Adder latch gating
    40-43  WM    Mover destination
    44-45  UP    Byte counter function
    46     MD    MD counter control
    47     LB    L byte counter control
    48     MB    M byte counter control
    49-51  DG    Length counter
    52-53  UL    Mover function left digit
    54-55  UR    Mover function right digit
    56     P     Parity
    57-60  CE    Emit field
    61-63  LX    Left adder input
    64     TC    True or complement control
    65-67  RY    Right adder input
    68-71  AD    Adder function control
    72-77  AB    A branch control
    78-82  BB    B branch control
    83           Unused
    84-89  SS    Stat setting control
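
    To make the layout concrete, here's a small sketch of decoding a few of these fields in software; the bit positions follow the table above, but the function and the choice of fields shown are mine for illustration.

        # Decode some fields of a 90-bit CPU microinstruction, given as a string of
        # '0'/'1' characters with bit 0 (the leftmost character) most significant.
        def field(word_bits, start, end):
            return int(word_bits[start:end + 1], 2)   # inclusive bit range

        def decode_cpu_word(word_bits):
            return {
                'LU': field(word_bits, 1, 3),     # mover input, left side
                'MV': field(word_bits, 4, 5),     # mover input, right side
                'ZP': field(word_bits, 6, 11),    # ROAR address
                'TR': field(word_bits, 19, 23),   # adder control
                'CE': field(word_bits, 57, 60),   # emit field
                # ...the remaining fields follow the same pattern
            }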

    For channel instructions, the microcode format is slightly different since some of the fields need to control the channel circuitry. However, most of the fields are the same as for the CPU. The table below shows the microcode format for the channel; the fields marked with an asterisk differ from the CPU microcode.

    Bits   Name  Meaning
    0      P     Parity
    1-3    LU    Mover input left side
    4-5    MV    Mover input right side
    6-11   ZP    ROAR address
    12-15  ZF    ROAR branch control
    16-18  ZN    Address control field
    19-23  TR    Adder control
    24           Unused
    25     CS    Local storage address selector *
    26-27  SA    Local storage address *
    28-30  SF    Local storage function
    31     P     Parity
    32-34  CT    Timing signals to channel *
    35-39  AL    Adder latch gating
    40-42  WL    Mover destination *
    43-46  HC    Multiplexor channel stat setting *
    47-48  CG    Control signals to channel *
    49-51  MG    Multiplexor channel gate control *
    52-53  UL    Mover function left digit
    54-55  UR    Mover function right digit
    56     P     Parity
    57-60  CE    Emit field
    61-63  LX    Left adder input
    64     TC    True or complement control
    65-67  RY    Right adder input
    68-70  CL    Selector channel adder latch tests *
    71           Unused
    72-77  AB    A branch control
    78-82  BB    B branch control
    83           Unused
    84-89  SS    Stat setting control
     

  14. When adding twos-complement signed numbers, an overflow occurs if the carry out of the most significant bit is different from the carry out of the second-most-significant bit. (I explain this in detail here.) IBM numbers the bits in a word "backward" with bit 0 the most significant. Thus, an overflow occurs if the carry from bit 0 XOR'd with the carry from bit 1 is nonzero. IBM uses ⩝ to indicate an exclusive or. Thus, CARRY(0) ⩝ CARRY(1) indicates an overflow, represented as BC⩝C in the microcode. 
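
    As a sanity check, here's a small sketch of this overflow test for 32-bit two's-complement addition; it's illustrative code, not taken from the Model 50.

        # Overflow = carry out of bit 0 (the sign bit, in IBM numbering) XOR the
        # carry out of bit 1 into bit 0, i.e. CARRY(0) ⩝ CARRY(1).
        def add_with_overflow(a, b):
            mask32 = 0xFFFFFFFF
            total = (a & mask32) + (b & mask32)
            carry0 = (total >> 32) & 1                                   # carry out of the sign bit
            carry1 = (((a & 0x7FFFFFFF) + (b & 0x7FFFFFFF)) >> 31) & 1   # carry into the sign bit
            return total & mask32, carry0 ^ carry1                       # sum, overflow flag

        add_with_overflow(0x7FFFFFFF, 1)   # -> (0x80000000, 1): positive overflow
        add_with_overflow(0xFFFFFFFF, 1)   # -> (0, 0): -1 + 1, no overflow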

  15. For a description of how the Model 50 microcode works, see the book "Microprogramming: Principles and Practices", S. Husson (1970), pages 295 to 411. Bitsavers has a lot of Model 50 documents, but not everything. If you have additional documentation, such as the IBM Automated Logic Diagrams, please let me know. 

  16. The Model 50's microcode listing is available in three volumes on bitsavers. The binary microcode listings are difficult to read with OCR because the pages were printed on different printers; some use serif fonts and others use sans-serif fonts. I wrote my own OCR program designed for this binary data, which was able to read the listings for the most part. The parity bits in the microcode helped catch errors.
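
    For example, an OCR'd word can be screened roughly like this; the grouping of bits under each parity bit and the parity polarity are assumptions for illustration, not details from the Model 50 documentation.

        # Check that a group of bit positions has the expected parity; a mismatch
        # flags a likely OCR error in that part of the word.
        def parity_ok(word_bits, group, odd_parity=True):
            ones = sum(int(word_bits[i]) for i in group)   # count the 1 bits in the group
            return (ones % 2 == 1) == odd_parity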

  17. Ok, I'll give a brief explanation of that page of microcode, which is part of the implementation of floating-point multiplication. The implementation is designed with tradeoffs between speed, code length, and temporary memory usage. The idea is to multiply the multiplicand by the multiplier, kind of like long multiplication on paper, where you multiply a digit at a time and add the partial sums. This code processes a hex digit of the multiplier at a time, with a separate case for each digit. The multiplicand is multiplied by the digit and this is added to the running total, shifting as appropriate. To make this fast, multiples of the multiplicand are pre-computed. However, pre-computing 16 multiples (one for each hex digit value) would take too much temporary (local) storage. So the only pre-computed multiples are 1, 2, and 6, and these are combined for other digits. To multiply by the digit 7, for instance, the multiples for 1 and 6 are added. To multiply by the digit 4, the multiple for 6 is added and the multiple for 2 is subtracted.

    But what about multiplying by 9 through 15? The trick is to "borrow" 16 from the next-higher digit. For instance, to multiply by the digit 11, you borrow 16, subtract the multiple for 6, and add the multiple for 1. Then the value one less is used for the next digit to account for the borrow. Thus, all 16 possibilities can be handled by adding or subtracting at most two of the pre-computed values. With borrowing, the code needs to handle 32 cases; the included page implements 22 of these cases. This implementation makes multiplication rapid, but the microcode is complex with many paths. (There is also a bunch more code to handle the floating-point exponent, normalizing values, overflow, underflow, and so forth.) 
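
    To illustrate the trick, here's my own reconstruction of the digit decompositions described above; the table is illustrative and doesn't reflect how the microcode actually lays out its cases.

        # digit -> (borrow from the next-higher digit, multiples of the multiplicand to add)
        DIGIT_TERMS = {
            0: (0, []),        1: (0, [1]),       2: (0, [2]),        3: (0, [1, 2]),
            4: (0, [6, -2]),   5: (0, [6, -1]),   6: (0, [6]),        7: (0, [6, 1]),
            8: (0, [6, 2]),    9: (1, [-6, -1]),  10: (1, [-6]),      11: (1, [-6, 1]),
            12: (1, [-6, 2]),  13: (1, [-2, -1]), 14: (1, [-2]),      15: (1, [-1]),
        }
        for digit, (borrow, terms) in DIGIT_TERMS.items():
            assert 16 * borrow + sum(terms) == digit   # every digit uses at most two multiples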

  18. Different System/360 models used a variety of methods to store microcode. An important feature of IBM's microcode storage was that the microcode could be replaced in the field. The low-end Model 25 held microcode in a 16-kilobyte section of core memory called Control Storage. The Model 30 used CCROS (Card Capacitor Read-only Store), storing the microcode on special metalized punch cards that were read capacitively. Transformer Read-Only Storage (TROS, below) was used by the System/360 Model 20 and Model 40. I wrote an article about microcode storage if you want more information.

    A TROS module from an IBM System/360 Model 20.


    The Model 50 (as well as the 65 and 67) stored microcode in BCROS (Balanced Capacitor Read-Only Storage), using copper-clad epoxy glass laminate boards, each 20″×8½″. Each sheet held 176 words of 100 bits, and the Model 50 used 16 sheets to store 2816 words. (Only 90 of the 100 bits in each word were used.) The data in BCROS was etched into the copper wiring (below). Each bit is represented by two squares: one connected to the upper wire and one connected to the lower wire (or vice versa), forming the balanced capacitors.

    Closeup of a BCROS sheet from a System/360 Model 50.


  19. The features of the system control panel were carefully defined in the System/360 Principles of Operation, pages 117-121, providing a consistent operator experience across the S/360 line. (The customer engineering part of the panel, on the other hand, was not specified and varied wildly across the product line.) Diagrams of S/360 consoles are at quadibloc. For more details on the consoles, see my article on System/360 consoles.

  20. The micro-operation that caused me the most difficulty is ED*FP, which computes the difference between two floating-point exponents and also sets four floating-point flags, including the sign, depending on the type of operation. Not only is this operation complex, but I think there is a typo in the description.

    A description of the ED*FP micro-operation.


    Another complex micro-operation is MLJK, which performs multiple actions as part of instruction decoding (a rough sketch in code follows the list):

    Gate adder latch to L reg and M reg. Gate latch bits 12-15 to J reg. Gate latch bits 16-19 to MD counter. Turn off refetch stat.
    If latch bits 12-15 all zero, turn on stat 0. Otherwise turn off stat 0.
    If latch bits 16-19 all zero, turn on stat 1. Otherwise turn off stat 1.
    If latch bits 16-17 all zero, turn on one-syllable stat. Otherwise turn off one-syllable stat.
    If latch bits 0-1 equal 00, set ILC to 01.
    If latch bits 0-1 equal 01 or 10, set ILC to 10.
    If latch bits 0-1 equal 11, set ILC to 11.
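
    Here's a rough sketch of the same steps in code; the state names are my own shorthand rather than IBM's, and this is a simplified illustration, not the simulator's actual implementation.

        # adder_latch is a 32-bit word; bits(hi, lo) extracts an inclusive range,
        # using IBM numbering where bit 0 is the most significant bit.
        def mljk(state, adder_latch):
            def bits(hi, lo):
                return (adder_latch >> (31 - lo)) & ((1 << (lo - hi + 1)) - 1)
            state['L'] = state['M'] = adder_latch          # gate adder latch to L and M registers
            state['J'] = bits(12, 15)                      # latch bits 12-15 to J register
            state['MD'] = bits(16, 19)                     # latch bits 16-19 to MD counter
            state['REFETCH'] = 0                           # turn off refetch stat
            state['S0'] = int(bits(12, 15) == 0)           # stat 0: bits 12-15 all zero
            state['S1'] = int(bits(16, 19) == 0)           # stat 1: bits 16-19 all zero
            state['ONE_SYL'] = int(bits(16, 17) == 0)      # one-syllable stat: bits 16-17 zero
            op = bits(0, 1)
            state['ILC'] = {0: 0b01, 1: 0b10, 2: 0b10, 3: 0b11}[op]   # instruction length code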