IBM, sonic delay lines, and the history of the 80×24 display

What explains the popularity of terminals with 80×24 and 80×25 displays? A recent blog post "80x25" motivated me to investigate this. The source of 80-column lines is clearly punch cards, as commonly claimed. But why 24 or 25 lines? There are many theories, but I found a simple answer: IBM, in particular its dominance of the terminal market. In 1971, IBM introduced a terminal with an 80×24 display (the 3270) and it soon became the best-selling terminal, forcing competing terminals to match its 80×24 size. The display for the IBM PC added one more line to its screen, making the 80×25 size standard in the PC world. The impact of these systems remains decades later: 80-character lines are still a standard, along with both 80×24 and 80×25 terminal windows.

In this blog post, I'll discuss this history in detail, including some other systems that played key roles. The CRT terminal market essentially started with the IBM 2260 Display Station in 1965, built from curious technologies such as sonic delay lines. This led to the popular IBM 3270 display and then widespread, inexpensive terminals such as the DEC VT100. In 1981, IBM released a microcomputer called the DataMaster. While the DataMaster is mostly forgotten, it strongly influenced the IBM PC, including the display. This post also studies reports on the terminal market from the 1970s and 1980s; these make it clear that market forces, not technological forces, led to the popularity of various display sizes.

Some theories about the 80×24 and 80×25 sizes

Arguments about terminal sizes go back decades,5 but the article 80x25 presented a detailed and interesting theory. To summarize, it argued that the 80×25 display was used because it was compatible with IBM's 80-column punch cards,1 fit nicely on a TV screen with a 4:3 aspect ratio, and just fit into 2K of RAM. This led to the 80×25 size on terminals such as the DEC VT100 (1978). The VT100's massive popularity led to it becoming a standard, leading to the ubiquity of 80×25 terminals. At least that's the theory.

It's true that 80-column displays were motivated by punch cards4 and the VT100 became a standard,2 but the rest of this theory falls apart. The biggest problem with this theory is the VT100's display was 80×24, not 80×25.3 In addition, the VT100 used extra bytes of storage for each line, so the display memory did not fit into 2K. Finally, up until the 1980s, most displays were 80×24, not 80×25.

The DEC VT100 terminal had an 80×24 display. Over a million of them were sold. Photo from Jason Scott, (CC BY-SA 4.0).

Other theories have been expressed on Software Engineering StackExchange and Retrocomputing StackExchange, arguing that 80×24 terminals resulted from technical reasons such as TV scan rates, aspect ratios, memory sizes, typography, the history of typewriters, and so forth. There is a fundamental problem with theories that 80×24 is an inevitable consequence of technology, though: terminals in the mid-1970s had dozens of diverse screen sizes such as 31×11, 42×24, 50×20, 52×48, 81×38, 100×50, and 133×64.11 This makes it clear that technological limitations didn't force terminals into a particular size. To the contrary, as technology improved, most of these sizes disappeared and terminals were largely 80×24 by the early 1980s. This illustrates that standardization was the key factor, not the technology.

I'll briefly summarize why technical factors don't have much impact on the terminal size. Although US televisions used 525 scan lines and 60 Hz refresh,9 40% of terminals used other values.6 The display frequency and bandwidth didn't motivate a particular display size because terminals generated characters with a wide variety of matrix sizes.8 Although memory cost was significant, DRAM chip sizes quadrupled every three years, making memory only a temporary constraint. The screen's aspect ratio wasn't a big factor because the text's aspect ratio often didn't match the screen's ratio.7 Of course technology had some influence, but it didn't stop early manufacturers from creating terminal sizes ranging from 32×8 to 133×64.

The rise of CRT terminals

At this point, a bit of history of CRT terminals will help.11 Many readers will be familiar with ASCII terminals, such as stand-alone terminals like the DEC VT100, serial terminal connections via a PC, or the serial port on boards such as the Arduino. This type of terminal has its roots in teleprinters, electro-mechanical keyboard/printers that date back to the early 1900s. The best-known teleprinter is the Teletype, popular in newsrooms as well as computer systems in the 1970s. (The Linux device /dev/tty is named after the Teletype.) Teletypes typically printed 72-character lines on a roll of paper.10

A Teletype ASR33 communicated in ASCII and printed 72 characters per line. Hundreds of thousands of these were produced from 1963 to 1981. The punched tape reader and punch is on the left. Photo from Arnold Reinhold, (CC BY-SA 3.0).

In the 1970s, replacing teleprinters with CRT terminals was a large and profitable market. AT&T introduced the Teletype Model 40 in 1973, a CRT terminal with an 80×24 display.12 Many other companies introduced competing CRT terminals, and "Teletype-compatible" became a market segment. By 198111 these terminals were being used in many roles besides replacing teleprinters and the name shifted to "ASCII terminals". By 1985, CRT terminals were a huge success with 10 million terminals installed in the US.

The IBM 3270 terminal, specifically the newer 3278 model. From IBM 3270 Brochure (1977).

But there's a parallel world of mainframe terminals, a world that may be unfamiliar to many readers. In 1965, IBM introduced the IBM 2260 Display Terminal, placing IBM's "stamp of approval" on the CRT terminal, which had previously been "somewhat of a novelty."6 This terminal dominated the market until IBM replaced it with the cheaper and more advanced IBM 3270 terminal in 1971. Unlike asynchronous ASCII terminals that transmitted individual keystrokes, these terminals were block-oriented, efficiently exchanging large blocks of characters with a mainframe. The 3270 terminal was fairly "intelligent": a 3270 user could fill in labeled fields on the screen, and then transmit all the data at once by pressing the "Enter" key. (This is why modern keyboards often still have the "Enter" key.) Sending a block of data was more efficient than sending each keystroke to the computer, and allowed mainframes to support hundreds of terminals. In the next sections, I'll discuss the 2260 and 3270 terminals in detail.

The chart below6 shows how the terminal market looked in 1974. The market was ruled by IBM's 3270 terminal, which had obsoleted IBM's 2260 terminal by this point. With 50% of the market, IBM essentially defined the characteristics of a CRT terminal. Teleprinter replacement was a large and influential market; the Teletype Model 40 was small but growing in importance. Although DEC would soon be a major player, it was in the small "Independent Systems" slice at this point.

In 1974, IBM dominated the terminal market; 50% of the terminals sold were IBM terminals (or compatibles). From Alphanumeric and Graphic CRT Terminals.

The IBM 2260 video display terminal

The IBM 2260 was introduced in 1965 and was one of the first video display terminals.14 It filled three roles: remote data entry (in place of punching cards), inquiry (e.g. looking up records in a database), and as a system console. This compact terminal weighed 45 pounds and was sized to fit on a standard office typewriter stand. Note the thickness of the keyboard; it reused the complex keyboard mechanism of the IBM keypunch.13

IBM 2260 Display Station. Photo from IBM via Frank da Cruz.

You might wonder how IBM could produce such a compact terminal with 1965 technology. The trick was that the terminal held just the keyboard and CRT display; all the control logic, character generation, storage, and interfacing were in a massive 1000-pound cabinet (below).15 This cabinet contained the circuitry to handle up to 24 display terminals. It generated the pixels for these terminals and sent video signals to the terminals, which could be up to 2000 feet away.

The IBM 2848 Display Control could drive up to 24 display terminals. The cabinet was 5 feet wide and weighed 1000 pounds.

One of the most interesting features of the 2260 is the sonic delay lines used for pixel storage. Bits were stored as sound pulses sent into a nickel wire, about 50 feet long. The pulses traveled through the wire and came out the other end exactly 5.5545 milliseconds later. By sending a pulse (or not sending a pulse for a 0) every 500 nanoseconds, the wire held 11,008 bits. A pair of wires created a buffer that held the pixels for 480 characters.16

Sonic delay line module from the IBM 2260 display. This module contained about 50 feet of coiled nickel wire. Image from 2260 Field Engineering Theory of Operation Manual.

The sonic delay line had several problems. First, you had to constantly refresh the data: as bits came out one end of the wire, you had to feed them back in the other end. Second, the delay line was not random access: if you wanted to update a character, you needed to wait several milliseconds for those bits to circulate. Third, the delay line was sensitive to vibration; Wikipedia says that heavy footsteps could mess up the screen. Fourth, the delay line speed was sensitive to temperature changes; it needed to warm up for two hours in a temperature-controlled cabinet before use. With all these disadvantages, you might wonder why sonic delay lines were used. The main reason was they were much cheaper than core memory. The serial nature of a delay line was also a good match to the serial nature of a raster-scan display.

The coiled nickel wire inside a sonic delay has transducers at both ends (center and bottom left, with twisted wiring attached). To adjust the delay, the threaded rod (bottom left) moves the transducer's position along the wire. The metal boxes on the ends of the wires are dampers to prevent reflections. Photo courtesy of Alan Parker.

The image below shows the screen of the 2260 Model 2, with 12 lines of 40 characters. (The Model 1 had 6 lines of 40 characters and the Model 3 had 12 lines of 80 characters.) Notice that the lines are double-spaced; this is because the control unit actually generated 24 lines of text but alternating lines went to two different terminals.20 This is a very strange approach, but it split the high cost of the control hardware across two terminals.19 Another strange characteristic was that the 2260's scan lines were vertical, unlike the horizontal scan lines in almost every video display and television.21

IBM 2260 display showing 12 lines of 40 characters. Image from 2260 Operator Manual.

Each character was represented in 6-bit EBCDIC, giving a character set of 64 characters (no lower-case).18 The delay lines stored the pixels to be displayed, but they also stored the EBCDIC code for each character. The trick was the blank column of pixels that provided horizontal spacing between characters. The system used this column to store the character code, but blanked the display during this column so the code didn't show up as pixels on the screen. This allowed the 6-bit character value to be stored essentially for free.
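
This storage trick can be sketched in a few lines of Python. This is my own conceptual model, not IBM's circuitry; the cell layout and glyph bits are invented for illustration:

```python
# Conceptual model of a 2260 character cell: six columns wide -- one "code"
# column (blanked on screen) plus five columns of the 5x7 pixel block.
# The glyph bits below are made up for illustration.

def build_cell(code, pixel_columns):
    """Pack one character cell: the 6-bit code column, then 5 pixel columns."""
    assert 0 <= code < 64            # 6-bit character code (64 characters)
    assert len(pixel_columns) == 5   # 5x7 block: one 7-bit int per column
    return [code] + list(pixel_columns)

def displayed_columns(cell):
    """The display blanks the code column, so only the pixel columns show."""
    return cell[1:]

cell = build_cell(0b110001, [0x3E, 0x48, 0x48, 0x48, 0x3E])  # an 'A'-ish glyph
assert len(cell) == 6                # the code rides along "for free"
assert displayed_columns(cell) == [0x3E, 0x48, 0x48, 0x48, 0x3E]
```

The key point is that the code column consumes no extra storage: it occupies the spacer column that had to exist anyway for readability.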

The relevant question is: why did the 2260 have a display with 12 lines of 80 characters?23,24 The 80-character width allowed the terminals to take the place of 80-column punch cards for data entry. (In the 40-character models, a card would be split across two lines.) As for the 12 lines, that appears to be what the delay lines could support without flicker.22

Image from 2260 Operator Manual.

The IBM 2260 was a big success, and led to the popularity of the CRT terminal. The impact of the IBM 2260 terminal is shown by a 1974 report on terminals; about 50 terminals were listed as compatible with the IBM 2260. The IBM 2260 didn't have an 80×24 display (although it generated 80×24 internally), but its 40×12 and 80×12 displays made 80×24 the next step for IBM.

The IBM 3270 video display

In 1971, IBM released the IBM 3270 video display system, which proceeded to dominate the market for CRT terminals.26 This terminal supported a 40×12 display to provide a migration path from the 2260, but also supported a larger 80×24 display. The 3270 had more features than the 2260, such as protected fields on the screen, more efficient communication modes, and variable-intensity text. It was also significantly cheaper than the 2260, ensuring its popularity.25

The IBM 3270 terminal. The Selector Light Pen was used to select data fields, somewhat like a mouse. This terminal is a later model, the 3278; in the photo it is displaying 43 lines of 80 characters. From IBM 3270 Brochure (1977).

The technology in the 3270 was a generation more advanced than the 2260, replacing vacuum tubes and transistors with hybrid SLT modules, similar to integrated circuits. Instead of sonic delay lines, it used 480-bit MOS shift registers.27 The 40×12 model used one bank of shift registers to store 480 characters. In the larger model, four banks of shift registers (1920 characters) supported an 80×24 display. In other words, the 3270's storage was in 480-character blocks for compatibility with the 2260, and using four blocks resulted in the 80×24 display. (Unlike RAM chips, a shift register size didn't need to be a power of 2. While a RAM chip is arranged as a matrix, a shift register has a serpentine layout (below) and can be an arbitrary size.)
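
The arithmetic behind those sizes is worth spelling out; this trivial check just restates the block sizes described above:

```python
# The 3270's display sizes fall directly out of its 480-character
# shift-register banks (just the arithmetic from the text, not 3270 logic).
BLOCK = 480                       # characters per bank of shift registers

assert 1 * BLOCK == 40 * 12       # one bank:   40x12 (2260-compatible)
assert 4 * BLOCK == 80 * 24       # four banks: 80x24
```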

Die photo of the Intel 1405 shift register. This shift register was not used in the IBM 3270 but was used in other terminals such as the Datapoint 2200.

IBM provided extensive software support for the 3270 terminal.28 This had an important impact on the terminal market, since it forced other manufacturers to build compatible terminals if they wanted to compete. In particular, this made 3270-compatibility and the 80×24 display into a de facto standard. In 1977, IBM introduced the 3278, an improved 3270 terminal that supported 12, 24, 32, or 43 lines of data. It also added a status line, called the "operator information area". The new 32- and 43-line sizes didn't really catch on, but the status line became a common feature on competing terminals.

Looking at industry reports6,11,32 shows the popularity of various terminal sizes from the 1970s to the 1990s. Although there were 80×25 displays in 1970 (if not earlier), the 80×24 display was much more common. The wide variety of terminal sizes in 1974 diminished over time, with the market converging on 80×24. By 1979, the DEC VT100 (with its 80×24 display) was the most popular ASCII terminal with over 1 million sold. Terminals started supporting 132×24 for compatibility with 132-character line printers,29 especially as larger 15" monitors became more affordable, but 80×24 remained the most popular size. Even by 1991, 80×25 remained relatively uncommon.

The IBM PC and the popularity of 80×25

Given the historical popularity of 80×24 terminals, why do so many modern systems use 80×25 windows? That's also due to IBM: the 80×25 display became popular with the introduction of the IBM PC in 1981. The PC's default display card (MDA) provided 80×25 monochrome text while the CGA card provided 40×25 and 80×25 in color. This became the default size of a Windows console, as well as the typical size for PC-based terminal windows.

The IBM PC with an 80×25 display generated by the MDA (Monochrome Display Adapter) card. Photo from Boffy b (CC BY-SA 3.0).

Other popular computers at the time used 24 lines, such as the Osborne 1 and Apple II, so I was curious why the IBM PC used 25 lines. To find out, I talked to Dr. Dave Bradley and Prof. Mark Dean, two of the original IBM PC engineers. They explained that the IBM PC was a follow-on to the rather obscure IBM DataMaster office computer,30 and many of the IBM PC design choices followed the DataMaster microcomputer. The IBM PC kept the DataMaster's keyboard, but detached from the main unit. Both systems used BASIC, but the decision to get the PC's BASIC interpreter from the tiny company Microsoft would change both companies more than anyone could imagine. Both systems went with an Intel processor, an 8-bit 8085 in the DataMaster and the 16-bit 8088 in the IBM PC. They also used the same interrupt controller, DMA controller, parallel port, and timer chips. The PC's 62-pin expansion bus was almost identical to DataMaster's.

The IBM DataMaster System/23 was a microcomputer announced in 1981 just a month before the IBM PC.

The drawing below is part of an early design plan for the IBM PC. In particular, the IBM PC was going to use the 80×24 display of the DataMaster (codenamed LOMA), as well as 40×16 and 60×16 displays more suitable for televisions. The drawings also show color graphics with 280×192 pixels, the same resolution as the Apple II. But the IBM PC ended up not quite matching this plan.

Detail from an early (August 25, 1980) design plan for the IBM PC. "LOMA" is the code name for the IBM DataMaster. "18 kHz" is the 18.432 kHz horizontal scan frequency used by the MDA card, providing more resolution than the 15.750 kHz used by NTSC televisions. Scan courtesy of Dr. Dave Bradley.

The designers of the IBM PC managed to squeeze a few more pixels onto the display to get 320×200 pixels. When using an 8×8 character matrix, the updated graphics mode supported 40×25 text, while the double-resolution graphics mode with 640×200 pixels supported 80×25 text. The monochrome graphics card (MDA) matched this 80×25 size. In other words, the IBM PC ended up using 80×25 text because the display provided enough pixels, and it provided differentiation from other systems, but there wasn't an overriding motivation. In particular, the designers of the PC weren't constrained by compatibility with other IBM systems.31
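
As a quick check of the arithmetic, the text dimensions fall straight out of the pixel grids. The 8×8 character matrix is described above; the MDA's 720×350 pixel grid with 9×14 character cells is the commonly documented figure, not something stated in this article:

```python
# IBM PC text dimensions derived from pixel grid and character cell size.
assert (320 // 8, 200 // 8) == (40, 25)    # CGA 320x200 graphics -> 40x25 text
assert (640 // 8, 200 // 8) == (80, 25)    # CGA 640x200 graphics -> 80x25 text
assert (720 // 9, 350 // 14) == (80, 25)   # MDA 720x350, 9x14 cells -> 80x25
```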

Conclusion

To summarize, many theories have been proposed giving technical reasons why 80×24 (or 80×25) is the natural size for a display. I think the wide variety of display sizes in the early 1970s proves this technological motivation is mostly wrong. Instead, display sizes converged on what IBM produced, first with the punch card, then the IBM 2260 terminal, the IBM 3270, and finally the IBM PC. The 72-column Teletype had some influence on terminal sizes at first, but this size was also swept away by IBM compatibility. The result is the current situation with an uneasy split between 80×24 and 80×25 standards.

Thanks to Dr. Dave Bradley, Prof. Mark Dean, and IBM engineer Iggy Menendez for information. I announce my latest blog posts on Twitter, so follow me @kenshirriff for future articles. I also have an RSS feed.

Notes and References

  1. Punch cards have a longer history than you might think. The standard 80-column IBM punch card was introduced in 1928, improving on punch cards used for the 1890 census. Before the modern computer, punch cards were processed with electromechanical sorters and accounting machines. The punch card remained a keystone of data processing until the 1970s, and its impact still remains.

    An IBM punch card holds 80 characters, printed along the top. The hole pattern in each column encodes the character.


  2. By 1986, the DEC VT100 was "an acknowledged standard in the terminal industry" and "the most popular ASCII terminal ever produced, with 1,000,000 units sold since its introduction in 1978." 

  3. For information on the internals of the VT100 see the Technical Manual. The VT100 had 3K of memory, of which about 2.3K was used for the screen while the 8080 microprocessor used the remainder. Each line was stored in memory with 3 additional bytes on the end, used as pointers for scrolling. 

  4. It should be clear that IBM's 80-column punch cards were the motivation for 80-column displays, but I wanted to find contemporary sources to confirm that. One example is All About CRT Display Terminals (1974, page 11) stating that terminals with an 80-column line gave compatibility with punched cards while the 72-column line provided compatibility with Teletypes. Also see Big Screen, 132-Column Units Setting Trend, Computerworld, Oct 26, 1981. Although the article focuses on 132-column terminals to replace printers, the article also describes how earlier terminals had an 80-column format like the punch cards they replaced. 

  5. Controversy over the reason for 80×24 displays goes way back. An editorial in Infoworld (Nov 2, 1981) argued that microcomputers shouldn't be locked into the "arbitrary" 80×24 size. This led to angry letters to the editor in Infoworld, Nov 30, 1981, arguing that 80×24 wasn't arbitrary. Writers explained that 80-columns were motivated by punch cards, 24 (or sometimes 25) lines were motivated by tradeoffs in CRT technology, and memory size didn't have much to do with it. 

  6. A detailed source of information on terminals is the 1975 report Alphanumeric and Graphic CRT Terminals. 

  7. The CRT's aspect ratio matters less than people think. The first reason is that even on a CRT with a 4:3 aspect ratio, many terminals displayed text with a very different aspect ratio by leaving part of the screen blank. The second reason is that a custom CRT wasn't out of the question. For instance, the Datapoint 2200 had an unusually wide CRT, designed to match the shape of a punch card. (Reference: Datapoint: The lost story of the Texans who invented the personal computer revolution chapter 4.) The popular Teletype Model 40 also had an unusually wide CRT, with an aspect ratio over 2:1 (photos), which was used for an 80×24 display. 

  8. A raster-scan terminal makes each character out of a matrix of dots. In 1975, a 5×7 or 7×9 matrix was most common.6 (The matrix was often padded with space between characters. For instance, the Apple II used a 5×7 dot matrix padded to a 7×8 field.) Some systems (such as IBM's CGA card) used an 8×8 matrix without padding to support graphical characters that touched. Other systems used a much larger character matrix; the IBM Datamaster used 7×9 characters in a 10×14 field, while the Quotron 800 had a 16×20 matrix. The point is that 80×24 terminals can require a wildly varying number of pixels, depending on the matrix selected. This is the flaw in the argument that the bandwidth and scanlines of a display motivated 80×24 terminals; you get a completely different answer depending on the matrix size you pick. 
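
To make that concrete, here are the pixel counts an 80×24 screen would need under a few of the character cells mentioned above; the roughly six-fold spread is the point:

```python
# Pixels required for an 80x24 text screen under various character cells.
cells = {
    "5x7 matrix in 7x8 cell (Apple II style)": (7, 8),
    "8x8 cell (IBM CGA)": (8, 8),
    "7x9 matrix in 10x14 cell (DataMaster)": (10, 14),
    "16x20 matrix (Quotron 800)": (16, 20),
}
pixels = {name: 80 * w * 24 * h for name, (w, h) in cells.items()}
for name, count in pixels.items():
    print(f"{name}: {count:,} pixels")
# Ranges from 107,520 up to 614,400 pixels for the same 80x24 screen.
```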

  9. Home computers in the 1980s often used standard NTSC televisions as displays, so they had to deal with more constraints than terminals. As a result, they often had 40- or 64-character lines, rather than 80, as shown by the Wikipedia list. Also see a Retrocomputing StackExchange discussion. 

  10. One Retrocomputing StackExchange answer claims that terminals with 72-character lines show "the struggle for 80 characters", with 72-character terminals falling short of the 80-character goal. However, 72-character lines were a deliberate choice to capture the lucrative Teletype market; teleprinters such as the Teletype Model 33 printed 72-character lines. (The model number of the Datapoint 3300 (1969), for instance, reflects the Teletype Model 33.) 

  11. For an extremely detailed look at the terminal industry from 1974 to 1991, see the Datapro reports on Bitsavers. These reports discuss the overall market, as well as thoroughly describing every terminal being marketed. 

  12. AT&T's Teletype Model 40 is mostly forgotten now, but it had a significant impact at the time. AT&T combined the Model 40 with a new, faster communications network called "Dataspeed 40", raising fears that AT&T would monopolize data communications. It is said that this "spread waves of apprehension that penetrated the very foundation of the communications terminal industry." AT&T targeted IBM's 3270 terminals with the Model 40/4 (which probably explains the Model 40's 80×24 display). Complex antitrust litigation against AT&T resulted, which I think blunted the long-term impact of the Model 40. 

  13. The IBM 2260 terminal reused the keyboard of the IBM 26 keypunch (1949). To convert a keypress into a hole pattern, the keypunch keyboard used a complex system of pull-bars, permutation bars (which encode key values in metal tabs), bails, contacts, interlock disks, and a restoring electromagnet. Each key triggered 12 contacts; in the keypunch these controlled the 12 holes in each card column, while in the terminal they encoded two 6-bit codes, one for shifted and one for non-shifted. This mechanism was much more complex than a "modern" keyboard, but it had the advantage of generating key codes without requiring any electronics. (I've written about keypunch internals before.) 

  14. Vector graphics displays predate video terminals by many years, used on systems such as Whirlwind (1951) and SAGE (1958) and later the IBM 2250 Graphics Display Unit (1964). These systems drew arbitrary lines on the screen, rather than pixels. Although these systems could display characters (drawn from line segments), they were very expensive and usually used for graphics, not as character-based terminals.  

  15. The CRT/keyboard unit was called the IBM 2260 Display Station, while the large cabinet with the circuitry was called the IBM 2848 Display Control. People often referred to the complete system as the 2260; I'll follow this usage. 

  16. I'll explain more about the delay line buffers in this footnote. A delay line provided a bit every 500 nanoseconds. Two delay lines were interleaved in a buffer to provide bits twice as fast: every 250 nanoseconds. Data was formatted as 256 "slots", one per vertical scan line. (These slots were purely conceptual since the delay line provided an undifferentiated stream of bits.) 240 slots held data, while 16 were blank for horizontal retrace time. Each slot held 86 bits: 7 bits for 12 rows of characters, along with two parity bits. (Since each scan line was split across two displays, the slot corresponded to 6 characters on the even display and 6 on the odd display.) Six slots made up a vertical line of characters: one slot holding the "BCD" character value, and five slots holding pixels. Thus, each buffer holds data for 480 characters and supported two 40×6 displays. Two buffers supported a pair of 40×12 displays and four buffers supported a pair of 80×12 displays. Details are in the 2260 Field Engineering Theory of Operation Manual, page 2-14. 
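
The buffer arithmetic above is easy to sanity-check:

```python
# Cross-checking the delay-line buffer figures from the description above.
BITS_PER_WIRE = 11_008       # bits circulating in one nickel wire
SLOTS = 256                  # vertical scan-line slots per buffer
BITS_PER_SLOT = 86           # 7 bits x 12 character rows + 2 parity bits
DATA_SLOTS = 240             # the other 16 slots cover horizontal retrace
SLOTS_PER_COLUMN = 6         # 1 slot of character codes + 5 slots of pixels

assert BITS_PER_SLOT == 7 * 12 + 2
assert SLOTS * BITS_PER_SLOT == 2 * BITS_PER_WIRE   # a buffer is two wires
columns = DATA_SLOTS // SLOTS_PER_COLUMN            # 40 character columns
assert columns * 12 == 480                          # 480 characters per buffer
```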

  17. A delay line can't be paused—the bits keep flowing, even during vertical and horizontal refresh times. The problem is that you can't display anything during refresh, since the electron beam is swinging back to the start, so what do you do with the pixels the delay line provides during that time? The 2260 used two solutions. Horizontal refresh was straightforward, "wasting" delay line bits during the horizontal refresh time. Specifically, a pair of buffers held 512 scan lines; 480 were used for character data while 32 were unusable because horizontal refresh happened while they were being read.

    The interaction between the delay lines and vertical refresh is somewhat complicated. The vertical refresh time was designed to be exactly the same as the 5.5545ms time it took a buffer to fully circulate, while the time to display a vertical scan line was exactly twice this time. Two buffers were interleaved to provide the vertical scan lines. During the first time interval, the first buffer provided pixels for the top half of the line. During the second time interval, the second buffer provided pixels for the bottom half of the line. The third time interval was used for vertical refresh. This pattern continued until the end of the buffers, so every third slot in a buffer was displayed while the "unused" pixels were recirculated. This process was repeated three times, offsetting the start point in the buffer, so the buffers were displayed entirely. 
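
The three-pass display pattern can be checked with a toy model (a sketch of the offset scheme described above, not the actual hardware timing):

```python
# Toy model: each frame displays every third slot, and the starting offset
# advances each frame, so three frames cover the entire buffer.
SLOTS = 512                              # scan-line slots in a buffer pair

displayed = set()
for offset in range(3):                  # three successive passes
    displayed |= {i for i in range(SLOTS) if i % 3 == offset}

assert displayed == set(range(SLOTS))    # every slot is shown once per cycle
```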

  18. Another curious feature of the IBM 2260 display is how it converted the 6-bit character code into the 5×7 block of pixels representing the character. It used a special core memory plane that only had cores for 1 bits and omitted cores for 0 bits, so it acted as a read-only memory. The result is that you could actually see the characters in the core plane, as illustrated below. The core plane holds nine 7-bit words for each of the 64 characters: the first five words held the pixel block, while the four other words were a lookup table to convert the EBCDIC character code (2848 code) to or from ASCII or a tilt-shift code used to control the Selectric-like printer (Model 1053).

    Part of the character generation core plane, showing the segment for the character 'A'. The diagonal lines indicate ferrite cores; I've colored the cores storing the character image. The core plane was a 72×56 grid in total representing 64 characters. Image based on 2260 Field Engineering Theory of Operation Manual p2-82.
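
The core plane's stated dimensions check out against its contents:

```python
# 64 characters x 9 seven-bit words each = the 72x56 grid of core positions.
assert 64 * 9 * 7 == 72 * 56    # 4,032 core positions in total
assert 9 == 5 + 4               # 5 pixel-block words + 4 code-lookup words
```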

     

  19. IBM apparently liked the idea of splitting display hardware between two users, because they did that with the IBM 3742 Dual Data Station (1973). This system let two operators enter data onto 8" floppy disks. The bizarre part is that it had a single vertically-mounted CRT display. The small black box in the middle of the desk is a pair of mirrors that showed half the screen to each operator. The result was a very squat display with just three lines of 40 characters, enough for a status line and 80 characters of data.

    The IBM 3742 Dual Data Station allowed two operators to type data onto floppy disks. Image from IBM 3740 System Summary.

  20. The lines of text in the screenshot appear closer together than double-spaced, even though they are double-spaced. The reason is that the dots on the screen are a bit larger than one pixel, so they encroach into the space between the lines. In other words, the display alternates 7 lines of character pixels and 7 blank lines, but it looks more like 9 lines of character pixels and 5 blank lines. 

  21. Televisions and CRT displays normally use a raster scan, sweeping the electron beam across the screen in horizontal scan lines, drawing a series of lines from top to bottom. The 2260, on the other hand, used highly unusual vertical scan lines: each scan line ran top-to-bottom, and the series of lines progressed left-to-right across the screen. I haven't been able to determine any reason why the 2260 used vertical scan lines; I assume it made the timing work out better somehow.

  22. Here are my calculations on the maximum number of lines that could be displayed by the 2260. A 250 nanosecond pixel rate and 30 Hertz refresh give a maximum of 133,333 pixels that can be displayed on the screen. If each character is 6×7 pixels and there are 80 characters per line, 39.7 lines could be on the screen. Vertical refresh takes 1/3 of the time because of interaction with the delay lines,17 dropping this to 26.5 lines. Because the 2260 splits pixels across two displays, that yields at most 13.25 lines on the display, ignoring horizontal refresh. Therefore, 12 lines of text are about what the hardware could support. (I should point out that it's possible they decided on 12 lines first and selected the other design characteristics to fit this.) Note that the next reasonable line size would be 16 lines. The low-end model displayed 6 lines of 40 characters (i.e. 3 punch cards), so the next step for it would be 8 lines of 40 characters (four punch cards). Since the high-end model uses four buffers, that would yield 16 lines. The point is that it would have been a large jump to go beyond 12 lines. 
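    The footnote's arithmetic is easy to recompute (all figures are taken from the text above; small differences from the quoted values are rounding):

```python
# Recompute the footnote's line-count ceiling for the 2260.
pixel_time = 250e-9                  # 250 ns per pixel
refresh_rate = 30                    # 30 Hz screen refresh
pixels_per_frame = 1 / pixel_time / refresh_rate   # ~133,333 pixels
pixels_per_line = 6 * 7 * 80         # 6x7 character cells, 80 columns
max_lines = pixels_per_frame / pixels_per_line     # ~39.7 lines
after_refresh = max_lines * 2 / 3    # vertical refresh consumes 1/3 of the time
per_display = after_refresh / 2      # pixels are split across two displays
print(round(max_lines, 1), round(after_refresh, 1), round(per_display, 1))
# → 39.7 26.5 13.2
```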

  23. The 2260 came in three models. Model 1 displayed 6 lines of 40 characters. Model 2 displayed 12 lines of 40 characters. Model 3 displayed 12 lines of 80 characters. The main difference in implementation was that they used 1, 2, and 4 buffers respectively. The 40-character models refreshed at 60 Hz rather than 30 Hz, since they had half the (vertical) scanlines. 

  24. The aspect ratio of the IBM 2260's text was very different from the screen's aspect ratio. With the bezel, the screen's useful display area is 9.5 by 5.7 inches (5:3 ratio). Note that the aspect ratio of the text was very different from a standard 4:3 ratio. The 40×6 display format is 6.5 by 2.25 inches (almost 3:1 ratio). The 40×12 display format is 6.5 by 4.5 inches (a bit over 4:3 ratio). The 80×12 display format is 9 by 3 inches (3:1 ratio). Information on the 2260's screen size is in the FE Manual chapter 2. 
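    A quick check of the quoted aspect ratios (dimensions are from the footnote above; the labels are just for this sketch):

```python
# Verify the aspect ratios quoted for the 2260's screen and text formats.
formats = {
    "screen (useful area)": (9.5, 5.7),   # ~5:3
    "40x6 text":            (6.5, 2.25),  # almost 3:1
    "40x12 text":           (6.5, 4.5),   # a bit over 4:3
    "80x12 text":           (9.0, 3.0),   # exactly 3:1
}
for name, (width, height) in formats.items():
    print(f"{name}: {width / height:.2f}:1")
```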

  25. The 1974 Datapro report gives the price for the IBM 2260 system as $1270 to $2140 for the display unit and $15,715 to $86,365 for the controller. The IBM 3270 in comparison was $4,000 to $7,435 for the display unit (3277) and $6,500 to $15,725 for the controller. Note that compared to the 2260, the 3270 moved much of the complexity from the controller to the display unit, which is reflected in the prices. 

  26. The IBM 3270 was a line of terminals. Like the 2260, it consisted of a Control Unit (3271 or 3272) along with the terminals (3275 or 3277 Display Stations). These could display 40×12 or 80×24. For simplicity, I'll refer to the whole system as the 3270. Over the years, IBM introduced more models in the 3270 line, including color and graphics terminals, supporting lower case as well as display sizes such as 80×32, 80×43, and 132×27. The 3270 PC (1983) was an enhanced IBM PC that acted as a 3270 terminal. However, I'm going to focus on the original 3270 terminals, since those had the most influence. 

  27. A 480-bit shift register might seem like a strange size, since it's not a power of two. However, since shift registers don't have address bits, they can be arbitrary sizes. For instance, Collins made dual 66-bit shift registers, to support 64-bit data plus 2 parity bits. Fairchild made 480-bit shift registers for CRT displays. 500-bit shift registers were built "to operate in equipment where storage lengths in 100 bit multiples are required." Texas Instruments built dynamic bipolar shift registers in 253-bit, 349-bit, and 501-bit sizes which were useful for Digital Differential Analyzers. The point is that shift registers can be built in arbitrary sizes, so there is no need to use a power of two.

    Schematic symbol for the 480-bit shift register in the 3270. Inputs are data and the two-phase clock. "SPEC" indicates a special circuit. From the ALD, page MP151.

    The 3270 used banks of ten 480-bit shift registers to store 480 10-bit data words (9 bits and parity), unlike the earlier 2260 delay lines, which stored pixels. 

  28. Software support for the 3270 included DIDOCS (Device Independent Display Operator Console Support), using the 3270 as a mainframe system console; VIDEO/370 (Visual Data Entry Online), a program that allowed customers to design forms for data entry; DATA/360, a program that emulated an IBM 29 card punch but provided editing and validation; IMS (Information Management System); CICS (Customer Information Control System), which allowed interaction with a database; IQF (Interactive Query Facility), another database system; and TSO (Time Sharing Option). 

  29. The 132-column width for terminals was motivated by the ubiquity of IBM's 132-column printers. 

  30. The DataMaster's influence on the IBM PC is described in two articles by Dr. Dave Bradley: The creation of the IBM PC in Byte, Sept. 1990; and A personal history of the IBM PC, IEEE Computer, Aug 2011 (paywalled). The Wikipedia article DataMaster System/23 also provides information. 

  31. Dr. Bradley explained that the designers of the IBM PC weren't concerned with compatibility with other systems. For instance, you might expect the IBM PC to be compatible with the 3270 terminal. However, the IBM PC's keyboard had 10 function keys while the IBM 3270 terminal had 12. This incompatibility was finally fixed with the IBM PS/2 keyboard (1987). 

  32. To confirm the popularity of 80×24 terminals versus 80×25 terminals, I took a look at the GNU termcap file. I counted and found there were over 5 times as many 24-line terminals as 25-line terminals, and the 25-line terminals were mostly PC-based. 80-column terminals were over 5 times as popular as 132-column terminals, the runner-up. 

How "special register groups" invaded computer dictionaries for decades

Half a century ago, the puzzling phrase "special register groups" started showing up in definitions of "CPU", and it is still there. In this blog post, I uncover how special register groups went from an obscure feature in the Honeywell 800 mainframe to appearing in the Washington Post.

While researching old computers, I found a strange definition of "Central Processing Unit" that keeps appearing in different sources. From a book reprinted in 2017:1

"Central Processor Unit (CPU)—Part of a computer system which contains the main storage, arithmetic unit and special register groups. It performs arithmetic operations, controls instruction processing and provides timing signals."

At first glance, this definition seems okay, but a few moments' thought reveals some problems. Storage is not part of the CPU. More puzzling, what are special register groups? A CPU has registers, but "special register groups" is not a normal phrase.

It turns out that this definition has been used extensively for over half a century, even though it doesn't make sense, copied and modified from one source to another. Special register groups were a feature in the Honeywell 800 mainframe computer, introduced in 1959. Although this computer is long-forgotten2, its impact inexplicably remains in many glossaries. The Honeywell 800 allowed eight programs to run on a single processor, switching between programs after every instruction.3 To support this, each program had a "special register group" in hardware, its own separate group of 32 registers (program counter, general-purpose registers, index registers, etc.).
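As a loose sketch (not the machine's actual logic), the Honeywell 800's scheme resembles round-robin scheduling with one hardware register set per program:

```python
# Loose illustration of the Honeywell 800's multiprogramming: each of up
# to eight active programs has its own "special register group" (holding
# its program counter and other registers), and the processor rotates to
# the next active program after every single instruction.
class SpecialRegisterGroup:
    def __init__(self):
        self.program_counter = 0
        self.registers = [0] * 31   # index/general registers, simplified

def run(programs, rounds):
    """programs: list of instruction lists; execute round-robin,
    one instruction per program per turn, for the given rounds."""
    groups = [SpecialRegisterGroup() for _ in programs]
    trace = []
    for _ in range(rounds):
        for pid, (prog, grp) in enumerate(zip(programs, groups)):
            if grp.program_counter < len(prog):
                trace.append((pid, prog[grp.program_counter]))
                grp.program_counter += 1
    return trace

trace = run([["A1", "A2"], ["B1", "B2"]], rounds=2)
# Instructions interleave one at a time: A1, B1, A2, B2
```

Because each program's state lives in its own register group, this switch costs nothing: the hardware simply selects a different group, with no registers to save or restore.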

Honeywell 800 computer. The Central Processing Unit (containing special register groups) is in cabinets 6 feet high and 18 feet wide along the wall. The card reader and printer are in the center of the room. A typical system rented for $25,000 a month. Photo from BRL report, 1961 courtesy of Ed Thelen.

Another important thing to note about that era is that the central processing unit was a large physical box, also known as the "main frame". (A mainframe was not yet a type of computer.) Thus, given the characteristics of the Honeywell 800, the definition of CPU in Honeywell's glossary4 made total sense.5 Unfortunately, this definition doesn't make sense when applied to computers in general, since they lack special register groups.

Honeywell's definition of main frame: "FRAME, MAIN, (1) the central processor of the computer system. It contains the main storage, arithmetic unit and special register groups. Synonymous with (CPU) and (central processing unit). (2) All that portion of a computer exclusive of the input, output, peripheral and in some instances, storage units."

This definition apparently started with the US Department of Agriculture's Glossary of ADP Terminology (1960): "MAIN FRAME - The central processor of the computer system. It contains the main memory, arithmetic unit and special register groups". The definition then spread through the government. The Bureau of the Budget published the Automatic Data Processing Glossary in 1962 "for use as an authoritative reference by all officials and employees of the executive branch of the Government" with the definition below. The Air Force's 1966 Guide for Auditing Automatic Data Processing Systems used a similar definition as did the 1966 Navy Training Course for Machine Accountant and 1968 Air Force manual Communications-Electronics Terminology.

Bureau of the Budget's 1962 definition: "frame, main, (1) the central processor of the computer system. It contains the main storage, arithmetic unit and special register groups. Synonymous with CPU and central processing unit. (2) All that portion of a computer exclusive of the input, output, peripheral and in some instances, storage units."

From there, the definition spread to dozens of books and dictionaries. "Special register groups" appeared in numerous computer glossaries such as the Glossary of Computing Terminology (1972), Computer Glossary for Medical and Health Sciences (1973), Computer Glossary for Engineers and Scientists (1973), Radio Shack's Dictionary of Electronics (1968, 1974-1975), and the Computer Graphics Glossary (1983).

Radio Shack's New 1974-1975 Dictionary of Electronics contains the definition: "central processing unit—Also called central processor. Part of a computer system which contains the main storage, arithmetic unit, and special register groups. Performs arithmetic operations, controls instruction processing, and provides timing signals and other housekeeping operations."

Computer manufacturers should have known that their systems don't have special register groups, but they used the definition anyway: for example, the Sphere microcomputer (1976), Texas Instruments (1978), Cray (1984), Convergent Technologies (1987), and Tektronix (1989).

This definition persisted into the microcomputer age, even though storage was now clearly not part of the CPU and "special register groups" were decades in the past. A 1983 Beginner's Computer Glossary in MICRO magazine defined "CPU — Central Processing Unit. The central processor of the computer system, which contains the main storage, arithmetic unit, and special register groups." "Special register groups" also showed up in the Microcomputer Dictionary (1981) and Understanding Microprocessors (1984).

Definitions with "special register groups" appeared in a dizzying array of books, such as Computer Technology in the Health Sciences (1974), College Typewriting (1975), Research Methods for Recreation and Leisure (1979), EPA's Design Automation Handbook for Automation of Activated Sludge Treatment Plants (1980), Patrick-Turner's Industrial Automation Dictionary (1996), Video Scrambling & Descrambling (1998), the US Department of Transportation's Computerized Signal Systems (1979), and Traffic Control System Operations (2000).

In 1981, special register groups reached national newspapers in a Washington Post glossary: "Main-frame—central processor of computer system, containing main storage, arithmetic unit and special register groups." By 2006, even the National Fire Code6 included special register groups: "Computer. A programmable electronic device that contains a central processing unit(s), main storage, an arithmetic unit, and special register groups."

Special register groups are still being taught to the next generation of students. The following quiz question is from a 2017 book that teaches computer organization and programming:7

CPU of a computer system does not contain:
(a) Main storage
(b) Arithmetic unit
(c) Special register group
(d) None of the above

Conclusion

For some reason, a 1960 definition of "central processing unit" included "special register groups", an obscure feature from the Honeywell 800 mainframe. This definition was copied and changed for decades, even though it doesn't make sense. It appears that once something appears in an authoritative glossary, people will reuse it for decades, and obsolete terms may never die out.

"Computer operators working with tape-driven Honeywell 800 mainframe computer." The operators in this photo from the 1960s are presumably taking advantage of the special register groups unique to the Honeywell 800 and 1800 computers. Photo from National Library of Medicine.

Researching this phrase also shows how the meanings of computer terms shift greatly over time. In 1960, "main frame" and "CPU" were synonyms, but since then they have moved in opposite directions: "mainframe" is now a large computer system, while the "CPU" is usually a processor chip. (I plan to write much more about this.)

I announce my latest blog posts on Twitter, so follow me @kenshirriff for future articles. I also have an RSS feed.

Notes and References

  1. Reference: Reliability Engineering for Nuclear and Other High Technology Systems, CRC Press, 2017, reprint of a book originally published in 1985. 

  2. I happen to be familiar with the Honeywell 800 and 1800 computers because I've been studying the Apollo Guidance Computer extensively. (The Honeywell 1800 was an improved version of the Honeywell 800.) The Honeywell 1800 was used to assemble programs for the Apollo Guidance Computer using an assembler called YUL. 

  3. The Honeywell 800's technique of switching programs on every instruction is rather unusual. Typical multi-tasking systems let a program run for several milliseconds before switching to another program to reduce the overhead of switching between programs. The Honeywell 800 Programmers Reference Manual explains the use of special register groups for multiprogramming. 

  4. Honeywell's Glossary of Data Processing and Communications Terms was published 1964-1966. Definitions in that book are largely based on the Bureau of the Budget's Automatic Data Processing Glossary. 

  5. The Oxford English Dictionary (1989) quoted the 1964 Honeywell definition. 

  6. Reference: NFPA's Illustrated Dictionary of Fire Service Terms, published by the National Fire Protection Association in 2006. The National Fire Codes (1995) had a somewhat similar definition, but for the CPU instead of computer: "CPU. Central processing unit of the computer system. The CPU contains the main storage, arithmetic unit, and special register groups." 

  7. Reference: Computer Architecture, 2011, published by Biyani. Page 58 contains the quiz with the CPU question. The question also appears in MCS-012: Computer Organisation and Assembly Language Programming, 2017. 

A visit to the Large Scale Systems Museum

I didn't expect to find two floors filled with vintage computers in a sleepy town outside Pittsburgh. But that's the location of the Large Scale Systems Museum, housed in a former department store. The ground floor of this private collection concentrates on mainframes and minicomputers from the 1970s to 1990s, featuring IBM, Cray, and DEC systems, along with less common computers. Amazingly, most of these vintage systems are working. Upstairs, the museum is filled with vintage home computers from the pre-PC era.

IBM

IBM set the standard for the mainframe computer with its introduction of the System/360 in 1964, a line of computers designed to support the full circle (i.e. 360°) of business and scientific applications. The System/360 evolved into the System/370 in the 1970s and the System/390 in the 1990s. Most of these mainframes filled a data center, but the museum has some smaller S/370 and S/390 mainframes designed for offices. The IBM System/370 9375 (1986; below), is described as a "baby mainframe" or "super-mini computer" for engineering or commercial applications.

IBM System/370 9375. The computer itself is in the middle rack. The left rack has a 3490E tape cartridge storage system, while the right rack holds 9335 disk controllers and disk drives (856 MB per drive).

The System/390 line is represented by the IBM System/390 Multiprise-2003 (1997; below). This mainframe could not boot up on its own, but required a special desktop PC called the Mainframe Service Element (photo) to initialize the mainframe with microcode and start it up.

This low-end IBM System/390 Multiprise-2003 had 1 GB of memory and supported hundreds of simultaneous database transactions.

To support smaller customers, IBM also produced minicomputers, which they called "midrange systems". The IBM System/32 (1975; below) is a minicomputer built into a desk, designed for small businesses. IBM's midrange systems evolved into the IBM AS/400 (1992; photo).

This IBM System/32 had 16 KB of memory and 13 MB of disk storage. It leased for $1200 per month.

The museum has many disk drives and tape drives. One example is the massive 3380E disk drive (below; 1985), providing 5 gigabytes of storage. It's amazing to think that you can now hold a thousand times as much storage in your hand.

The IBM 3380E disk system stored 5 gigabytes of data. The 14-inch disk platter is in the center, labeled "E".

Cray

Computer designer Seymour Cray and his company Cray Research were famed for building the world's fastest supercomputers. The museum has several Cray computers from the 1990s. The Cray YMP-EL supercomputer (1992; below) was an "Entry Level" Cray, costing $300,000. It was built from CMOS chips rather than the fast but hot ECL chips in earlier Crays, allowing it to be air-cooled rather than Freon-cooled. The museum also has the related, low-end Cray EL-94 (1992; photo), packaged in an ugly box.

The Cray YMP-EL supercomputer.

The Cray J90 (1996; below) was a popular low-end Cray, an evolution of the Y-MP EL. This one holds 1 GB of memory and cost $300,000.

Cray J90 supercomputer.

The Cray SV1 (1999; below) followed the J90. It introduced more high-performance features such as a vector cache and multi-streaming. This one has 16 processors and 16 GB of memory, and cost about $1 million.

The Cray SV1 supercomputer.

Digital Equipment Corporation (DEC)

Dave McGuire, curator of the large systems, in front of PDP "Straight 8" minicomputers.

Digital Equipment Corporation was founded in 1957 and became the second-largest computer manufacturer, concentrating on minicomputers. DEC's PDP-8 was a very popular 12-bit minicomputer that essentially created the "minicomputer" category of computers. The first PDP-8 was the Straight-8 (1966; photos above and below), a compact all-transistor computer built from circuit cards plugged into a wire-wrapped backplane.

The "Straight 8" PDP-8 was built from transistorized circuits on small cards.

The PDP-8/E (1969; below) used integrated circuits (7400-series TTL) in place of discrete transistors as did the compact and cheaper PDP-8/A (1974; photo).

PDP-8/E minicomputer. The paper tape reader is at the top, above the front panel. An RK05 DECpack is at the bottom, storing 2.4 megabytes on a removable disk pack.

DEC started producing mainframes in 1966 with the PDP-10, a 36-bit computer that popularized time-sharing. The museum has a DECsystem-2020 (1978), the smallest member of the PDP-10 family.

A DECsystem-2020 mainframe next to an RM02 disk drive. The drive's removable disk packs each store 67 megabytes.

In 1970, DEC introduced the 16-bit PDP-11, which became the most popular minicomputer with about 600,000 sold. The museum has many different PDP-11 models including the PDP-11/05 (1972; photo, console), the fast PDP-11/50 (1972; below, photo), the compact and popular PDP-11/34 (1976; photo), and the PDP-11/44 (1981; photo).

Console of the PDP-11/50 minicomputer.

DEC's PDP-11 evolved into the VAX line of 32-bit computers. Larger and more powerful than earlier minicomputers, these systems were known as superminicomputers. The VAX-11/780 (1978; below) was the first member of the VAX family and was implemented with TTL chips. The museum has a VAX-11/750 (1980), the cheap single-cabinet VAX-11/730 (1982; photo), the powerful VAX-6000 (1991; photo), and the top-of-the-line VAX-7000 (1992; photo). The VAXstation 4000 Model 90 (1991; photo) was a workstation implementing the VAX instruction set.

The VAX 11/780 "superminicomputer".

DEC struggled in the 1990s as the market shifted to personal computers. It was acquired in 1998 by personal computer manufacturer Compaq, which was in turn acquired by Hewlett-Packard in 2002.

Other systems

The museum has systems from many other companies such as Varian, Control Data, Wang, Panasonic, Silicon Graphics, Singer, and Tektronix, but I'll just touch on some highlights.

Data General was a major producer of minicomputers, third behind DEC and IBM. The Data General Eclipse was the successor to the popular Data General Nova 16-bit minicomputer. It is represented in the museum by the Eclipse S/280 (1975; below) and Eclipse S/120 (1982; photo). Data General moved into the microcomputer market with the microNOVA (1977; photo), but it wasn't commercially successful.

Data General Eclipse S/280 minicomputer.

In the late 1970s, Hewlett-Packard was the fourth-largest producer of minicomputers. The HP 2116B minicomputer (1968; photo) was part of the HP 1000 (photo) family of 16-bit minicomputers designed for instrument control and automation. The HP 2645A terminal (below) was part of HP's line of terminals.

HP 2645A terminal

Another interesting terminal is the Friden Flexowriter from the early 1960s (below). It has a paper tape reader and punch on the left. Flexowriters were often used as console terminals for computers.

Friden Flexowriter

The Burroughs B80 (1978; below) is a multi-user office minicomputer. It has a dot-matrix printer above the keyboard. The computer on display was used by a funeral home, and has a paper product list taped above the keyboard with products such as "Tranquility urn", "Open/Close grave", and "Move dirt more than 25 miles".

The Burroughs B80 office minicomputer.

The collection also includes analog computers, such as the Heathkit H-1 (1950s), which used vacuum tube amplifiers and represented values by signals from -100 to 100 volts. It could be programmed to solve differential equations by wiring the patch board. The museum also has a Comdyna GP-6 (photo), a more modern transistorized analog computer from the late 1960s.

A Heathkit H1 analog computer. Vacuum tubes are on top, the plugboard is in the middle, and potentiometer controls are in the front.

Microcomputers in the Large Scale Integration Museum

Upstairs is the "Large Scale Integration Museum", a large collection of microcomputers of the 1970s and 1980s. The collection focuses on microcomputers from before the IBM PC and x86 processors. Since I'm more interested in the larger computers, I'll discuss this collection briefly, but I don't want to downplay its impressive scope.

Corey Little, curator of the microcomputer collection, in front of Imsai, ASR-33 teletype, Kenbek-1 replica, and Altair.

The first commercial microprocessor was Intel's 4-bit 4004, introduced in 1971. The Intel Intellec 4/40 development system (below) used the 4040 microprocessor (1974), an improved version of the 4004. This system was intended for engineers to develop software for embedded systems using the 4040 chip.

Intel Intellec 4/40 development system. An EPROM socket below the key allowed software to be burned into EPROM chips.

The microcomputer revolution took off when Intel released the 8-bit 8080 microprocessor in 1974, leading to the first commercially successful personal computer, the MITS Altair 8800 kit (1975). In addition to the Altair 8800, the museum has the updated Altair 8800b and the more obscure Altair 680, which uses the Motorola 6800 microprocessor.

Altair 8800 (with the famous manifesto Computer Lib on top), Altair 680, Altair 8800b, and disk drive for Altair.

Single-board computers also helped popularize microprocessors. Companies produced development kits for engineers to experiment with new microprocessors and hobbyists often used them due to their low cost. The museum has several racks of these development boards; the rack below includes the Intel SDK-85 System Design Kit for the 8085 microprocessor, Artisan Electronics Model 85 microcalculator (a single-board scientific calculator that could be interfaced to a microcomputer), Rockwell's 6502-based AIM-65, Synertek's 6502-based SYM-1, and Transputer parallel processor boards.

A variety of development boards and single-board computers.

By the late 1970s, microcomputers became mass-market products, with the introduction of home computers that were more affordable and usable by the general public. The museum has many other popular home computers from manufacturers such as Atari, Sinclair, Radio Shack, Heathkit, and Texas Instruments. The photo below shows part of the Commodore collection.

The Commodore collection includes calculators, Commodore Super PET, Educator 64, PET 4032, and PET 2001

Early portable computers were suitcase-sized and often called luggables. The museum has a large collection including the IBM 5100 (1975; below), Osborne One (1981), Osborne Executive, Osborne Vixen, and Kaypro II, as well as more obscure machines such as the Telcon Zorba and General Electric Workmaster.

The IBM 5100 portable computer was introduced in 1975, six years before the IBM PC. Its keyboard has special characters for the APL language.

Apple is represented by a variety of Apple II, Apple III, Lisa, and Macintosh systems. The collection also includes a NeXTcube, the workstation developed by Steve Jobs in the 1980s after he was forced out of Apple. Steve Jobs returned to Apple when Apple purchased NeXT in 1997, leading to Apple's dramatic rise. The NeXTcube's operating system led to Apple's current macOS and iOS operating systems.

The NeXTcube workstation was packaged in a 1-foot magnesium cube.

The museum has various toys and educational devices that were produced to explain computers, including the CALCULO Analog Computer (1959), the Minivac 6010 (1962) created by Claude Shannon, the father of information theory, the Radio Shack Science Fair Digital Computer Kit (1977), and the Digi-Comp 1 (1963).

The collection includes toy computers such as the CALCULO Analog Computer, MINIVAC 6010, Radio Shack ScienceFair Digital Computer, and Digi-Comp 1.

Heathkit introduced the HERO-1 kit robot in 1982, providing a way for hobbyists to experiment with robotics. Nowadays, Arduinos and cheap servos and stepper motors make it easy to build a simple robot, but in 1982, robotics was much more difficult. The HERO-1 kit cost $1500 (equivalent to about $4000 today).

Three Heathkit HERO robots. The HERO 2000 (1986, left) included multiple processors and speech synthesis, while the older HERO-1 robots have a single 6808 processor. The "eyes" are an ultrasonic distance sensor.

Conclusion

The Large Scale Systems Museum contains a remarkable collection of large computer systems and microcomputers from the 1970s to 1990s. The museum, hidden behind a storefront on a quiet small-town main street, illustrates an interesting period in computer history. During this time, mainframes, minicomputers, and supercomputers reached their peak and then went into steep decline. Meanwhile, the microprocessor passed through the hobbyist phase and the home computer phase before achieving its dominance. Amazingly, most of the systems at the museum are up and running, giving the visitor a feel for the computers of that era.

The museum is open by appointment only; details are here and on their Facebook page. If you ever find yourself near New Kensington, PA (half an hour outside Pittsburgh), get in touch with them. I've only presented the highlights of the museum; more photos are here. I announce my latest blog posts on Twitter, so follow me @kenshirriff for future articles. I also have an RSS feed.