6502, Z80, 8086, 68000: the four CPUs that dominated the 1980s (posted 2022-04-19)
Although there were definitely other CPUs in use in the 1980s, the vast majority of microcomputers people had at home or at the office used either a MOS 6502 or one of its variants, a Zilog Z80, an early member of the Intel 8086 family, or a Motorola 68000. Let's have a look at those four CPUs.
What does a CPU do, anyway?
The Central Processing Unit (CPU) is the part of a computer that runs programs so the computer does something useful. For this, the CPU contains hardware (in the form of collections of transistors) that can perform various (very) basic functions, such as adding and subtracting small numbers, shifting bits, and doing logical operations such as AND and OR. A small number of "registers" allows the CPU to hold intermediate results. Some registers are used for arithmetic and logic operations (such a register is often called an "accumulator"), and other registers are used as indexes to count iterations and/or to point to memory locations where data is loaded from and stored to.
That's about it! From such humble building blocks it's possible to build extremely complex programs. Simple CPUs require a program with many steps to do things like multiply two 32-bit numbers, while later CPUs are able to do that in a single instruction.
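To make that concrete, here's a sketch (in Python for readability; the function name is mine, invented for illustration) of the shift-and-add loop that a CPU without a multiply instruction has to run in software, one step per bit of the multiplier:

```python
def mul_shift_add(a: int, b: int) -> int:
    """Multiply two unsigned integers the way a simple CPU without a
    MUL instruction must: one shift-and-add step per bit of b."""
    result = 0
    while b:
        if b & 1:          # lowest bit of b set? add the current a
            result += a
        a <<= 1            # shift a left: a doubles each iteration
        b >>= 1            # shift b right to examine the next bit
    return result

assert mul_shift_add(1234, 5678) == 1234 * 5678
```

A real 6502 routine would also have to juggle the partial results through 8-bit registers and memory, which is exactly why a 32-bit multiply took "many steps."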
The Intel 8080 and Motorola 6800
In 1971, Intel released the 4004 microprocessor. Sure, it could only power a calculator, but this was the first time a computer's CPU was entirely contained in a single integrated circuit (IC), i.e., a single chip. The 4004 was followed by the 8008 and then the 8080 in 1974, which was actually powerful enough to run CP/M, an early operating system that MS-DOS was later modeled on.
The Motorola 6800 was also released in 1974. Prominent members of both the team that designed the 8080 and the team that designed the 6800 then decided to go elsewhere and build CPUs that would compete with the ones they had designed earlier.
Some key Motorola 6800 designers moved to MOS Technology, where they created the 6501 and 6502, released in 1975. (Detailed story on Wikipedia.) The MOS 6501 was pin-compatible with the Motorola 6800 and was killed after Motorola sued MOS for patent infringement.
But the 6502 sold better anyway. The goal of the 6502 was to be cheap. So anything not strictly necessary was stripped away, leaving the 6502 "doing everything you need and nothing you don't". (Credit: Ken Rockwell talking about the Nikon FE camera of about the same era.)
The 6502 is probably as close as you can get to a true 8-bit CPU: it has an 8-bit data bus for reading from and writing to memory, and all of its registers (except the program counter) are 8 bits wide. However, the address bus is 16 bits wide, so the 6502 can address 2^16 = 65,536 bytes = 64 kB of memory. It has about 3500 transistors.
The main register is the accumulator (the A register), the only one that can perform arithmetic and logical operations. The X and Y registers are index registers, which can be used as counters in loops or as indexes for successive memory accesses.
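As an illustration, here's a small Python sketch (the memory contents and addresses are invented) of what indexed addressing amounts to: the effective address is a base address plus the X (or Y) register, so stepping the index walks through a table in memory:

```python
memory = bytearray(65536)            # the 6502's full 64 kB address space
memory[0x0200:0x0205] = b"HELLO"     # a little table somewhere in RAM

# "Absolute,X" addressing: the effective address is base + X, so an
# instruction like LDA $0200,X reads successive table entries as X counts up.
A = 0
for X in range(5):
    A = memory[0x0200 + X]           # roughly: LDA $0200,X
assert A == ord("O")                 # the last byte loaded was 'O'
```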
Compared to other CPUs of the era, the 6502 is clocked at a modest 1 or 2 MHz. However, the 6502 wastes very few clock cycles: most are used to read instructions or read/write data from/to memory. The modest number of instructions all fit in one byte (save for additional arguments/operands), allowing the 6502 to punch way above its weight for tight 8-bit code. "It's a little rocket, light and nimble." (Not sure where I heard that.) In practice, its performance was generally similar to that of a 3.5 MHz Z80.
The 6502 and its variants were used in the Apple II, the Atari 2600 console, the Atari 8-bit computers, the Commodore PET, VIC-20, C64 and C128, and the Acorn BBC Micro.
The 8080 designers who left Intel founded Zilog and released the Z80 in 1976. Rather than trying to make a cheaper alternative to their former employer's chip, the Zilog designers' goal was to make a more capable chip. And unlike the 6502 vs the 6800, the Z80 was software compatible with the 8080. All that capability came at a price: at 8500 transistors, the Z80 had more than twice the 6502's transistor count.
Like the 6502, the Z80 has an 8-bit data bus and a 16-bit address bus and is considered an 8-bit CPU. The Z80 has a single 8-bit accumulator, six additional 8-bit registers that can also be used as three 16-bit registers and two more 16-bit index registers. The Z80 is more flexible when it comes to which register can do what, and it can also do some 16-bit arithmetic, but there are also severe limitations.
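The register pairing works like this (a minimal Python sketch with invented values): two 8-bit registers such as H and L sit side by side to form the 16-bit pair HL, with H supplying the high byte:

```python
# Two 8-bit Z80 registers, e.g. H and L, viewed as the 16-bit pair HL.
H, L = 0x40, 0x00

HL = (H << 8) | L               # HL = H*256 + L
assert HL == 0x4000

# 16-bit arithmetic on the pair, then split back into the 8-bit halves:
HL = (HL + 0x1234) & 0xFFFF     # a 16-bit add wraps around at 65536
H, L = HL >> 8, HL & 0xFF
assert (H, L) == (0x52, 0x34)
```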
Z80 instructions have variable lengths between 1 and 4 bytes (including any arguments/operands). The home computers of the 1980s generally used 4 MHz Z80s, but clocked a bit lower at 3.5 or 3.58 MHz, allowing the system to use the same crystal for the display pixel clock and the CPU clock.
The Z80 was used in most CP/M computers of the late 1970s and in 1980s home computers such as the TRS-80, ZX Spectrum, Amstrad CPC and MSX. The Commodore 128 had a Z80 as a second CPU in order to be able to run CP/M.
The 8086 (well, 8088, really)
In 1978, Intel released the 8086, a successor to the 8080. As such, the 8086 has some family resemblance to the Z80. However, the 8086 is a real 16-bit CPU with a 16-bit data bus, and even a 20-bit address bus. However, the data and address buses don't have their own pins; they're multiplexed. This saves pins on the chip package, but doesn't help performance. Instructions take one to six bytes and are prefetched where possible to speed things up.
The 8086 uses no fewer than 29,000 transistors. By my count, the 8086 has six more-or-less general-purpose 16-bit registers. Four of those can also be used as two separate 8-bit registers each, not unlike the Z80's B+C, D+E and H+L register pairs.
But… how does a CPU with 16-bit registers address 20 bits' worth of memory (= 1 MB)? This is where the fun starts: memory segmentation. A 16-bit segment register is shifted left by four bits and added to a 16-bit offset, effectively supplying the missing four bits of a full 20-bit address. This means that when working with data sets that require more than 64 kilobytes of memory, the code running on the 8086 CPU needs to keep changing a segment register. This makes the code more complex and, again, doesn't help performance.
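In concrete terms (a Python sketch; the helper name is mine, not Intel's), the physical address calculation looks like this. Note that many different segment:offset pairs name the same byte:

```python
def physical_address(segment: int, offset: int) -> int:
    """8086 real-mode addressing: the 16-bit segment shifted left 4 bits,
    plus the 16-bit offset, truncated to the 20-bit address bus."""
    return ((segment << 4) + offset) & 0xFFFFF

# Many segment:offset combinations point at the same physical byte:
assert physical_address(0x1234, 0x0005) == 0x12345
assert physical_address(0x1000, 0x2345) == 0x12345

# And sums past 1 MB wrap around to the bottom of memory:
assert physical_address(0xFFFF, 0x0010) == 0x00000
```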
The 8086 has a huge claim to fame as the first member of the x86 CPU family, which, after many improvements/transitions, is still one of the two main CPU architectures today. (The other being ARM.) However, the 8086 chip itself wasn't widely used in its day. Instead, the first IBM PC used the 8088, a version of the 8086 with an 8-bit data bus, clocked at 4.77 MHz. (Which is 4/3 of the NTSC colorburst frequency: the PC derived both the CPU clock and the CGA card's video clock from a single 14.318 MHz crystal, presumably as a cost saving.)
And again, an 8-bit data bus is not something that helps performance. Those original PCs were not very fast. But in 1982, Intel released the 80286 CPU with 134,000 transistors. It would of course take a few years before most PCs used the new CPU, and, as far as I can tell, the new "protected mode" capabilities weren't really used at the time, but the 80286 would run "real mode" 8086 code a lot faster than an 8086 or 8088. The 80386 / i386 eventually provided a real 32-bit environment, which Windows 3.0 and later started to take advantage of as of 1990.
The Motorola 68000
The Motorola 68000 was introduced in 1979. It's far more advanced than the 6502, Z80 or 8086. Although the 68000 uses 16-bit instruction words, has a 16-bit data bus and uses a 16-bit arithmetic logic unit, it has eight general-purpose registers that can be used as 8-bit, 16-bit or 32-bit registers, plus another seven 32-bit address registers. So its instruction set architecture (ISA) is 32-bit. The 68000's address bus is 24 bits wide and it has (wait for it...) about 68,000 transistors.
After looking at the arcane register and instruction set architectures of the 6502, Z80 and 8086, the 68000 is a breath of fresh air. It's clean. It makes sense. Which is not to say it's always simple. The 68000's registers can work with 8-bit bytes, 16-bit words and 32-bit long words. And you really have to be specific about how many bits you want to use at any one time, because careless mixing will give you non-working code. But as someone who learned C programming a long time ago, it all makes a lot of sense to me. And when the time came to make the jump to real 32-bit CPUs, that was just a hardware improvement: the existing software ran unchanged, only more efficiently.
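Here's a Python sketch of that size-specificity (the helper function is hypothetical, but the masking mirrors how a 68000 data register behaves): a byte-sized move into a 32-bit data register replaces only the low 8 bits and leaves the upper 24 bits alone, which is exactly why careless size mixing bites you.

```python
MASKS = {"b": 0xFF, "w": 0xFFFF, "l": 0xFFFFFFFF}

def move(reg: int, value: int, size: str) -> int:
    """A move.b/.w/.l into a 32-bit data register, 68000-style:
    only the low 8/16/32 bits are replaced, the rest stays untouched."""
    m = MASKS[size]
    return (reg & ~m & 0xFFFFFFFF) | (value & m)

d0 = 0x12345678
d0 = move(d0, 0xFF, "b")      # byte move: only the low byte changes
assert d0 == 0x123456FF
d0 = move(d0, 0x0000, "w")    # word move: the low 16 bits change
assert d0 == 0x12340000
```

If you meant to clear the whole register but only did a `.w` move, those stale upper bits are the kind of bug the "be specific about sizes" rule protects you from.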
The 68000 was limited to high end systems in the early 1980s, but became the CPU of choice outside of the x86 world around 1985, with the Apple Lisa and Macintosh using it, as well as the Amiga 1000/500/2000 models and the Atari ST. The Sinclair QL used the 68008, a 68000 with an 8-bit data bus. The Amiga 1200, 3000 and 4000 as well as later Macs used the fully 32-bit 68020, 68030 and 68040 variants.
It's hard to be objective as someone who learned assembly programming on a C64/C128 with a 6502 CPU, and then used an Amiga for more than a decade. With that out of the way, I'd say that the 6502's laser focus on low cost made a lot of sense, as did the 68000's focus on a clean and forward-looking architecture. Both these aspects are missing from the Z80 and the 8086 family. Success hides many problems, but I think a cleaner design would have been helpful here.
Eventually, every architecture runs out of steam. 80386+ CPUs would still run 8086 code, but that was basically something old bolted onto something new. The 68040 and final 68060 CPUs were efficient, but couldn't reach the high clock speeds other CPU architectures were able to attain around that time. So the Mac switched to PowerPC CPUs, and emulated 68000 code in software rather than having the new CPU run the old one's code natively like the 8086 family.
Still, to me, it's pretty obvious that Motorola's forward thinking with the 68000 family made much more sense than the fairly haphazard evolution of the x86 architecture. I hope that can be a lesson to all of us: don't let the needs of the present dictate the limitations you'll have to deal with in the future.