(Originally written 2021-01-17)
The term RISC was first coined in the early 1980s. RISC architectures were a reaction to the ever-increasing complexity in architecture design (epitomized by the DEC VAX-11 architecture). RISC rapidly became the darling architecture of academia, and almost every popular computer architecture textbook since that period has trumpeted that design philosophy. Those textbooks (and numerous scholarly and professional papers and articles) claimed that RISC would quickly supplant the “CISC” architectures of that era, offering faster and lower-cost computer systems. A funny thing happened, though: the x86 architecture rose to the top of the performance pile and (until recently) refused to give up the performance throne. Could those academic researchers have been wrong?
Before addressing this issue, it is appropriate to first define what the acronyms “RISC” and “CISC” really mean. Most (technical) people know that these acronyms stand for “Reduced Instruction Set Computer” and “Complex Instruction Set Computer” (respectively). However, these terms are slightly ambiguous and this confuses many people.
Back when RISC designs first started appearing, RISC architectures were relatively immature and the designs (especially those coming out of academia) were rather simplistic. The early RISC instruction sets, therefore, were rather small (it was rare to have integer division instructions, much less floating-point instructions, in these early RISC machines). As a result, people began to interpret RISC to mean that the CPU had a small instruction set; that is, the instruction set was reduced. I denote this interpretation as “Reduced (Instruction Set) Computer”, with the parentheses eliminating the ambiguity in the phrase. However, the reality is that RISC actually means “(Reduced Instruction) Set Computer”: it is the individual instructions that are simplified rather than the whole instruction set. In a similar vein, CISC actually means “(Complex Instruction) Set Computer”, not “Complex (Instruction Set) Computer” (although the latter is often true as well).
The core concept behind RISC is that each instruction doesn’t do very much. This makes it far easier to implement that instruction in hardware. A direct result of this is that the hardware runs much faster, because fewer gates are needed to decode the instruction and act on its semantics. Here are the core tenets of a RISC architecture:
A load/store architecture (software accesses data memory only through load and store instructions; see the sketch after this list)
A large register bank (with most computations taking place in registers)
Fixed-length instructions/opcodes (typically 32 bits)
One-instruction-per-cycle (or better) execution times (that is, instructions cannot be so complex that they require multiple clock cycles to execute)
Reliance on compilers to handle optimization, so there is no worry about the machine code being difficult to write by hand.
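To make the first two tenets concrete, here is a minimal sketch in C of a hypothetical load/store machine (the register-file and memory sizes are arbitrary, and the “instructions” are just C functions, not any real ISA). The point is that a memory-to-memory addition, which a CISC machine might encode as a single instruction with memory operands, must be spelled out as load, load, add, store, with all arithmetic confined to the register bank.

```c
/* Toy model of a load/store machine.  Purely illustrative; the sizes
 * and "instructions" are made up for this sketch, not any real ISA. */
#include <stdint.h>
#include <stdio.h>

static uint32_t reg[32];    /* large register bank (second tenet) */
static uint32_t mem[256];   /* data memory */

/* The only two "instructions" allowed to touch memory (first tenet). */
static void load(int rd, uint32_t addr)  { reg[rd] = mem[addr]; }
static void store(int rs, uint32_t addr) { mem[addr] = reg[rs]; }

/* Arithmetic operates on registers only. */
static void add(int rd, int rs1, int rs2) { reg[rd] = reg[rs1] + reg[rs2]; }

int main(void)
{
    mem[0] = 2; mem[1] = 3;

    /* A CISC machine might allow something like "add mem[2], mem[0], mem[1]"
     * as one instruction.  A load/store machine must spell it out: */
    load(1, 0);     /* r1 <- mem[0] */
    load(2, 1);     /* r2 <- mem[1] */
    add(3, 1, 2);   /* r3 <- r1 + r2 */
    store(3, 2);    /* mem[2] <- r3 */

    printf("mem[2] = %u\n", mem[2]);    /* prints 5 */
    return 0;
}
```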
Early RISC designs (and, for the most part, the new RISC-V design) stuck to these rules quite well. The problem with RISC designs, just as with CISC before them, is that as time passed the designers found new instructions that they wanted to add to the instruction set. The fixed-size (and very limited) RISC instruction encodings worked against them. Today’s most popular RISC CPU (the ARM) has suffered greatly from the kludges needed to handle modern software (this was especially apparent with the transition from 32 bits to 64). Just as the relatively well-designed PDP-11 architecture begat the VAX-11, and just as the relatively straightforward 8086 begat the 80386 (and then the x86-64), kludges to the ARM instruction set architecture have produced some very non-RISC-like changes. Sometimes I wonder whether those 1980s researchers would view today’s ARM architecture with the same disdain they had for the CISC architectures of yesterday. This is, perhaps, the main reason the RISC-V architecture has given up on the fixed-instruction-length encoding tenet: a fixed length makes it impossible to cleanly “future-proof” the CPU.
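For the curious, RISC-V signals each instruction’s length in the low-order bits of its first 16-bit parcel, which is how one instruction stream can mix 16-bit compressed instructions with 32-bit (and, in reserved formats, longer) ones. The C sketch below is my own paraphrase of that scheme from memory, not code from any RISC-V toolchain; double-check the specification before trusting the longer formats.

```c
/* Sketch: determine a RISC-V instruction's length (in bits) from the
 * first 16-bit parcel alone.  My reading of the base encoding scheme;
 * verify against the spec, especially for the 48/64-bit formats. */
#include <stdint.h>
#include <stdio.h>

static int insn_length_bits(uint16_t first_parcel)
{
    if ((first_parcel & 0x3) != 0x3)     /* low two bits not 11 -> compressed */
        return 16;
    if ((first_parcel & 0x1c) != 0x1c)   /* bits [4:2] not all ones -> standard */
        return 32;
    if ((first_parcel & 0x3f) == 0x1f)   /* 011111 pattern, 48-bit (if I recall) */
        return 48;
    if ((first_parcel & 0x7f) == 0x3f)   /* 0111111 pattern, 64-bit (if I recall) */
        return 64;
    return -1;                           /* longer/reserved encodings */
}

int main(void)
{
    /* If I've encoded these correctly: 0x4501 is c.li a0, 0 (16-bit
     * compressed) and 0x00000513 is addi a0, x0, 0 (32-bit). */
    printf("%d\n", insn_length_bits(0x4501));   /* 16 */
    printf("%d\n", insn_length_bits(0x0513));   /* 32 (low parcel of the addi) */
    return 0;
}
```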
The original premise of RISC is that, when enough time passes and you need something better than the 30-year-old design you’ve been using (i.e., the ARM), you design a new, clean architecture (like the RISC-V). Of course, the big problem with starting over (which is why the x86 has been king for so long) is that all that old, wonderful software won’t run on the new CPUs. For all its advantages, it’s unlikely you’ll see many designs flocking to the RISC-V CPU anytime soon; there’s just no software for it. Today, RISC-V mainly finds use in embedded projects where the engineers are writing all the software for their device; they don’t depend on thousands or millions of “apps” out there for the success of their product.
When RISC CPUs first became popular, they actually didn’t outperform the higher-end CISC machines of the day. It was always about the promise of what RISC CPUs could do as the technology matured. Those VAX-11 machines (and the Motorola 680x0 and National Semiconductor 32000 series) still outperformed the early RISC machines. FWIW, the 80x86 family *was* slower at the time; it wasn’t until the late 1980s and early 1990s that Intel captured the performance crown; in particular, the Alpha and SPARC CPUs were tough competitors for a while. However, once the x86 started outrunning the RISC machines (a position it held until some of the very latest Apple Silicon SoCs came along), there was no looking back. RISC, of course, made its mark in two areas where Intel’s CISC technology just couldn’t compete very well: power and price. The explosion in mobile computing gave RISC the inroads to succeed where the x86 was a non-starter (all that extra complexity costs money and watts, the poison pill for mobile systems). Today, of course, RISC owns the mobile market.
In the 1980s and 1990s, there was a big war in the technical press between believers in CISC and RISC. All the way through the 2000s (and even the 2010s), Intel’s prowess kept the RISC adherents at bay. They could claim that the x86 was a giant kludge and its architecture was a mess. However, Intel kept eating their lunch, producing faster (if sometimes frighteningly expensive) CPUs.
Unfortunately, Intel seems to have lost their magic touch in the late 2010s and early 2020s. For whatever reason, they have been unable to build devices using the latest processes (3 to 5 nm, as I write this), and other semiconductor manufacturers (who build RISC machines, specifically ARM) have taken the opportunity to zoom past the x86 in performance. Intel’s inability to improve their die manufacturing processes likely has nothing to do with the RISC vs. CISC debate, but this hiccough on their part may be the final nail in the coffin for the x86’s performance title and is likely to settle the debate once and for all.