92

The DEC Alpha, released in 1992, appears to have been one of the earliest implementations of a fully 64-bit microprocessor architecture. Its release generated considerable marketing hype and genuine vendor support in the mid-1990s, suggesting that Alpha would be a serious challenger to Intel x86, at least in the high-end performance and enterprise markets. It even had support from Microsoft, with a native Windows NT 4.0 release.

In the late 1990s, Alpha fizzled in the face of the HP/Intel partnership pushing their Itanium 64-bit architecture. My limited understanding of the phase-out suggests that it was not a technical matter of Itanium being better than Alpha. Rather, it was Compaq's acquisition of DEC, followed by HP's acquisition of Compaq, that killed the Alpha (by selling it to Intel!).

Assuming my background understanding is correct and that the DEC Alpha represented a formidable technical challenge to x86 processors of the mid-1990s, what were the specific architectural features that made it better or "ahead of" the Pentium CPUs of the same era? And at what point afterwards did Intel introduce x86-compatible processors that matched these technical architecture features already present in the DEC Alpha of 1992?

user3840170
Brian H
  • The Alpha wasn't really (directly) competing with the x86 architecture... it was more competing with the mid-to-large minicomputers of the time (IBM's AIX, HP/UX boxes, Solaris Sparc, Sequent etc.). If you remember the fabulous AltaVista search engine (before DEC teamed up with Yahoo, and got sold to Compaq)... it was essentially created as a test project to showcase the power of a top-end Alpha box (see https://digital.com/about/altavista/). IIRC, the original box behind it had something like 6 processors and (then almost unheard of) 12GB of RAM. – TripeHound Feb 06 '20 at 14:18
  • @TripeHound, I remember 6GB of RAM when it first appeared - probably upgraded a few times during its life, though. – Toby Speight Feb 06 '20 at 18:46
  • @TobySpeight You're almost certainly right... my memory of the figure could easily be wrong, and even more easily not the original amount. Either way, it was an awful lot for the time. – TripeHound Feb 06 '20 at 19:05
  • @TripeHound - agreed; it was absolutely mind-boggling (I was pretty impressed with a 64MB 256-colour workstation at the time - I didn't get my hands on a gigabyte until nearly a decade later). – Toby Speight Feb 06 '20 at 19:11
  • @TripeHound The Alpha most definitely was competing with the x86 architecture. When it was new, a DEC salesman came to my company to pitch it. His entire talk was comparing it to existing x86 servers running NT. It was much faster, of course, so fast it ran x86 software via an emulator at competitive speeds. But it was basically trampled by x86 inertia. Even Intel's own highish-end architecture ended up being trampled by x86. – JeremyP Feb 07 '20 at 10:06
  • At one point I worked in HP with one of the engineering teams they'd inherited from DEC with the purchase of Compaq. The reason he cited for the Alpha not having succeeded was a management / marketing decision about how the Alpha was marketed... that a decision was taken to match intel clock speeds not simply go as fast as possible, the failure of the Alpha being a marketing issue rather than a technical one. – houninym Feb 07 '20 at 11:10
  • Not just NT 4.0 - all the NT 3.x releases also supported the Alpha, as did all or nearly all of the betas for Windows 2000 (Alpha support was only dropped very late in development, shortly before the final release of Windows 2000). – Vikki Feb 08 '20 at 05:01
  • I used the Alpha as part of the T3D; its biggest issue was the lack of integer divide from a scientific-computing point of view. – Ian Turton Feb 08 '20 at 20:15
  • IIRC the Alpha was shipping several years ahead of any 64-bit Intel CPU, so it was something of an apples-to-oranges comparison, at least initially. – Anthony X Feb 09 '20 at 03:55
  • As with today with ARM and MIPS and RISC-V, it is NOT a technology thing: we could make "better", lower-power, just-as-fast computers with these competing architectures, but at the time you certainly couldn't break the Wintel model even though Alpha was supported by Microsoft. We are still in a model where you can only survive if Intel lets you; they cannot compete in phones and tablets despite having had an ARM product line. They chose to not go there (x86 couldn't work in that market), and sold that ARM-compatible product line. They chose to discard the Alpha as well. – old_timer Feb 13 '20 at 12:50
  • Just look at how many months those ARM-based inexpensive laptops lasted. All this did was demonstrate there was a market there; Intel underclocked some chips (and/or took piles of product that formerly failed screening and re-screened them at lower speeds) and you got Windows in a cheaper/smaller package. Same went for the sub-$1000 computer before that. It's not a better/worse tech thing, it is a big-company dominance thing; not just Intel, but all of them choose to let you play or not. – old_timer Feb 13 '20 at 12:54
  • Here I'd like to link an article on what an Alpha system scaled to: 14 CPUs and 28GB of memory for almost $400k in 1999, for one node. This was the kind of premium people wanted to pay if 4 Xeons couldn't get the job done. https://www.cpushack.com/2022/03/26/the-dec-compaq-turbo-laser-6-alphaserver-kn7ch-processor/ – user3528438 Apr 22 '22 at 12:33
  • @user3528438 $400k was the base price for a GS140 in 1999; that would “only” get you 6 CPUs and 4GiB of RAM. Adding the remaining 8 CPUs and 20GiB of RAM would add $425k, and you’d need some other stuff to go with that (interface cards, various pieces of hardware, storage and software, oh and maintenance). – Stephen Kitt May 05 '22 at 21:37
  • Obligatory book to read: "The Innovator's Dilemma" by Christensen. Though it's a business/managerial book, it covers (among other things) the hard-disk and computer business through its generations. He explains that disruptions come from (paraphrasing greatly here) niche and good-enough products, which grow into upper segments. The PC was disruptive in this way, as it was good enough and over the years even outpaced the needs of the consumer, and could thus transition to markets where before, only minicomputers like the DECs were good enough. – hbogert Jul 20 '22 at 21:57

10 Answers

99

The Alpha team set out to create a high-performance architecture, planned to last for 25 years and allow for a 1000-fold performance increase over those 25 years. So they placed some long bets, starting with the 64-bit design (which cost performance but ensured long-term viability). It wasn’t designed to compete with x86 (which wasn’t perceived as a viable long-term architecture at the time of the 486, at least not from the point of view of RISC manufacturers), but rather to be the best possible CPU for Digital, from workstations to high-end servers. Digital did understand early on that Intel’s Pentium and later CPUs would end up competing, and tried to adjust their strategy to address that, but the competitive landscape at the time was much more complex than Alpha versus x86 (see this famous issue of PC Magazine).

Here are some features present in Alpha CPUs before competing Intel x86 CPUs (Alpha wasn’t necessarily the first architecture to implement these):

  • 64-bit architecture (64-bit ALU, registers, pointers, etc.) — 2004 in Intel x86 CPUs with EM64T (but the first 64-bit x86 CPU was AMD’s Opteron in 2003; other 64-bit-capable architectures were MIPS III in 1991, SPARCv9 in 1994, PA-RISC 2.0 in 1996, PowerPC 620 in 1997)
  • high clock rates (enabled by the typical RISC design, with a simplified register file, split register files, fixed instruction size, and very careful layout), up to 200MHz in 1992 — Intel caught up with Alpha clock rates in 1999 with the Coppermine Pentium III
  • multi-issue (superscalar) — the first Pentium was also multi-issue, but had unbalanced pipes
  • built-in multiprocessor support (albeit with a famously weak memory model) — this is difficult to compare, since Alpha and x86 have very different multiprocessing models; Intel’s CPUs supported bus sharing as early as the 8080 and locking with the 8086, multiprocessor systems using Intel CPUs have existed for a long time, and the P54C included hardware to support two-way multiprocessing in 1994
  • built-in secondary cache (starting with the 21164) — Coppermine Pentium III
  • out-of-order execution at high frequencies (starting with the 21264) — Pentium Pro
  • built-in memory controller (starting with the 21066 and 21364) — Nehalem in 2008

The instruction set was designed with many of these goals in mind, in particular high clock rates, multi-issue (the instruction set avoids instructions which typically cause dependencies), and multiprocessor support (atomic updates etc.). Intel could never replicate this with a backwards-compatible instruction set, at least not on the surface.

The amount of engineering effort which went into all the details of the architecture is easy to underestimate. For example the layout engineers used to spend ages planning simulation runs which would take days if not weeks to complete, to calculate the optimal layout for portions of the CPU (using what would be called machine learning nowadays).

At the time, Alpha systems had more of everything, even compared to competing workstations and servers, let alone PCs, in particular higher clock rates, and support for more memory (I saw the first motherboard-sized memory boards with a full gigabyte of memory in the Digital factory in Scotland), albeit for more dollars (at least compared to PCs). For some time, the fastest computers for running Windows NT x86 binaries were Alpha workstations!

Most of the Alpha niceties made it to x86, many through AMD rather than Intel (follow the engineers after Digital’s breakup). Some that didn’t make it include fixed-size instructions and PALcode.

The Alpha is extensively documented; see for example this BYTE article by one of the Alpha architects, Richard L. Sites; Paul V. Bolotoff’s Alpha: The History in Facts and Comments; and the Digital Technical Journal, in particular volume 4 number 4.

(If you want to try to get a feel for what typical PC users experienced when they spent some time on an Alpha system in the mid-90s, try to spend some time on a high-end POWER system nowadays. The price difference is also similar now to what it was then...)

Stephen Kitt
  • Good write up. I stumbled a bit about the 'multiprocessor support' point, as I always considered that part of the x86 architecture from the very beginning. – Raffzahn Feb 06 '20 at 14:58
  • Hmm, yes, arguably the LOCK prefix could be counted in Intel’s favour on this point. I’d have to refresh my memory on the details on the Alpha side! On Alpha, this covers features such as load-locked, conditional stores, and memory barriers (which Intel didn’t have for a long time). – Stephen Kitt Feb 06 '20 at 15:02
  • x86 does support a shared bus protocol (in fact, already 8080 does) and with the ability to issue a locked bus transaction (for RMW) is all that's needed to handle close coupled (memory sharing) CPU setups. A locked RMW enables semaphores to organize everything else. Additional extension may simplify handling/improve performance, or is needed for further optimization - like adding cache may benefit a lot from some kind of coherency protocol. – Raffzahn Feb 06 '20 at 15:14
  • Nice answer. I wonder if there is any elaboration to be made about the bullet "64-bit support - 2007"? It seems to me the first Pentium had a 64-bit data bus. making it quite a beast at slurping code/data into cache. – Brian H Feb 06 '20 at 15:44
  • @Brian I’m referring to the overall architecture (ALU width, register width, pointer width etc.). The 21064 had a 128-bit data bus... – Stephen Kitt Feb 06 '20 at 15:56
  • The Alpha architecture is the canonical example of a "really weak" memory model, which complicates life for the programmer on a multi-core system. See e.g. https://www.cs.umd.edu/~pugh/java/memoryModel/AlphaReordering.html – fadden Feb 06 '20 at 16:08
  • What is "multiprocessing in 8080"? It didn't have anything like the 8086's LOCK prefix. It was able to share the bus, but that ability alone isn't much of multiprocessing, and was mostly used in a DMA way. – lvd Feb 06 '20 at 19:05
  • I joined a company that was producing, and testing, the same Windows NT software on x86 and Alpha in spring 1995. At the time, the Alpha was way faster than the Pentiums we were using. The Pentium Pro was the point at which Intel looked like they could compete with the RISC processors; it did not quite catch up with the Alphas we had, but the differences were small. By the time of the Pentium III, Alpha had been overtaken, and there was no reason left for customers to use it. – John Dallman Feb 06 '20 at 20:42
  • One, uh, "feature" Alpha CPUs had was sophisticated cooling systems. I remember the Alpha system I used had foam inserts to ensure proper airflow inside the case. It would perform a vacuum test at startup to ensure the case was sealed correctly and sounded like a loud vacuum cleaner when running. I didn't see anything like it until I got to use an early Itanium prototype, although that didn't have the vacuum test. –  Feb 06 '20 at 22:46
  • Re: high clock rates on Alpha: this probably merits a mention of the speed-demons vs brainiacs debate. Speed-demons attempt to be faster via high clock frequency, while brainiacs attempt to be faster via high IPC. Alpha was frequently mentioned as the archetypical speed-demon. – ninjalj Feb 07 '20 at 00:07
  • re NT on Alpha running x86 binaries - my first "PC" ever was an Alpha (Jensen, aka DECpc AXP 150). I suppose it wasn't bad, but its contemporary, the 60MHz Pentium, seemed better to me (no data here, just 'feel'). This is the fundamental problem with new architectures for the mass market -- applications, applications, applications. – dave Feb 07 '20 at 00:29
  • @fadden: Even if an architecture uses a weak memory model, an OS could emulate a stronger model by allowing programmers to specify combinations of threads for which cache consistency is required, and ensuring that such threads aren't assigned to cores with different caches. A bigger issue I suspect is that operations on adjacent bytes can interfere with each other… – supercat Feb 07 '20 at 01:05
  • PPro had on-package L2 cache (separate die), and PII and Katmai PIII had their L2 cache on the Slot 1 daughter board, so it wasn't too horrible (but yeah, at half the core clock speed). Not like P5 where L2 cache was on the motherboard. – Peter Cordes Feb 07 '20 at 07:32
  • @ninjalj: early Alpha was a speed-demon, but 3rd gen (21264) was a big change to brainiac. http://www.lighterra.com/papers/modernmicroprocessors/ has a plot (and full background on these terms and CPU architecture in general for other readers who might need that :) – Peter Cordes Feb 07 '20 at 07:36
  • @Peter I love that plot, it’s a nice illustration of the changing trade-offs as CPU designers’ and manufacturers’ abilities and skills improved (and as they saw their competitors’ attempts succeed or fail too...). – Stephen Kitt Feb 07 '20 at 07:50
  • @JohnDallman The Pentium Pro was the point at which Intel … -- I think that was the CPU that was the basis of the DEC lawsuit for intellectual property theft! – dave Feb 07 '20 at 12:17
  • @another-dave the lawsuit covered the Pentium, Pentium Pro, and Pentium II, and led to this settlement. – Stephen Kitt Feb 07 '20 at 12:30
  • Could have mentioned somewhere that the AMD K7 and derivatives (Opteron, K8 etc.) benefited a lot from Alpha technology, as AMD recruited a lot of the DEC engineers that had built it. The K7's and Opteron's HyperTransport was the Alpha EV7's. – Patrick Schlüter Aug 26 '20 at 12:27
  • @PatrickSchlüter “Most of the Alpha niceties made it to x86, many through AMD rather than Intel (follow the engineers after Digital’s breakup).” – Stephen Kitt Aug 26 '20 at 12:28
  • @PatrickSchlüter feel free to suggest an edit ;-). My point wasn’t for you to remove your comment, only to say that it was already mentioned somewhere (albeit not in as much detail as you might like). – Stephen Kitt Aug 26 '20 at 12:36
  • Funny enough, "fixed-size instructions" is apparently key to the Apple M1 having a performance edge over x86 designs. – ssokolow May 03 '21 at 09:48
  • For FP, Alpha (and RISCs generally) had a significant advantage over 8087-based 8-register stack FP architecture. Since Alpha never adopted SIMD, x86 gained a significant advantage with SSE2 (2-wide DP execution for SIMD-friendly code and non-stack architecture). ISTR Alpha designs also had more optimized (manual/custom) physical design compared to other RISCs and coordination with owned fabs (similar to Intel); Alpha's low sales volume made both uneconomical compared to Intel (even AMD was forced to spin off its fabs). –  Jun 21 '21 at 16:25
72

Stephen Kitt has done what seems to me an excellent job of outlining features and when they were introduced. I'll take a slightly different tack, instead picking a single point in time, and pointing out differences between the two at that time.

I'm going to choose the 21164 as the Alpha to compare. It came out in January of 1995. It had a 266 MHz clock speed, and a quad-issue pipeline (i.e., could issue 4 instructions per clock). That was balanced between integer and floating point, so you could issue 2 integer instructions per clock and 2 floating point instructions per clock.

Intel's fastest processor at that time was the P54C Pentium. I believe at the time, it had a maximum clock speed of 75 MHz. It had dual pipelines, so it could issue (at best) two instructions per clock. The second pipeline was fairly restricted, and scheduling was static, so in a given clock cycle, the first instruction (almost) always went to the first pipeline, and then the second instruction issued to the second pipeline if and only if it was one of the specific instructions that the second pipeline supported.

To get an idea of performance (well, okay, my aim was a bit more selfish: to try to justify buying an Alpha workstation) I did some simulations of running Alpha code for a program I had at the time. It averaged around 1.6 instructions per clock.

I had a Pentium at the time (a 66 MHz P5). The same code running on it ran at about 1.1 instructions per clock.

The Alpha instruction set was rather simpler, so you needed to execute more instructions to carry out a particular task with it than with the Pentium. If memory serves, this was about a 2:1 difference, but varied a fair amount.

So, at least at that point in time, the Alpha was effectively about 3 times the speed of the fastest Pentium.

I feel obliged to address one more point though. You said:

In the late 1990s, Alpha fizzled in the face of the HP/Intel partnership pushing their Itanium 64-bit architecture.

In my opinion, this is basically wrong. It wasn't Alpha that fizzled. It was DEC that fizzled. Continuing from the performance comparison outlined above: my numbers were convincing enough that I eventually got permission to buy a DEC Alpha workstation, and got funds allocated for it.

So, I went through the DEC catalog, and picked out exactly the workstation I wanted. Then I contacted DEC. The first guy I talked to was very enthused right up until he heard the size of company I worked for - we were too small a company, so he couldn't sell me anything. He gave me somebody else to talk to. So I talked to them. They were very helpful until they heard what I wanted - they weren't allowed to sell that workstation.

This went on for over a month. I spent weeks calling different people at DEC, and an almost bewildering number of VARs and VADs and god only knows what else. On essentially every call, I was very clear about exactly what I wanted, and that I had funds available to buy exactly that, immediately. I also made clear that assuming this worked out, my boss and his boss were both probably going to buy similar machines soon (there was no way a mere peon like me was going to have the fastest machine in the company for very long!).

In the end, I simply had to give up. I had the money. I had permission to spend it. But no matter how hard I tried, I couldn't get DEC to take the money.

At least in my opinion, that's why the Alpha died. I don't claim to know sales particularly well, but I'm pretty sure an effective sales strategy does not include refusing to sell your product, even when you have a legitimate customer who's not only ready and willing, but in fact downright eager to buy your product.

Jerry Coffin
  • Yeah. There was a point in the late 80s when DEC, HP and Sun had great machines, a logical product range you could order out of a catalogue, and great service and support. Then they just suicided in the face of their market expanding from technical users to corporates and SMEs. – Rich Feb 06 '20 at 23:07
  • The legal mess with Intel likely helped kill Alpha. In settlement, Intel buys the Hudson fab, Intel agrees to make Alpha chips at that fab, and DEC agrees to use Itanium. I'm sure this mess made sense to someone at the time. – dave Feb 07 '20 at 00:41
  • I remember seeing ads in (the print version of) Linux Journal for Alpha workstations way back in the day, like late 90s / early 2000. So at some point someone figured out how to sell Alpha workstations to individual buyers, but probably this was a year or two after your experience, if a 66MHz P5 was top of the line. – Peter Cordes Feb 07 '20 at 09:27
  • @Peter that was probably Microway (a long-standing LJ advertiser, who sold their own range of Alpha workstations for a while). – Stephen Kitt Feb 07 '20 at 13:32
  • @PeterCordes: All those I saw advertised and easily available used the 21164PC, which was a 21164 with the S-cache removed (and, if memory serves, the I-cache enlarged a little to try to compensate). I eventually got a chance to play with one of those too, but it wasn't really the same. – Jerry Coffin Feb 10 '20 at 05:58
  • @JerryCoffin: ah. I wasn't in the market for one at the time, and was still in the kiddie pool of cpu-architecture knowledge. Hadn't realized they were cut-down CPUs. – Peter Cordes Feb 10 '20 at 06:02
  • Probably the problem was that DEC was selling so few systems... that they couldn't afford to warehouse any ahead of time, so they were only taking large orders (like at least a wafer's worth of CPUs etc.)... – cb88 Sep 24 '21 at 01:00
  • @cb88: I suppose that's possible. Doesn't seem like a great fit, given that DEC had their own fab. – Jerry Coffin Sep 24 '21 at 07:17
  • @JerryCoffin well... as we know, fabs are expensive endeavours... perhaps owning their own fab was exactly why this came to be so, and the same reason AMD ended up giving up their fab instead of letting it pull them down. – cb88 Sep 25 '21 at 16:44
15

I was at HP when the Alpha cancellation decision was made. In fact I was part of a team that ran comparative HPC benchmarks on Alpha and x86. The fact was that by 1999 the x86 Pentium II was matching the Alpha in floating-point performance. This was reported by objective groups, e.g. Dongarra et al. Unfortunately the Alpha ecosystem was 10x more expensive than the x86 ecosystem. So there was really no choice.

  • Intel came a long way with x86 performance from 1992-2000, no doubt. Competition is important. – Brian H Feb 13 '20 at 18:06
  • And by choosing x86... you only set yourselves back by about 15-20 years on the other features that you could not compare, because x86 didn't get them until that much later. – cb88 Sep 25 '21 at 17:05
14

Alpha fizzled in the face of the HP/Intel partnership pushing their Itanium 64-bit architecture

I think it's important to note that during this period, there was a widespread belief that the VLIW approach was "the next RISC". Existing RISC approaches were growing into the millions of transistors and the outright performance gap that existed in the 1990s was fading. HP looked on Itanium as the future and Alpha as the past. So did a lot of people.

So Alpha wasn't competing with x86; they were totally different markets, in the same fashion that we don't have desktops based on ARM (yet). And since Itanium was going to replace both Alpha and x86, what was the point of continuing development of Alpha?

Toby Speight
Maury Markowitz
  • Good point. Close comparison only works for closely related applications. – Raffzahn Feb 06 '20 at 14:43
  • During the middle of its life, Alpha was competing with x86, and Digital understood that quickly. – Stephen Kitt Feb 06 '20 at 14:58
  • Put another way, Alpha might have been addressing different markets compared to typical x86, but x86 was certainly targeting Alpha’s markets. – Stephen Kitt Feb 06 '20 at 15:07
  • DEC sales left room at the bottom for Windows + x86 to grow and slowly push them out of more and more of the market. Add in Microsoft's push for developers, and they steamrolled from the low end on up, pushing out competitor after competitor. I don't see ARM on the desktop anytime soon, as it doesn't have a standard like 'IBM PC Compatible' yet, making it harder to build a packaged software and hardware ecosystem like DOS/Windows has enjoyed. – Brian Feb 06 '20 at 21:51
  • I remember Andrew Orlowski illustrating the decisionary aspect by random bosses sitting on chairs they shouldn't occupy and a big lack of balls during the 2000-2001 serial buyout disasters involving DEC, Compaq, HP and whatnot (I can't remember the details) in The Register. For example: Don Capellas justifies Compaq Alphacide, Farewell then, Alpha – Hello, Compaq the Box Shifter etc. – David Tonhofer Feb 06 '20 at 22:24
  • ARM laptops are already here. Desktops are a niche. – Brian H Feb 07 '20 at 20:08
  • "we don't have desktops based on ARM (yet)" The yet part has now kicked in. – JeremyP May 04 '21 at 12:19
  • @JeremyP nonsense, there have been ARM desktops from the very beginning... in fact the very first ARM machine was a desktop. There were also ARM desktops throughout the 90s and 2000s. They weren't PCs, but they were definitely desktops. The problem has always been compatibility... and the M1 Mac also lacks that and will probably lead to Apple eliminating the hackintosh side of things... a bad thing for consumers that wish to get more powerful machines and not be at the behest of Apple. – cb88 Sep 25 '21 at 17:02
  • @cb88 What do you mean "nonsense"? You're agreeing with me that desktops with ARM processors in do exist. It's not nonsense just because you don't like Apple. – JeremyP Sep 28 '21 at 08:58
  • I disagree with "The yet part has now kicked in"... ARM is still as niche as it has ever been and has no more desktop presence than it has ever had. It has always had a low-level desktop presence... with commodity PC buses and whatnot for decades. If anything, RISC-V has more potential than ARM does in this space as it is new... and ARM has been failing at it forever. – cb88 Sep 29 '21 at 15:42
  • @cb88 And now we have the Mac Studio. Probably the most powerful desktop I'll never own :) Apple is going to make the ARM desktop a commodity – Thorbjørn Ravn Andersen Apr 22 '22 at 16:18
  • @cb88 The Acorn Archimedes A5000 (ARM 3, 25 MHz) I had in 1991 was a great desktop experience, and faster than 386s at the time. – Nick Westgate Jun 22 '22 at 03:29
  • Some irony that I'm writing this on an ARM desktop now. – Maury Markowitz Jun 23 '22 at 13:02
  • My point still stands: there is a total of 1 major manufacturer making ARM desktops (yes, there is Microsoft, but they aren't serious). x86 isn't going away by a long shot, and I am fairly confident Apple won't stick with ARM long term; why pay licensing fees for a CPU that was entirely designed in house? – cb88 Jun 23 '22 at 15:28
  • @Brian "I don't see ARM on the desktop anytime soon as it doesn't have a standard like 'IBM PC Compatible' yet" — actually the standard has existed for a long time: SBSA and SBBR. You can boot any UEFI ARM boot disk on such PCs. – phuclv Jun 14 '23 at 17:41
  • @cb88 Yes, and hard-wired memory is also anti-consumer. We have 64-core RISC-V desktop/laptop machines which are for developers, but they only run Linux/Haiku/BSD. – Justin Goldberg Jan 14 '24 at 12:48
9

One little-known fact is that what became PostgreSQL was done on Alpha workstations with 64MB of RAM. I forget the model number of the workstations, but they were small desktop machines. I was the system manager for the Postgres Research Group at UC Berkeley. We also had a couple of Alpha servers.

The Alpha hardware (and software) worked quite well. We were part of a large DEC external research project (Sequoia2000) so we got everything in essence for free. I doubt that we would have used Alphas if it weren't for that.

  • I ran PostgreSQL around 2000 on a 64MB 486/100 "server" (a beige box repainted blue IIRC). It was an excellent database back then, and so it is now. You guys and gals kicked off a dependable product, and I've used it many times over the intervening years, for various purposes. – Kuba hasn't forgotten Monica Dec 10 '22 at 21:56
9

I worked in service at DEC from the '80s to 2003. Regarding the public implementation of Alpha chips: on DEC boxes it was fine. My point of view on the Intel/Microsoft relationship: it was a deal-buster for getting 64-bit CPUs onto desktops and servers. Microsoft had had a relationship with Intel from the start and refused to jeopardize it by aligning with DEC. Wintel was doing fine, so Microsoft just ignored Alpha.

DEC wrote the code for 64-bit Windows NT to prove it worked. It didn't matter. There wasn't THAT much demand from users to affect their bottom line. Engineering OEMs sold them to high-end users, but that was it for desktop workstations. Without the mass market that DEC CEO Palmer envisioned, the numbers didn't work to scale up. The Hudson plant wasn't bringing in the revenue and fizzled. Once the DEC / Intel deal hit, THEN Microsoft "liked" 64-bit Windows. By then, the valuation of Alpha was way down... 10 yrs old?

Palmer then sold off the pieces until it was just a SERVICE company, which is what the Compaq deal was all about. They sold boxes but had no service. A merger there and Compaq had a service force and DEC was history.

DEC was great at making stuff and as long as the high-end users kept calling to buy product, things were fine. But...

Toby Speight
Bob Schmoe
  • A major problem with the Alpha, at least early versions of the architecture, was that a lot of code which was designed to run as efficiently as possible on platforms which supported byte-addressable storage and a moderately-strong memory model would not be able to run on the Alpha unless it was substantially rewritten, or else processed by an implementation that auto-inserted so many memory barriers as to negate any performance advantages the Alpha would have been able to offer. – supercat May 03 '21 at 17:14
  • @supercat - this sounds like it directly addresses the OP. Can you expand on this? Was its MMU not up to scratch? – Maury Markowitz Jun 23 '22 at 13:03
  • @MauryMarkowitz: The Alpha had no instructions to write anything smaller than a 32-bit word. To access a byte, it was necessary to read a word, modify the appropriate bits in it, and write it back. The Alpha's instruction set could do this reasonably efficiently, but only if there was no possibility of two threads running on different cores trying to write different bytes within a word simultaneously. – supercat Jun 23 '22 at 13:09

I was working at the time on porting a large system from Unix to Windows NT; we had a few Intel Windows NT machines and a DEC Alpha running Windows NT (along with Sun, DEC, IBM, and other Unix workstations).

We started out thinking Alpha was better than Pentium, as that's what the benchmarks said, but Windows NT felt slower on Alpha. When we ported the Unix software from Sun to Alpha, it ran a lot slower than expected; other processors were just better at coping with code that was not written in the best possible way.

Then we needed Internet Explorer, and it turned out that Windows NT on Alpha was not fully supported by Microsoft. The Pentium Pro Windows NT machines we got the next year were clearly faster than the Alpha (and our Sun workstations) but cost less than the maintenance contract on the Alpha.

It was still true that, with unlimited money, Alpha and Sun were faster than a top-end PC, but we could buy new PCs each year, compared to having to keep the Sun/Alpha machines for 3 to 5 years due to their high cost. The rate of improvement of PCs made buying a new machine each year a good option.

At the time, RAM was so costly that none of our customers would consider more RAM than a 32-bit CPU could address; 128 MB was a very large system.


The much higher sales of Intel PCs resulted in a faster improvement cycle for all the related hardware, along with it being much quicker and easier to get PCs repaired. When we needed a new PC we could get it from a company over the road within 2 weeks; it took longer than that just to get a price from DEC.


(Windows NT on PowerPC was much the same.)

hippietrail
Ian Ringrose

My personal experience with the Alpha AXP was when I attended a presentation in the Detroit area during the introduction of the product many years ago. It was a video presentation and it showed the president of DEC on a stage demonstrating the capabilities of the system.

He had a projection display screen that was attached to the computer, and the computer was represented by a wire-frame head on that display.

He could speak to the computer and it would respond. It demonstrated real-time voice-recognition and real-time animation of the head.

He would get it to check his e-mails, respond to them and to contact, by telephone call, one of the senders of an email.

It was very impressive. Even today, we have no common usage of this technology, which was demonstrated decades ago.

To be clear, the features I witnessed were exactly what put it so far ahead of even today's computers.

I would give a lot to get my hands on that demonstration video again.

KitchM
    My phone can do all that. – JeremyP May 04 '21 at 12:21
  • I don't believe that, because I do not know how you display that on a large monitor, have the same security and privacy measures in place, etc. – KitchM May 05 '21 at 17:11
  • My phone is an iPhone 8. It has voice recognition. It can send emails on command, phone people, and it has very impressive graphics capabilities - animating a 3D model of a head in real time would be trivial. In all likelihood its hardware is superior to that of the Alpha-based machine you saw. It has six CPU cores, three GPU cores and a neural processor. It is, of course, several generations behind the curve now. – JeremyP May 05 '21 at 19:11
  • Well, Apple is pretty good, if expensive. Now if we can only get that on our desktop, where it is more useful. – KitchM May 07 '21 at 18:49
  • @KitchM It's on the desktop too. The performance of modern PCs is astounding, but it takes insane heroics on the part of Intel and AMD to design the hardware that recompiles the code on the fly. A lot of power and die area is spent on basically translating stuff in the instruction window to RISC micro-ops and running that instead, plus the insane register rename files, complex branch prediction, etc. Executing x86 code effectively is an extremely complex technical task. If you wanted to implement x86 as efficiently as Intel, you'd need a couple billion bucks and a very good team. – Kuba hasn't forgotten Monica Dec 10 '22 at 21:54

I'm uniquely positioned to answer this question, as I briefly had a DEC Alpha while having a PC with the same software on it!

I worked at the University Surplus and mostly we'd get thousands of Gateways and Dells. But in the early-mid 2000s I ended up throwing Linux on an old DEC MIPS, a DEC Alpha, a DEC VAX*, SGI (but I put Irix back on that), HP PA-RISC and of course an old PowerPC Mac or two.

I had a 400 MHz Alpha, which at the time was probably 6 or 7 years old (it had Gentoo on there, and a bone-standard VGA adapter like a Matrox G400 or something, so X came right up), and I had Gentoo on my desktop (a 900 MHz Duron which was maybe a year old at that point). The Alpha destroyed it, with probably double the performance. And at video decoding it was absurd, like 20x the performance or so; Alphas had the "Motion Video Instructions" (MVI) extension that must have given ridiculous speedups for both MPEG-2 and MPEG-4, even compared to the MMX and 3DNow! I would have had on my Duron.

Why was it so fast? DEC planned on a 33% speed-up per generation from clock speed increases, 33% from design improvements to increase instructions per clock, and 33% from compiler improvements (which might even speed up your existing chip). They had the compiler writers and chip designers talk to each other: they followed RISC principles of design simplicity so the chip could run at very fast clock speeds for the time, while still going ahead and adding the odd instruction when the compiler team showed it would yield a nice fat speed-up. VERY effective! And hell, since I was running Linux I was using GCC; maybe the DEC compiler would have run stuff even faster.

HP (through buying Compaq, which had bought DEC) had the two fastest chips on the market, PA-RISC and Alpha, and discontinued both lines to pursue Itanium! (Alpha was also 64-bit from the start, over 10 years before the first AMD64 CPUs came out, and PA-RISC went 64-bit in the mid-1990s too, for that matter.) Alpha and PA-RISC also hit 1 GHz quite a while before any other CPU designs got there; they'd reach a given clock speed a good year and a half or two before Intel and AMD, just because the RISC design let them clock up more easily.

*A VAXstation 3100, a diskless one that they actually sold as an X terminal (it'd netboot and pull a VAX X server over the Ethernet with a not-quite-TFTP protocol that it used, but for Linux you'd netboot a kernel and ramdisk, then it'd do NFS root, just like you can do on a diskless Intel box).

I actually built GIMP on this thing; it ran "a tad slow", as the 12 MHz VAX CPU ran about even with a ~66 MHz Pentium or so. The VAX got crazy "instructions per clock" but was the epitome of CISC (Complex Instruction Set Computing); the CPU was huge, and they only ever got them up to about 50 MHz. (Edit: well, it got high "work per clock"; the instructions per clock may have been rather low, since it had plenty of complex instructions to use.)

Ignoring the extensions (of which the VAX accumulated several over the years), you could have a SINGLE VAX instruction like "add a+b, put the value in c... but for each of a, b, and c: take a first register x, add an offset y, look at the data at that address, and use THAT value as a pointer to the memory location to actually get/put the value from". So you had instructions that ran in something like 1 to 2,000 cycles.

Toby Speight
hwertz

The DEC Alpha, the first mass-produced 64-bit processor, featured a pipeline architecture that was superior for its time. Intel was contracted as a second-source manufacturer of the Alpha technology for DEC, whose own manufacturing was superior but limited. When Intel afterwards brought out the Itanium, supposedly borrowing some of Alpha's technology, DEC sued. Thereafter Intel agreed to buy DEC's Alpha technology, along with its manufacturing facility, and to pay punitive damages. DEC was being led differently by then anyway and no longer wanted to be in the manufacturing business. Thus much of today's 64-bit processor technology had its roots in Alpha. However, at the time, the best processor on the market was Sun Microsystems' SPARC—but only while running their SunOS/Solaris Unix.

Mike
  • The original poster was asking about specific architectural features of the Alpha that made it superior to x86. Could you elaborate? – Jim Nelson Feb 07 '20 at 23:51
  • DEC sued Intel over the Pentium, PPro and PII, not Itanium. – Stephen Kitt Feb 09 '20 at 14:37
  • Itanium architecture was a bit too tightly married to what could fit on a die at the time. Long-term it'd have been just slightly less expensive to keep up with better IC processes compared to x86, but it'd have still needed some heroics. – Kuba hasn't forgotten Monica Dec 10 '22 at 21:57