33

One of the big turning points in the history of the industry was IBM choosing the Intel 8088 over the Motorola 68000. Given that most people outside IBM considered the 68000 preferable, there has been much speculation about the reasons for the decision, with candidates ranging from cost, to pressure from a mainframe division worried about the PC being too capable, to the Motorola support chips not being ready in time.

I just came across a very interesting paragraph in this Techspot article about the PC's history.

IBM's original plan had been to design the personal computer around Motorola's 6800 processor at its Austin, Texas research center. IBM marketing had arranged for the PC to be sold through the stores of Sears, Roebuck & Co., and the deal teetered in the balance as Motorola's 6800 along with its support chips slipped in schedule.

A contingency plan named Project Chess was set up to run concurrently with the Austin design...

Obviously 6800 is a typo for 68000; let's take that as read and look at the claim being made.

The author is not only subscribing to the 'Motorola support chips not ready in time' explanation, but also claiming that IBM had gone so far as to choose the 68000 before the schedule issue scuppered that original choice.

Is that correct? Are there any historical documents that can confirm or refute the claim?

Raffzahn
rwallace
  • As you probably know, there was a chip named the M6800, but it was certainly too underpowered to have been the CPU of the IBM PC. – Davislor Nov 14 '20 at 05:24
  • I haven't ever seen that story in a reliable source, but here is one comment from a person who says he heard it from someone at Motorola, and who shares contact information. – Davislor Nov 14 '20 at 05:31
  • Note that there are some major differences between that version and the one you cite. (Which gives no source, and gets a major, obvious detail wrong at least twice.) In particular, the comment never claims that that was "the original plan." – Davislor Nov 14 '20 at 07:44
  • "Obviously 6800 is a typo for 68000": sorry, that's an unsafe assumption unless you can come up with a robust source refuting the text you've quoted. – Mark Morgan Lloyd Nov 14 '20 at 17:15
  • @MarkMorganLloyd It seems a fairly safe assumption from the context that the chips were "slipping in schedule" - at the time, the 6800 had been in production for five years or so. – Alex Taylor Nov 14 '20 at 18:01
  • It might, or it might not. In any case, it's behaviour that is very much frowned upon in debate or argument, and I've seen it cause a great deal of trouble. Now I certainly agree that it is generally understood that the 68k was a contender, but using that particular bit of text in isolation is not good evidence for it. – Mark Morgan Lloyd Nov 14 '20 at 18:14
  • @MarkMorganLloyd You mean besides the fact that a design goal for the PC was to use a 16-bit CPU, which the 6800 is not? And that, at the time the IBM PC was conceived, the 6800 was already way outdated - comparable today to someone using a 486 for a new, non-retro design. If a Motorola CPU other than the 68000 at all, it would have been the 6809, which, by the way, was a fine one and was considered for the IBM PC as well as for Apple's Mac. So yes, the mention of a 6800 is at least a typo, but more likely simply false. – Raffzahn Nov 15 '20 at 20:57
  • @Raffzahn Broadly agreed, but it's still bad form to use a corrected version of a questionable statement as the centre of an argument. After all, it might be that the author of that piece (as well as any editor, if it was "professional") was sufficiently ignorant that he did not know the difference between 6800 and 68000... in which case should /anything/ he writes be trusted? – Mark Morgan Lloyd Nov 15 '20 at 21:07
  • @MarkMorganLloyd Agreed. Then again, it's the piece the OP found, and what initiated his question. I would believe he knows the history well enough to decide that the 6800 could not have been meant. Reading the whole article makes me believe it was written by a journalist, not a historian, who invested much time piecing it together but may not have had in-depth knowledge of all aspects. I don't think one detected irregularity invalidates the article per se, and even less the question asked. Do you? – Raffzahn Nov 15 '20 at 22:12

5 Answers

43

Several CPUs were considered, essentially all 16-bit CPUs of the time:

  • TI's 9900,
  • Motorola's 68000,
  • Zilog's Z8000 and
  • Intel 8086/88

This IEEE Spectrum article sheds some light on the development, at least for 68k, 9900 and 8088 (*1,*2).

In the end, it came down to a combination of factors:

  • TI's 9900 was single-sourced, and IBM didn't want a lock-in.
  • Motorola's 68000 was simply not ready at the time, and likewise without a second source (at the time).
  • Zilog's Z8000 could have made it, except that Zilog was at that time owned by Exxon, which was investing an extreme amount of money in what it perceived as a future without oil, creating Exxon Office Systems in direct competition with IBM in general and with the target market of the PC in particular. So any Zilog CPU was politically off limits.

Intel's 8086/88 had none of these problems:

  • It was ready to be used.
  • It had plenty of second source availability.
  • There were no political reasons to not choose Intel.

In addition, several 8080/85-based designs had been made at IBM for some time, so the upgrade path to the 8088 was also a natural one.


*1 - Although the author seems to be a bit biased against his own creation :)

*2 - In addition, the 15-bit address space (32 KiWords) wasn't an issue, as use of a 74LS610 mapper was standard for 9900 systems, allowing virtual memory and up to 24-bit addressing.

chicks
Raffzahn
  • That's a great article. Here's an alternate list of reasons why the 68000 was not chosen: https://yarchive.net/comp/ibm_pc_8088.html – snips-n-snails Nov 14 '20 at 21:58
  • "Nobody Gets Fired For Buying IBM" - Indeed, IBM seemed to make wise decisions at that time. – scrØllbær Nov 15 '20 at 00:10
  • @scrollbear - I knew of one CTO at a former employer who was shown the door when the board of directors found out just how much his "Buy IBM" policy was costing the company. He PS/2'd it, and they PS'd on him! – Bob Jarvis - Слава Україні Nov 16 '20 at 04:32
  • The 8088 didn't really have plenty of second sources. In fact, AMD got into the x86-compatible market because IBM insisted that Intel find a second source; AMD ended up being chosen, and got extremely favorable terms. – Jerry Coffin Nov 17 '20 at 05:47
  • @JerryCoffin Siemens Bauelemente (today Infineon), for example, was a second source from the very first day, and so were others. Intel was quite into licensing since 8080 times. – Raffzahn Nov 17 '20 at 10:56
  • And yet IBM found it inadequate, and basically demanded that another source be added. – Jerry Coffin Nov 18 '20 at 02:06
  • I very much scratched my head about an Intel chip being 'very plenty 2nd sourced' as well. The clarifying comment down here might benefit quite a bit from being moved into the answer (preferably with expansion). Also: does "political" here really mean 'not connected to a business competitor in the market'? (With 'political' I'd at first associated something like 'communist chip constructors'.) – LаngLаngС Nov 19 '20 at 02:40
  • @LаngLаngС AMD, TI, National, Signetics, Siemens, Oki, NEC, Toshiba and more licensed the 8080/85, and similarly for the 8086/88 and so on. Up to and including the 486, Intel was very much into licensing their chips. Today's not licensing is the rather remarkable point here. And yes, large companies, like IBM, do work much like communist states :)) – Raffzahn Nov 19 '20 at 10:46
  • I question the Intel/Exxon statement, which was claimed by Faggin. There were other problems with the Z8000 that seem much more likely to be the issue. The main one would be the less flexible memory model; the 8088 had four 64k visible regions in a 40-pin DIP, the Z8000 only one segment in a 48-pin DIP. – Maury Markowitz Nov 19 '20 at 18:50
  • @MauryMarkowitz To me it sounds quite plausible. Also, where does the idea of 'only one segment' originate? The segmented version (which was the 48-pin) has two designated 32-bit registers (stack in R14'/R14 and PC in R15'/R15), while all other even registers (RR2..RR12) can hold any arbitrary segment (as a 32-bit address). That's 6+2 segment registers, isn't it? I'd consider that way more flexible than 2+2 (2 fixed for CS/SS) registers, all tied to certain operations. – Raffzahn Nov 19 '20 at 19:19
26

One reason that has not been mentioned is that the memory and speed of the PC placed it in the ballpark of CP/M systems rather than UNIX systems (already available at the time). At that time there was a reasonably thriving market of CP/M office systems, and in spite of almost all of them running on variants of the Z80 processor, much of the application software range (including the operating system itself) would happily run on 8080 processors. At that point in time, much software was not really written in general-purpose languages but either in assembly language or something rather close to the processor.

While the 8088 was not binary compatible with the 8080, its register structure was so similar to the 8080 that assembly code could be translated mechanically into 8088 assembly without much of a performance loss (if I remember correctly, the 8086/8088 even have a few instructions that serve no purpose apart from this kind of source compatibility).
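To make the idea of mechanical translation concrete, here is a minimal sketch of a mnemonic-level 8080-to-8086 translator. The register mapping follows Intel's published 8080-to-8086 correspondence (A→AL, B→CH, C→CL, HL→BX, and so on); the handful of opcodes handled is purely illustrative and is nothing like a complete translator such as Intel's real CONV86 tool.

```python
# Toy sketch of mechanical 8080 -> 8086 source translation (illustrative
# subset only, not Intel's actual CONV86). Register names follow Intel's
# published 8080-to-8086 register correspondence.

REG_MAP = {"A": "AL", "B": "CH", "C": "CL", "D": "DH",
           "E": "DL", "H": "BH", "L": "BL",
           "M": "[BX]"}  # 8080 "M" = memory at HL; HL maps to BX

def translate(line):
    """Translate one line of 8080 assembly into 8086 assembly."""
    parts = line.split()
    op = parts[0]
    args = parts[1].split(",") if len(parts) > 1 else []
    if op == "MOV":                      # MOV dst,src -> MOV with mapped names
        return f"MOV {REG_MAP[args[0]]},{REG_MAP[args[1]]}"
    if op == "MVI":                      # MVI r,imm -> MOV r,imm
        return f"MOV {REG_MAP[args[0]]},{args[1]}"
    if op == "ADD":                      # ADD r -> ADD AL,r (accumulator implied)
        return f"ADD AL,{REG_MAP[args[0]]}"
    if op == "PUSH" and args[0] == "PSW":
        # No single 8086 equivalent: LAHF copies the 8080-style flags into
        # AH, so pushing AX pushes accumulator + flags like 8080 PUSH PSW.
        return "LAHF\nPUSH AX"
    raise ValueError(f"opcode {op} not in this toy table")

for src in ["MVI A,5", "ADD B", "MOV M,A", "PUSH PSW"]:
    print(f"{src:12} -> {translate(src)}")
```

The `PUSH PSW` case shows why LAHF/SAHF exist at all: they have little use except making this kind of translation efficient. Real translators of this kind did exist, notably Intel's CONV86 and Digital Research's XLT86.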

So this provided a good avenue for convincing existing application software vendors to come out with a version for the 8088/8086 without a lot of up-front investment, giving the platform a good start with the then-current staple of office computers and software.

The original entry points into the operating system also mimicked CP/M's BDOS entry calls, with Unix-like system calls (device-agnostic open/read/write working through file ids rather than device-specific data structures and separate calls) added soon after, partly with strange restrictions.

user19890
  • Welcome to Retrocomputing. Superb first answer. – Jean-François Fabre Nov 14 '20 at 22:42
  • "if I remember correctly, the 8086/8088 even have a few instructions that serve no purpose apart from this kind of source compatibility": two examples would be the SAHF and LAHF instructions, which were there to provide a way to efficiently translate the PUSH PSW/POP PSW 8080 instructions. – poncho Nov 15 '20 at 02:20
  • MS-DOS was originally explicitly designed to mimic CP/M, to allow development while waiting for CP/M-86 to come out. – Thorbjørn Ravn Andersen Nov 16 '20 at 13:18
9

The main factor could be the availability of an 8-bit data bus version of those CPUs: the 68008 came to market in 1982, but the 8088 in 1979.

The 8-bit version was important because peripherals (memories etc.) had only 8-bit-wide data buses in those days; a change to a full 16-bit bus would have been very expensive.

So those "crippled" versions were an important intermediate step between 8-bit and 16-bit systems. I believe the availability of the 8088 was important for IBM during the consideration phase because the rest of the IBM PC could be built from fewer and much cheaper ICs.

Martin Maly
  • See this Q&A for details on the advantages of the 8088 vs. the 8086 in the IBM PC. – Stephen Kitt Nov 14 '20 at 12:24
  • Use of 8-bit peripherals with a 16-bit CPU isn't generally a problem. A peripheral that would normally occupy four consecutive addresses at base, base+1, base+2, and base+3 would instead occupy base, base+2, base+4, and base+6, which depending upon how code had been written might make adaptation a little difficult, but in most cases programming wouldn't be any harder than for an 8-bit system if one adjusted address offsets appropriately. – supercat Nov 14 '20 at 17:16
  • The only peripheral that would have been problematic would have been a DMA controller, since feeding bytes that are stored consecutively in memory to an 8-bit device would require a means of transferring data between the top and bottom halves of the data bus without CPU involvement, and the question is how many DMA channels would need the ability to do that. – supercat Nov 14 '20 at 17:24
  • In addition to what @supercat mentions about general principles, all 16-bit CPUs of the time were designed to work seamlessly with 8-bit peripherals. The 68K, for example, featured all the signals needed to directly interface 6800 peripherals (E, VPA, VMA). – Raffzahn Nov 14 '20 at 21:09
3

The story comes, I believe, from the IBM scientific computer being developed in the late 1970s/early 1980s at IBM Hursley Research in the UK.

This was a substantial 68000-based workhorse compared with the original IBM PC.

https://en.wikipedia.org/wiki/IBM_System_9000

Mike James
  • That's potentially interesting, but what you've posted here is more of a "teaser" than an answer. In particular, you don't explain why this system being 68K would lead to the PC not being, but rather leave readers to guess or follow your link. That's not how Stack Exchange sites work; answers here need to stand on their own, and should actually answer the question. – Chris Stratton Nov 16 '20 at 17:05
2

IBM already had an extensive history of using Intel chips in its products, and had also acquired from Intel the rights to manufacture the 8086 family for its own hardware. Intel itself wanted to build its own computers.

However, due to competition from Japanese manufacturers, who were able to undercut its costs, Intel abandoned that market and focused solely on microprocessors.

TonyM
LazyReader
  • 1
    Intel was a major player all thru the 80s and 90s and does build computers up until today ... of course not exactly at the cheapnik level. – Raffzahn Nov 14 '20 at 23:59
  • 1
    @Raffzahn I never heard about intel computers. They make a lot of other parts beside the CPUs (video card, RAM, etc), they make also some software, but afaik no computers. – peterh Nov 15 '20 at 09:42
  • 1
    @peterh-ReinstateMonica: I think they left it a long time ago, but I had an Intel component catalogue in the 1980's, and they make references to computer systems from their own inside it, e.g. in the sections about 80186 and 80286. Also different electronics magazines used to have product introductions about Intel computers (always based on their own components), and I also remember an article (between 1985 and 1989) in IEEE or maybe Electronic Design, about interconnects used in multiprocessor computers, and one design of Intel was also featured. – chthon Nov 15 '20 at 12:49
  • 1
    @peterh-ReinstateMonica Intel did build boards and complete machines. In the 80s usually around their Multibus, the 'other' old bus system (S100 being hobbyist). They even ventured into supercomputing (iPSC)- except time was a bit early for their multiprocessor designs - massive CPUs still ruled. During the 90s and Naughties as well PC type boards. My first 486 (a true 50 MHz) was one of them. Most was sold as OEM so not very visible to end customers. – Raffzahn Nov 15 '20 at 13:32
  • 1
    @peterh-ReinstateMonica Intel made some very early microcomputer systems: https://en.wikipedia.org/wiki/Intellec. I did some development on an Intellec 8 in 1974/75. – John Doty Nov 15 '20 at 22:48
  • 1
    @peterh-ReinstateMonica At the time I have no idea, but nowadays (since 2013 I believe), they do sell computers: the Intel NUC. – jcaron Nov 16 '20 at 12:16