I was recently looking at a Motorola 68010 and 68451 that have been sitting in ESD foam on a shelf for a very, very long time. Everything is so huge in memory these days, but 4.4BSD-Lite can run a kernel with networking in only 256 KB (https://github.com/sergev/LiteBSD).

I was wondering what the simplest historical UNIX machine is that had memory management?

Thorbjørn Ravn Andersen
b degnan
  • Microsoft's Xenix 286 running on an Intel 80286-based system would be one example, available commercially ~1984; an IBM PC AT with some form of serial card would do to run multiple terminals. – Brian Dec 24 '21 at 18:50
  • Both Sun and Apollo built workstations using the 68000 and 68010 in 1981-83, with Sun being an early adopter of BSD Unix. – Brian H Dec 24 '21 at 19:14
  • Another example is the Unix-like QUNIX (later renamed QNX), which was released for the Intel 8088 in 1982. Wikipedia: “in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, graphical text editor, TCP/IP networking, web browser and web server that all fit on a bootable 1.44 MB floppy disk for the 386 PC.” – Single Malt Dec 24 '21 at 20:45
  • I guess it depends how you define Unix ☻ – mirabilos Dec 27 '21 at 15:40
  • If there is an implied question of what UNIX you could run if you were to make a computer of the chips you have today: Fuzix would be the right starting point, not ancient unices. – rackandboneman Dec 28 '21 at 07:50

4 Answers

The PDP-11/45.

Ken Thompson and Dennis Ritchie's first PDP-11 Unix system ran on an 11/20 (the first PDP-11; no MMU available in the standard pricebook) and later on an 11/45. The 11/45 was the "big, fast" follow-on machine, with 18-bit physical addresses and three CPU modes.

An 18-bit physical address can reach 256 KB; less 8 KB for the I/O page, that gives the 11/45 up to 248 KB of memory (core, MOS, or bipolar).
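
As a quick check of those numbers, here is a trivial C sketch of the arithmetic (illustrative only; nothing here is PDP-11 code):

    /* 18-bit physical addressing on the PDP-11/45: 2^18 bytes total,
       with the top 8 KB reserved for the I/O page. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long physical = 1UL << 18;   /* 262144 bytes = 256 KB */
        unsigned long io_page  = 8UL * 1024;  /* I/O page at the top   */

        printf("addressable: %lu KB\n", physical / 1024);             /* 256 */
        printf("max memory:  %lu KB\n", (physical - io_page) / 1024); /* 248 */
        return 0;
    }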

The classic Ken and Dennis photo has them in front of a pair of PDP-11s (each of the two rightmost cabinets holds a CPU + console). The machine on the left is an 11/45, the one on the right an 11/20.

[Photo: Thompson and Ritchie at a PDP-11]

Without your stipulation of an MMU, the 'simplest Unix' label would go to PDP-7 Unix, and after that to PDP-11/20 Unix.

dave
  • Note, then, that this is that machine built with 1970s tech. With ~2019-era tech: https://pdp2011.sytse.net/wordpress/the-smallest-pdp-11-ever/ - so yeah, very simple indeed. – The_Sympathizer Dec 28 '21 at 00:39
  • 'Small' and 'simple' are not the same, though I covet a PDP2011 (I'm currently running the Raspberry Pi + Simh variant of a PiDP-11). In reference to the question, though, the OP asked about 'historical' Unix machines. – dave Dec 28 '21 at 02:48
  • Indeed, but the point is that the different tech bases make the original PDP11 seem more complicated than it actually is, by making it physically large and imposing. This makes the comparison a bit more "obvious". Still not going to be a super accurate comparison, but that's the broad gist. Almost surely it could be made even tinier e.g. as a dedicated SoC then look at the wafer :) – The_Sympathizer Dec 28 '21 at 03:37
  • @The_Sympathizer - ok, now I get your point. Thanks. – dave Dec 28 '21 at 17:40

A bit hard to give a definitive answer, as the term UNIX covers not only a huge variety of systems, from early minis and microprocessors with a few KiB to multi-gigabyte 64-bit systems, but also a huge range of more or less (usually less) compatible implementations. Even more, what is to be considered part of it? Especially that last point can be the defining one for smaller systems - which all early ones are by default. Does it need to have an IP stack, or a GUI? Which shell or editor?

A basic kernel with a few helpers (getty, shell, etc.) can already run in a few dozen KiB, even supporting multiple users. In fact, the very first implementations were just that slim.

A good example of what a low-end (non-educational/research) system might be is Microsoft's XENIX. It's not a Unix-alike, but a fully licensed (*1) AT&T Unix - first based on genuine V7 sources, later upgraded to System III and System V. Microsoft sold it mostly to OEMs like Altos, Siemens or Tandy. A basic starter system might have looked like these:

  • Siemens PC-MX (~1981), an 8086-based multi-user system for up to 5 terminals (13 possible), running at 8 MHz with 256 KiB RAM (up to 1 MiB possible) and a 10 MB HD. The later NS32K-based PC-MX2 already came with 1 MiB as its minimum RAM. The system featured a special type of memory management. Siemens MX systems were the highest-selling Unix systems worldwide from the mid '80s to the early '90s.

  • Tandy Model 16 (~1982), essentially a Model II with a 68k subsystem running at 6 MHz, fitted with 256 KiB RAM and an 8 MB HD. It could serve up to 9 terminals (DT-1). In 1984/85 the Model 16 was the best-selling Unix system in the US.

  • IBM PC-XT, which got SCO XENIX in 1983, requiring a basic 4.77 MHz 8088, 256 KiB RAM and a 10 MB HD - although the manual mentions that some tools, like vi, may need at least 384 KiB to run (*2). Also, while a PC-XT could run multi-user with terminals attached, it may not have been very smooth :)) As a software package it dwarfed any other Unix sold in the US in numbers around 1986.

This may be about as low as genuine Unix went on low-end microprocessor systems - while still being successful in real-life applications.


*1 - Everything but the name.

*2 - PCjs shows nicely how Xenix felt on a 4.77 MHz 8088 with 640 KiB RAM and a 10 MB HD :)

Toby Speight
Raffzahn
  • Fundamentally, though, how do we measure "simplicity" of a computer? Gut feel says that an 8086 with 29,000 transistors is not as simple as a TTL PDP-11. (Upvoted anyway) – dave Dec 24 '21 at 21:49
  • @another-dave Oh, I fully agree. On the other hand, machines like a PDP and how they differ from today's computers are simply not understood by most people, who think of an i386 as stone age. So I tried to come up with an example that is ancient and at the same time at least remotely relatable to today's readers. – Raffzahn Dec 24 '21 at 22:58
  • Note "memory management" in the last part of the original question. Only the later 80286 had this; on 68000/68010-based systems it would require adding either a 68451 or custom MMU hardware. – Brian Dec 25 '21 at 01:17
  • @Brian Memory management does not mean virtual addressing, or using a (stock) MMU, or hardware support at all. Why should it? Without further specification of restrictions the OP wants to apply (useful or not), any kind of memory management would qualify. In fact, the 8086 does provide a good base for it thanks to its segmentation (not perfect, but 'manageable' :)) – Raffzahn Dec 25 '21 at 02:39
  • Indeed, if your "Unix" based all of the processes on the "tiny" (i.e. everything crammed into 64k) memory model, you could simply give 64k to each process on segment boundaries and they would run in relative safety from stepping on each other without any memory protection. The processes could even be swapped, since only the segment registers would need to be changed (see the sketch after this comment thread). Minix 1 ran on the XT, though I don't know how it worked, and finding Minix 1 info on the internet is difficult. – Will Hartung Dec 25 '21 at 05:10
  • @WillHartung The same can be done with multiple segments. In addition, what you seem to be thinking about is memory protection, isn't it? Granted, many (hardware) memory management systems come with memory protection as well, making it appear mandatory, but it isn't. It's an additional feature - one not necessarily needed to run a (Unix) system. – Raffzahn Dec 25 '21 at 14:33
  • @Raffzahn Of course (mind, I'm not an x86 expert) you could easily use all 4 segments. I harp on memory protection because of Unix's C history, which we all know is not "memory safe", and without some mechanism it would be too easy to corrupt the system. Not so much for applications (since they're "debugged"), but having to reboot a Unix system due to a broken routine during development would get tiring really quickly, and frustrating to anyone else logged in. So, in practice, I would think some memory protection is almost a necessity. – Will Hartung Dec 25 '21 at 17:18
  • Contrast that with something like an Alpha Micro, where the vast bulk of the development was done in BASIC, which is (or can be) memory safe, so having a system without memory protection is less important. – Will Hartung Dec 25 '21 at 17:22
  • @WillHartung All true (especially the point about languages better suited to good development :)). Then again, the above examples, Xenix for the Model 16 and PC-XT, do show that widespread Unix use was well established on machines without any memory protection scheme. [As for the segments: beside Tiny (one 64 KiB segment for everything), Small is the most practical one (one 64 KiB code and one 64 KiB data segment, including stack), as it still goes without any segment operations. In fact, the others do not make much sense for machines with just a few hundred KiB and multi-user support.] – Raffzahn Dec 25 '21 at 19:45
  • With the 8086 not having a "segment limit", you couldn't give a process access to any less than 64k of linear address space, even if the process never changed segment registers (tiny or small memory model). This still leaves room for bugs to scribble over the memory of other processes. (Although the kernel could use low linear/physical addresses for itself, so user-space could only affect other user-space, and the pagecache, if that was a thing on Xenix - I never used it.) @WillHartung Of course a malicious program, or a buggy one that did things with segment registers, could access all RAM. – Peter Cordes Dec 26 '21 at 06:49
  • You'd learn to write programs more carefully, perhaps? :-) – dave Dec 26 '21 at 14:17
  • @PeterCordes It's in fact the only real critique I have of the 8086: no segment-load trap. With that (plus a bit more, but not much), basic memory protection and user/supervisor separation could have been implemented. I guess it was not added due to the fact that the 8086 was meant as an embedded CPU. – Raffzahn Dec 26 '21 at 15:40
  • @PeterCordes We're not looking for protection from malicious users, just some modicum of protection from normal developers doing normal things. As long as you don't drop into assembly language, the C compiler is high-level enough to prevent your code from stomping on anything not in your segment, since it doesn't present access to the segment registers at all. The 64k limit is just a reality. It doesn't have to be perfect, just usable. I should be able to safely write Rogue without risking cratering the kernel or someone else because of an uninitialized variable. – Will Hartung Dec 26 '21 at 16:34
  • @WillHartung: Yes, exactly, that's why I only mentioned malicious programs as an aside at the end. Like I said, you shouldn't accidentally nuke the kernel, but you might nuke another user's shell process (or your own, or a daemon), unless the kernel had a free 64k of physical memory for your process. Also, compiled C code using far pointers could accidentally load a bad seg portion of a seg:off address and write anywhere in memory; that's why it's important to specify a "small" or "tiny" memory model so the compiler-generated asm won't include any instructions that modify segment regs. – Peter Cordes Dec 26 '21 at 18:58
  • @PeterCordes Yeah, 64k is coincidentally both the "minimum" process size and the "maximum" process size. Just the way it is. If the C compiler doesn't offer the other memory models, it doesn't become an issue. Another consequence of this is that you don't need a loader for your executables; Ye Olde ".COM" format will do just fine, no need to relocate code to suit the process space. Coherent used to have a 64K process limitation - dunno if it had a minimum size or not. It ran on the 286. – Will Hartung Dec 26 '21 at 21:14
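
To make the segment arithmetic in the comments above concrete, here is a minimal C sketch of how the 8086 forms addresses, and of why a "tiny model" process can be relocated just by reloading its segment registers. The process layout is hypothetical - it is not taken from Xenix, Minix or Coherent:

    /* 8086 real-mode address formation: linear = segment * 16 + offset. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t linear(uint16_t segment, uint16_t offset)
    {
        /* A 16-bit segment shifted left 4 bits plus a 16-bit offset
           gives a 20-bit address (wrapping at 1 MiB on a real 8086). */
        return (((uint32_t)segment << 4) + offset) & 0xFFFFFu;
    }

    int main(void)
    {
        /* Two hypothetical "tiny model" processes, 64 KiB apart.  Each
           uses only 16-bit offsets internally, so moving or swapping a
           process is just a matter of loading different segment
           registers - no code relocation needed. */
        uint16_t proc_a = 0x1000;   /* lives at linear 0x10000 */
        uint16_t proc_b = 0x2000;   /* lives at linear 0x20000 */
        uint16_t ptr    = 0x0042;   /* same in-process address */

        printf("A: %04X:%04X -> %05lX\n", proc_a, ptr, (unsigned long)linear(proc_a, ptr));
        printf("B: %04X:%04X -> %05lX\n", proc_b, ptr, (unsigned long)linear(proc_b, ptr));
        return 0;
    }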

Strictly speaking, for an OS to be called UNIX, it has to be certified as complying with the Single UNIX Specification. If we relax the requirement to also include UNIX-like systems (e.g. it took many years for Linux to be certified - and then again, only one specific distribution), then there is UZIX, a UNIX-like OS for the MSX computers (Z80 CPU), whose kernel fits in as little as 32 KB of RAM yet implements almost all of the 7th Edition AT&T kernel (e.g. it can run the complete Bourne shell), full multiuser and multitasking, and TCP/IP. It is even old enough to be retrocomputing-related again.

Then there is FUZIX, a multiplatform (from 8080, 6809 and 6502 to esp8266) OS (developed by Alan Cox!) implementing a lot of 7th Edition (V7) and SYS3-to-SYS5 functionality. It even runs on an unmodified ZX Spectrum 128. And it is a modern implementation of the original UZI, a UNIX implementation for Z80 CP/M systems, running in 64 KB of (non-banked) RAM.

b degnan
Radovan Garabík
  • re: "it has to be certified to comply with the Single UNIX Specification" -- uh, Unix as produced by Thompson and Ritchie was never certified as anything. – dave Dec 25 '21 at 12:38
  • @another-dave True. Since SUS was only defined in the mid-1990s, nearly all classic Unixes are not only non-certified, but quite possibly not even compliant. Poor AT&T :)) – Raffzahn Dec 25 '21 at 14:39
  • Nitpick: it's not that for an OS to be called UNIX it has to be certified. The certification gives you only one right: to use the trademarked word "UNIX" in your documentation and marketing material. So for an OS to be marketed and sold as UNIX, it has to be certified. Users, on the other hand, can generally call anything anything, and in fact have been known to do so and to go on flamewars about it with other users on the internet :D – slebetman Dec 26 '21 at 21:56

What is "Unix"? What is "simple"? Fairly crucial to have definitions of these in order to properly answer the original question!

But requiring an MMU leaves out all the basic-8086 solutions I'm aware of. There were Z-80 (with MMU!) systems by Morrow that ran a thing that looked and acted like Unix, and they could run CP/M and MP/M too. Does being a Z-80 make it simple? But there was a lot of extra circuitry on its CPU card, in addition to the page-mapping MMU, to let it do what needed to be done.

Is a PDP-11 really that simple? Lots of components there!

A single-chip processor like a 68010 with a simple RAM-based page mapper was actually fairly simple, in that there were relatively few components involved, but they're all quite complex when compared to a PDP's parts. We designed a Macintosh-ish board that ran a Unix: demand-paged virtual memory and a GUI in 512 KB of RAM. I'd call it simple. You might not, if component choice knocks it off your perch.
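
For illustration, here is a toy C sketch of what such a RAM-based page mapper does; the page and table sizes are hypothetical, not those of the actual board. The high bits of each address index a small RAM table of physical page frames, so remapping a process means rewriting a handful of table entries:

    /* Toy RAM-based page mapper: virtual page number indexes a frame table. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12u                 /* 4 KiB pages              */
    #define PAGE_SIZE (1u << PAGE_BITS)
    #define NPAGES    16                  /* a 64 KiB virtual space   */

    static uint32_t page_frame[NPAGES];   /* the "mapping RAM" itself */

    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpage  = (vaddr >> PAGE_BITS) % NPAGES;
        uint32_t offset = vaddr & (PAGE_SIZE - 1);
        return (page_frame[vpage] << PAGE_BITS) | offset;
    }

    int main(void)
    {
        page_frame[0] = 0x40;             /* virtual page 0 -> physical 0x40000 */
        printf("%05lX -> %06lX\n",
               (unsigned long)0x00123, (unsigned long)translate(0x00123));
        return 0;
    }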

Any later processor (x86 or 68K) with an embedded MMU might qualify; a system using one of those with a bit of RAM and a serial port ought to be pretty simple, with the same caveats.

I submit that the original question is too vague to inspire a good answer.

jimc
  • Well, I agree that the definition of simple is not exactly "simple". I especially disagree with some answers here that consider an x86 PC "simple" - it is just plain common, widespread and well-known, but given the number of chips you needed to make it run (considering the times before integrated chipsets were available), it's anything but simple. Something based on an M68020 with a few peripherals I'd consider "simple". – tofro May 22 '23 at 06:28
  • The 68030 was the first in the family with an integrated MMU. You could probably flange together a minimum-component-count 68030 system that had a serial port, some RAM and a boot PROM, and run a Unix-ey thing on it. But in the end it'd hardly be much different from a Raspberry Pi A or some such, as regards simplicity. (Ignore the video port and just use serial.) Is an RPi historical enough yet? – jimc May 30 '23 at 17:22