85

So, today all major operating systems support multitasking, and programming languages offer concurrency features such as threading.

The Amiga seems to be the first home computer with advanced concepts in this area. But did any 8-bit home computer already have rudimentary capabilities before the Amiga? Whether they were in ROM or provided by software (an additional OS or an available programming language) does not matter.

Regarding the Commodore 64, for instance, there were hardware interrupts (e.g. for I/O and timers). But I'm thinking more about higher-level approaches here.

user428517
  • 103
  • 2
Marco
  • 1,387
  • 1
  • 10
  • 11
  • 17
    Note that modern operating systems enforce process isolation from one another using hardware which 8-bit systems didn't have. This meant that everything had to be much more well behaved in order not to bring anything down. With CP/M it was not unusual to reboot the machine if a program went into an infinite loop. – Thorbjørn Ravn Andersen Aug 22 '17 at 20:41
  • 7
    @ThorbjørnRavnAndersen In the modern networked age, systems which could not enforce memory isolation between tasks would be unworkable. But the late 70s, 80s, and early 90s was a different age. Cooperative multitasking + multi-user systems were quite workable and effective. The "hackers" were part of the team, and the fun was seeing what the team could accomplish together. Intentionally destroying the work of others was just rude and childish. Yes, an errant program could cause problems for everyone, but that occurred a lot less frequently than you might imagine. – RichF Aug 23 '17 at 01:56
  • @RichF: it is worth noting that in modern systems with well-behaved programs, technically all that protection hardware doesn't do anything useful; the programs are cooperating. (OK, people use page traps to manage VM, but otherwise...) – Ira Baxter Aug 23 '17 at 04:09
  • 1
    If you go back before 8-bit architectures, you could include the DEC PDP-11, based on 16-bit words. Unix was running on this early on, and AFAIK it had multitasking although by another name. – Walter Mitty Aug 23 '17 at 07:45
  • @IraBaxter also memory mapped files, over-subscription (ZFOD), and few other similar features... – Maja Piechotka Aug 23 '17 at 08:25
  • 1
    @WalterMitty: Pretty much every 16 bit minicomputer vendor had a full RTOS with decent multithreading. Most of those machines were used to do industrial control and/or data acquisition. Being non-responsive to many realtime activities effectively meant "no sales" for a vendor. It wasn't like people didn't know how to do this; all those ideas were worked out in the late 50s. – Ira Baxter Aug 23 '17 at 08:32
  • 1
    For an early Macintosh perspective, https://www.folklore.org/StoryView.py?story=Switcher.txt is a really good read (as is everything on that site). – KlaymenDK Aug 23 '17 at 09:12
  • Yeah, my favorite machine was a PDP-10. Strange architecture, by today's standards: 36-bit word, programmable byte sizes. But TOPS-10, TENEX, and ITS ran on it. In the very early days, Bill Gates and Paul Allen got a job finding ways to crash a PDP-10. There was a PDP-10 knock-off at Xerox PARC when ethernet was invented, and Smalltalk got its start. – Walter Mitty Aug 23 '17 at 11:23
  • 1
    Also, the PDP-11 ran RSTS and RSX. Dave Cutler was one of the builders of RSX. He went on to be one of the brains behind VAX/VMS and, later, the lead architect of Windows NT. – Walter Mitty Aug 23 '17 at 11:31
  • 1
    @ThorbjørnRavnAndersen: You are talking about systems like Unix, Linux, macOS, Windows NT, etc., I presume. Even more modern systems actually often use other mechanisms. E.g. Singularity uses static typing, language restrictions, and theorem proving. The program has to prove to the OS that it will not violate access restrictions, and the OS checks that proof. If the proof-checking fails, the program is rejected, if it succeeds, we don't need hardware protection, since we have just proven that the program cannot violate access restrictions anyway. Lisp and Smalltalk OS's are memory-safe by … – Jörg W Mittag Aug 23 '17 at 12:28
  • 1
    … virtue of the fact that those languages have no way of accessing memory anyway, therefore, there is no need for memory protection. – Jörg W Mittag Aug 23 '17 at 12:29
  • 10
    @IraBaxter: If you think that today's programs are any more well-behaved than "back then", you are sadly mistaken. To the contrary; just like people "back then" blithely assumed that "all the world is a VAX", today they assume that unreleased resources will be cleaned up at process termination. In a non-protected system (like the AmigaOS "back then"), every unreleased memory hunk and every unclosed file will remain that way until system reboot, because the system does not have any means to "clean up" after you. I'd daresay programs "back then" were better behaved than today. – DevSolar Aug 23 '17 at 12:56
  • 3
    Do you consider "modern" 8-bit like the AVR? There are RTOSes for it. – chrylis -cautiouslyoptimistic- Aug 23 '17 at 13:06
  • @DevSolar: Most of the assumptions implicit in "all the world's a VAX" could have held just fine if compiler writers recognized that (1) any modern machine should be able to behave in a fashion generally similar to a VAX, and (2) it's much easier and more practical to write software that would work on any computer that attempts to behave like a VAX than one that would work on all conforming implementations, including the most obscure and bizarre ones. – supercat Aug 23 '17 at 22:50
  • @JörgWMittag Lots of theoretical operating systems exists. The question is about major operating systems, and here we are essentially in the C world where the abstractions grew from the CPU and up instead from the math and down. – Thorbjørn Ravn Andersen Aug 24 '17 at 12:04
  • @ThorbjørnRavnAndersen: Ironically, some of the dominant compiler vendors want to favor some of the Standard's stupidly-restricted abstractions over the hardware from which it sprang, especially when it comes to "Undefined Behavior". – supercat Aug 25 '17 at 22:38
  • 2
    @supercat I have absolutely no idea what you are trying to tell me. – Thorbjørn Ravn Andersen Aug 26 '17 at 05:51
  • @chrylis I wouldn't consider an AVR an "8-bit home computer before the Amiga". In many aspects not. – tofro Aug 26 '17 at 09:46
  • 2
    @ThorbjørnRavnAndersen: With older C compilers, the behavior of things like integer overflow, inter-object address calculations, and type punning were determined by the underlying hardware. Certain kinds of code, for example, would work on linear-address machines but not segmented-architecture machines, and the advantages of being able to use such code led to linear-addressing winning out over segmented addressing (which could otherwise have some significant advantages if designed properly). Modern compilers, however, will be unreliable if code tries to do things beyond what the Standard... – supercat Aug 26 '17 at 18:44
  • ...requires them to support, even in cases where the behaviors in question would map quite naturally to the underlying hardware. C compiler behavior may have once flowed from the underlying hardware, but today it's more fashionable for compilers to optimize based upon the minimal requirements set by the Standard (which I don't think was ever intended to fully specify a complete implementation that was suitable for any particular purpose). – supercat Aug 26 '17 at 18:46
  • 4
    @RichF The 'late 70s, 80s, and early 90s was a different age' only in the microprocessor world. I was using microprocessors with memory protection in 1979, and I was using minis and mainframes with full process isolation from 1971. The micro world took several steps back, and took until at least 1990 to catch up with where everybody else had been since about 1957. – user207421 Aug 27 '17 at 08:26
  • 1
    @supercat naturally. C was designed to abstract away the CPU so naturally it was so. It was only later when C spread to other platforms that C became ANSI C. Evolution. Just like windows 10 has DOS stuff in it even if it did not descend from MS-DOS - it is not to blame. – Thorbjørn Ravn Andersen Aug 27 '17 at 09:48
  • 1
    @ThorbjørnRavnAndersen: C was designed to abstract away most aspects of the CPU, but not memory. If an implementation specified how "float" and "unsigned long" were stored, the behavior of writing an "unsigned long" and reading a "float" would be defined in terms of those storage formats. Today, even if types like "int" and "long" have the same representation, compilers are allowed to malfunction in arbitrary fashion if code writes a "long" and reads an "int", and modern implementations exploit that. – supercat Aug 27 '17 at 17:12
  • Concurrency is pretty easy/necessary in most microcontrollers as you have interrupts to help you out. Send or receive data byte-by-byte (load/read from the buffers), manage your ADCs, scan through GPIOs, you often need to do them all at the "same" time. – Nick T Aug 28 '17 at 13:33
  • For starters, the Amiga wasn't an 8 bit system. It ran on the Motorola 68000, which was a 32 bit processor internally. It had been used in Unix workstations for several years before the Amiga was introduced. The Commodore 64 used a variant of the 6502. I began learning assembly language on Atari, which used 6502s. I can still almost write 6502 machine code off the top of my head. Compared to other processors of the day, the 6502 was very out of date. It would have been very poor at multitasking. In retrospect, I'm fond of the Z80 even though I didn't own one back then. The Z80 is an enhanced I – Velek Jan 19 '20 at 03:32
  • @RichF Yes, but are we still talking 8-bit home computers then? If you need display memory as opposed to serial terminals the kilobytes eat up pretty fast. – Thorbjørn Ravn Andersen May 21 '22 at 17:44
  • You can implement multitasking on something as primitive as a MIX. This was in fact an exercise in a university course I took. While MIX is not an 8-bit micro (there is some cause for calling it a 6-bit machine), this nevertheless shows you don't need a whole lot of hardware support. – dave Oct 06 '23 at 14:36

19 Answers

91

In fact, quite a lot did.

Ignoring the 'home computer' restriction, but going with microprocessors (*1), then there is of course MP/M - the multi-user and multi-program environment for CP/M. MP/M was published in 1979 by Digital Research for 8080/85/Z80 machines. Terminals, users and programs were handled separately, thus one user could have several programs run in parallel and switch between them (called "detach" and "attach") on a single terminal, or change terminal and attach from there. Also, several users could (in sequence) share one terminal. Programs could run attached (in foreground on a terminal) or detached (in background). In addition, there was a scheduler process starting (and stopping) programs at specific times and conditions (like cron). Last but not least, there was a separate spool process, so programs were not blocked while printing.

MP/M also included functions for inter-process communication (queues) and process synchronization. Soon a network level (CP/NET and CP/NOS) was added to connect multiple MP/M machines to a kind of cluster (very rough term, but it was more than a simple client/server structure and I don't know how to explain its workings in less than a few pages :))

All features could be controlled using the MP/M extension of the CP/M API.

In theory, MP/M could have been used on every computer capable of running CP/M, especially where CP/M 3.0 memory management was available (which was itself a backport from MP/M II to CP/M), but other than some Tandy Model II and 4 machines, I don't remember any home computer with explicit MP/M support.

Another very common multiuser/multiprocess system was OS/9, created for Motorola's 6809 CPU. The 6809 offered great support for position-independent code and data, as well as OS support and module linking in hardware. Thus it was easy not only to load several programs at once, but also to support re-entrant code, resulting in great reusability for libraries (shared code).

Another multiprocess OS for the 6809 was UniFlex. Flex was originally written as a single-user, single-program OS for the SWTPC 6800 machine. Later iterations included a port to the 6809 and the integration of Unix-like functionality, then called UniFlex.

For home computers there have been dozens of variations of multitasking/multiprocessing environments. From 1984's M.O.S for Schneider/Amstrad computers, distributed by StarDivision, to 1986's GEOS for the C64, a seemingly endless plethora of OSes and OS-like environments has been created. I might need a book to list and qualify them all.

The most remarkable piece might have been the Sinclair QL from 1984. With a 68008 CPU it might be seen as a borderline case, but I'd still consider it an 8-bit machine. The QL included a pre-emptive multi-tasking OS in ROM called QDOS. The built-in SuperBASIC offered the full QDOS interface for process creation and control to any BASIC program, thus allowing concurrent processes and threads. It is said that Linus Torvalds took much inspiration for Linux from QDOS, as he owned a QL before switching to a PC.

(Oh, just to brag: ca. 1979/80 I wrote a small multi-process kernel for the Apple II, able to run up to 8 tasks, but I guess that's way below the threshold the OP asked for :))


*1 - There have of course been 8-bit minicomputers which not only had multitasking operating systems, but were also designed with hardware support for them. The Dietz 600 series might give the clearest example: a TTL-based CPU with hardware support for task switching. One might compare it to a 6502 with an interrupt-handling system that would, in case of an interrupt, switch ZP, stack and a memory base address to an interrupt task. When the interrupt finishes (RTI), it doesn't return to the interrupted task, but to the task with the highest priority able to execute. Very handy.

Raffzahn
  • 222,541
  • 22
  • 631
  • 918
  • 1
    I don't believe GEOS for the C64 was multitasking. At best you could freeze one application and start another. – snips-n-snails Aug 22 '17 at 20:19
  • Is the 6809 counted as 8-bit? IIRC nearly all of its registers were 16 bits (DP and CC being the only exceptions that spring to mind). – cHao Aug 22 '17 at 20:34
  • @traal - according to the github page of the open source release it supports cooperative multitasking. Whether that was in the original version or added more recently, I don't know. – Jules Aug 22 '17 at 20:57
  • OS/9 was pretty clever and handled multithreading pretty well. Didn't appear until the 6809 came out in the late 70s. 8085 and 6800 stuff clearly preceded it. – Ira Baxter Aug 22 '17 at 21:01
  • @cHao - the register model is pretty similar to the Z80's (i.e. it has 16-bit registers designed to hold pointers along with 8-bit general purpose registers that can be combined for 16-bit operations), and that's universally considered 8-bit AFAIK, so I'd guess so. – Jules Aug 22 '17 at 21:03
  • 1
    @IraBaxter Multithreading is probably not the right term for OS/9's multitasking concept - That term came up much later, when Unix machines could sub-divide classical processes into more than one thread. – tofro Aug 22 '17 at 21:43
  • 1
    I'm fine with anything you mention including and after the Dragon, but anything before that, I would hardly classify as Home Computer. An MP/M machine was not something you had at home and let your kids play with. – tofro Aug 22 '17 at 21:48
  • 1
    I actually got into things with MPM-86, but while 16-bit CPUs made that a lot better, it was clearly an evolution from 8-bit MPM. And while you could argue that a computer running MPM-86 might not fit the definition of home computer, until IBM came along with the PC, many home computers certainly ran CPM. As IBM found out with the PCjr, there is no difference between a home computer and a small business computer (i.e., home users don't want junk, and small business users don't want to spend a fortune). So in my mind MPM definitely fits the definition of "home computer". – manassehkatz-Moving 2 Codidact Aug 22 '17 at 22:26
  • @tofro MP/M machines weren't that special at all. Almost all CP/M-capable machines could also run MP/M. All it needed was a timer (CTC) as well as an FDC and serial ports that could operate using interrupts. For standard Z80 machines, patching the XIOS took about the same effort as for the CP/M BIOS. The most important difference to using CP/M was DR's price tag. Then again, not every private copy was licensed back then. And MP/M was really useful even in a single-user environment, much like having multiple windows today. So yeah, I guess there were quite a lot of kids with MP/M machines :)) – Raffzahn Aug 22 '17 at 23:15
  • @traal That's why I said 'variations of multitasking/multiprocessing' - when we look at all these additions, there is a huge spectrum of abilities. And we could discuss for hours and days what constitutes 'real' multitasking. For example, GEOS could have (like early GEM) some special programs loaded and working in parallel. I thought to handle this question rather inclusively than shunning 'lesser' implementations. – Raffzahn Aug 22 '17 at 23:19
  • 2
    @Raffzahn Fair enough. True multitasking, and being able to freeze one program to run another, are very similar use cases. As is a systemwide clipboard or equivalent. – snips-n-snails Aug 23 '17 at 05:56
  • 1
    Checked some prices in old computer magazines: While in 1983, CP/M 2.2 could be had for ca. $200, MP/M was $900 (InfoWorld 1/83, available at Google Books) - not exactly home computer price levels. – tofro Aug 23 '17 at 12:10
  • 1
    @tofro Back then, the price of software didn't really matter, at least in the USA. All that mattered was whether you could find someone to copy from. – snips-n-snails Aug 23 '17 at 13:53
  • 1
    @traal Are you trying to tell me that in the 80s there was no distinction between a Home Computer Software and Professional Software price tag because you'd steal it anyhow? My memory is different. – tofro Aug 23 '17 at 14:00
  • @tofro Yes, that's why publishers came up with all kinds of ways to make it harder to pirate software, such as code wheels. And defeating such schemes led to the rise of demoscenes. – snips-n-snails Aug 23 '17 at 17:51
  • 5
    "With a 68008 CPU it might be seen somewhere at the edge, but I'd still consider it an 8-bit machine." Wrong. It cannot in any way be considered 8-bit, just as it cannot be considered 4-bit or cow-bit or hat-bit. It's 16-bit. There is no "consider" to it. – Rich Aug 25 '17 at 17:38
  • 8
    The 68000 has a 32-bit architecture; the fact that some 32-bit operations take longer than 16-bit operations doesn't change that. Nearly all ALU operations that can be performed upon 16-bit quantities using a single instruction can also be performed upon 32-bit ones using a single instruction. By contrast, something like the Z80 includes some 16-bit registers and some instructions that can perform 16-bit arithmetic upon them, but most operations can only be performed using an 8-bit accumulator. – supercat Aug 25 '17 at 22:47
  • 6
    @rich The 68008 had an 8-bit bus. – Thorbjørn Ravn Andersen Aug 25 '17 at 22:55
  • 12
    @ThorbjørnRavnAndersen - so did the Intel 8088, but I've never heard anyone call the IBM PC an 8-bit machine. – Jules Aug 28 '17 at 07:45
  • 2
    @Jules Even though the PC XT was an 8-bit machine with a 16-bit processor (which was heavily marketed at the time) it was just an expensive machine in a vast sea of other computers. It was not until the PC AT which was much better that the clone makers got good enough that the trend was established. So, the reason you haven't heard it is because you probably only heard marketing. – Thorbjørn Ravn Andersen Aug 29 '17 at 14:18
  • @ThorbjørnRavnAndersen - Big deal. The 68008 has 32-bit registers. – Rich Sep 12 '17 at 22:27
  • @Rich So, then the 6800 was a 16-bit CPU, because of the registers (and the 8080 as well)? – Raffzahn Sep 12 '17 at 22:32
  • @JohannKlasek ??? You might want to reply to the OP here. – Raffzahn Oct 01 '17 at 16:40
  • @JohannKlasek Somehow that comment only makes some sense when read as literal translation from German, doesn't it? (Hint: read the whole thread first) – Raffzahn Oct 01 '17 at 22:24
  • 1
    @Raffzahn: The 6800 and CDP1802 had 16-bit registers for address generation, but most operations needed to be done by moving values into an 8-bit accumulator and operating upon them there. By comparison, the 68000 has 32-bit versions of almost every instruction. – supercat Dec 17 '17 at 23:57
  • @supercat And your point is? Just asking since you're addressing me here. – Raffzahn Dec 18 '17 at 00:04
  • 1
    @Raffzahn: Sorry--I either missed the /sarcasm tag, or should have addressed Rich. – supercat Dec 18 '17 at 15:38
  • 6
    wikipedia has a "hands off approach" in this discussion: "The Motorola 68008 is an 8/16/32-bit microprocessor" :) I myself wouldn't consider the 68008 an 8 bit CPU, since it's a 68000 (which is 32bit) with just smaller "doors" to the outer world – Tommylee2k Oct 03 '18 at 08:52
  • @Raffzahn: GEOS couldn't run programs "in parallel", not even special ones. What it could do was load small helpers (think calculator or stuff like that) in a memory area usually used as a cache. The main application would be halted until the little helper was terminated. – Venner May 15 '23 at 22:13
80

Back in the mid-70s I wrote an 8-bit OS ("SDOS") for the Motorola 6800/6801/6809, and stopped shipping it in 1982. Those OSes came in several flavors:

  • SDOS/RT: Real Time multithreaded (2Kb ROM + whatever small bit of RAM you needed). It was always included as a part of the others in this list
  • (plain) SDOS: Single user Disk Operating System (64K memory max) (see SDOS in action in 2023 here: https://www.youtube.com/watch?v=8laYFP3ZbcQ)
  • SDOS/MT: Multiuser Timesharing (15 users in 1Mb of RAM using 65Kb banks of memory, one per user)
  • SDNET: Distributed OS (Single or Multiuser system with access to remote disks)

One of the earliest 6800 systems (on which SDOS was developed) was the WaveMate Jupiter II (https://www.computerhistory.org/collections/catalog/102645038). The other variants were developed on that initial SDOS base.

One of the distributed OS versions managed 256x512 bit mapped graphics on the most trivial hardware you can imagine. For the hell of it, I wrote a Chess Program in the compiled-BASIC I implemented as our application programming language.

I wrote a variety of RTOSes for other 8/16 bit computers in the 70s (including Z80 and 68000), but there were definitely others before me.

While I was at TRW Advanced Product Labs in 1974, John Liberty implemented a dual-processor 6800 (each CPU used one half of the 1MHz symmetric bus clock to do its memory access); one CPU did general-purpose work, the other often did real-time single-bit I/O streams to things like read/write magnetic tape heads. A fellow named Dick Moran wrote the multitasking, real multiprocessor "BKOS" (Basic Kernal OS) that handled both CPUs using atomic locks and the whole bit. These CPU boards went on to become the minimalist hardware core of May Company POS terminals. My job was to use this OS to write the real-time mag tape drivers and the print head drivers for a 7-pin vertical printer that swept across the paper roll to produce printed text for sales receipts - even with holly borders at Christmas :)

IIRC, the Intel 8080 came out before the 6800. It had a truly horrible scheme requiring hardware assist to take an interrupt, and most of the CPU boards didn't bother with this. Most couldn't take an interrupt to save their lives, so it didn't make much sense to build an RTOS for those... but... some did. I'm sure some soul wrote a multitasking OS for the 8080 for conventional embedded use. I remember talking to a Brunswick ("we don't just do bowling") cruise missile engineer who had done this onboard the missile. In the late 70s, I did what I thought was a pretty nice multitasking OS for the Z80 using the SDOS/RT design, but by that point I was probably just one of a crowd of guys who had done that.

A good part of the reason I chose to build the SDOS systems on the Motorola family chips is because they had built-in interrupt support that the 8080 lacked.

One of the slick things I did for multitasking on these small machines was effectively expand the register set. These 8-bit machines had only a few registers; talk about register pressure! What I did was define a fixed set of addresses (on the 6800, 8 bytes down in page zero where they were easy to access) called the "context" and made them part of the CPU state that the OS thread scheduler switched. This means your thread could use the registers, and the context, safely, giving it a little scratchpad it could work in. That made the code actually smaller than trying to save things on the stack, way safer than having global variables all over the place, and remarkably it made the programs faster. [On Windows and Linux, this is called "thread-local storage", but it's stored in a place you have to access with long complicated index operations, blech.]
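
To make the idea concrete, here is a minimal sketch in C of the scheme (hypothetical modern code, not the original 6800 assembly; all names are made up): the fixed "context" area is simply treated as extra registers that the scheduler saves and restores on every task switch.

    #include <stdint.h>
    #include <string.h>

    #define CONTEXT_SIZE 8   /* the 8 bytes down in page zero */
    #define MAX_TASKS    4

    /* The single fixed "context" area every task addresses directly;
       on the 6800 these were well-known page-zero locations. */
    static uint8_t context[CONTEXT_SIZE];

    /* Per-task state: a saved copy of the context, next to the saved
       CPU registers and stack pointer (elided here). */
    struct tcb {
        uint8_t saved_context[CONTEXT_SIZE];
        /* ... saved registers, stack pointer, ... */
    };

    static struct tcb tasks[MAX_TASKS];
    static int current_task = 0;

    /* The scheduler treats the context as part of the CPU state:
       copy it out for the outgoing task, copy it in for the
       incoming one. */
    static void switch_context(int next_task)
    {
        memcpy(tasks[current_task].saved_context, context, CONTEXT_SIZE);
        memcpy(context, tasks[next_task].saved_context, CONTEXT_SIZE);
        current_task = next_task;
        /* ... then restore registers/SP and resume the task ... */
    }

    int main(void)
    {
        context[0] = 42;       /* task 0 scribbles in its scratchpad  */
        switch_context(1);     /* task 1 sees its own (fresh) context */
        switch_context(0);     /* ... and task 0's byte is preserved  */
        return context[0] == 42 ? 0 : 1;
    }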

Programming 8 bit machines from the bare metal up was fun.

Ira Baxter
  • 1,008
  • 6
  • 12
  • 1
    "I wrote a variety of RTOS for 8 bit computers in the 70s" <-- you are referring to SDOS, right? – snips-n-snails Aug 22 '17 at 20:14
  • 3
    My company was "Software Dynamics". We used "SDOS" for all the OS variants. Frankly they were all the same architecture :-} – Ira Baxter Aug 22 '17 at 20:56
  • 1
    I really don't think any of the machines and OSs you're referring to would classify as a "Home Computer". I don't even think that term existed as early as the 70s. – tofro Aug 22 '17 at 21:45
  • 16
    I don't know what you call a computer you have at home, except a "home computer". I hand built a custom 12 bit computer in 1973 (and ran complex wire-routing problems with over 10K wires on it). I had my SDOS boxes at home in 1975. DEC's president famously asked, "Who'd need a home computer" in that same time frame, thinking the answer was nobody. DEC is dead. – Ira Baxter Aug 22 '17 at 22:29
  • 4
    @traal: Yes, that SDOS. That was me and my ragged band of 3 helpers. – Ira Baxter Aug 23 '17 at 04:06
  • 2
    I suppose that if you make "whatever RAM you wanted" more specific, the claim becomes a lot less impressive, at least for today's audience :) – Hagen von Eitzen Aug 23 '17 at 14:12
  • 3
    @HagenvonEitzen: Well, for the baseline RTOS, "whatever RAM you wanted" tended to be 1KB to 16KBytes. Frankly, that makes these devices more impressive; it's amazing what you can do with a small amount of resources if you have an organized attack. – Ira Baxter Aug 23 '17 at 15:36
  • 1
    But "Whatever you wanted" really means "whatever you wanted, as long as the total address space was ≥ 64k", right? – Duncan C Aug 23 '17 at 16:28
  • @Duncan: Uh, these are small machines. Typically they had at best a 16-bit address, which means "less than 64Kb" best case. The time-sharing system used bank switching to swap in chunks of 56KB, preserving the lower 8Kb for the OS to live in. – Ira Baxter Aug 23 '17 at 16:33
  • 1
    Understood. That's why I said that. – Duncan C Aug 23 '17 at 17:02
  • 1
    Oh, oops! I just noticed that my comment said "≥", when I actually meant "≤". – Duncan C Aug 23 '17 at 17:03
32

Since it ran on the Tandy Color Computer and similar Dragon computers in the UK, I guess it's fair to throw OS-9 into the mix. OS-9 was originally written for 6809 CPUs (hence the name). The linked Wikipedia article begins by calling it a "family of real-time, process-based, multitasking, multi-user operating systems".

I remember before it was available for home systems, a commercial development license cost many thousands of dollars per year. It certainly wasn't cheap, and I remember the up-front costs made it a non-starter environment at the company where I worked in the early to mid-80s. I'm pretty sure I would have loved it, though. Everything I heard was pretty sweet -- except the cost.

It was later ported to other Motorola chips in the 68000 family.


Trivia - feel free to skip.

In 1999, OS-9's owner sued Apple for trademark infringement over Apple's own operating system, "OS 9" (no dash). The OS-9 trademark had existed for 19 years before Apple started using it. The case was decided in Apple's favor, with the judge saying no one would get them confused. Huh? Try doing a simple internet search for os-9. All you see are Apple-related links unless you specifically exclude Apple terms such as -Macintosh.

The OS-9 version 2.4 manual in 1991 included a glossary containing an entry for Unix:

An operating system similar to OS-9, but with less functionality and special features designed to soak up excess memory, disk space and CPU time on large, expensive computers.

RichF
  • 9,006
  • 4
  • 29
  • 55
  • FWIW, SDOS ran on the Color Computer, too. And in spite of the halt-and-wait hardware design for the floppy disk, SDOS ran the disk and the keyboard with full dynamic interrupts, meaning you could actually type ahead while the disk was running and nothing got lost. I don't think OS9, although capable of this on good hardware, accomplished this on the CC; when the disk was spinning, you couldn't type and vice versa. – Ira Baxter Aug 27 '17 at 19:51
  • @IraBaxter Thank you for the info, both in your answer and this comment. I never actually used either OS-9 or SDOS, but I had a lot of hands-on experience with the CoCo and its Extended Color BASIC. I was attracted to the 6809 CPU because of its elegance. – RichF Aug 27 '17 at 20:06
  • 2
    The test for whether one trademark can be confused with another is not based on internet search results. – nekomatic Aug 23 '18 at 12:58
  • I think OS-9 is the best example of a contemporary multitasking OS for this class of machine. However, the 6809 was technically a 16-bit CPU, if you go by the width of natively supported arithmetic operations, even though the address space was just as limited as a typical 8-bit machine. – Chromatix Jan 19 '20 at 05:06
14

The Sinclair QL was most probably the first truly multi-tasking home computer. Its QDOS is a fully-featured preemptive multi-tasking OS. Whether it matches your definition of an 8-bit computer with its 68008 is, however, debatable (and was debated a lot when it entered the market).

tofro
  • 34,832
  • 4
  • 89
  • 170
  • 7
    The 68008 is a 16-bit CPU with a 32-bit command set, running on an 8-bit bus. Duh! – Zac67 Aug 22 '17 at 20:52
  • @Zac67 don't forget the various bit-size operations! – RichF Aug 22 '17 at 21:11
  • 4
    @Zac67 - right... if the 68008 is 8-bit, then so's the 8088 and therefore the IBM PC counts, so TopView and/or PC DOS 4.0 are candidates. – Jules Aug 22 '17 at 21:13
  • 1
    @Jules I intentionally left that open - I tend to use the ALU size as category which is 16 bit for the 68008. – Zac67 Aug 22 '17 at 21:18
  • 3
    Careful about using "ALU size". The Thinking Machines CM-1 parallel supercomputer had one bit wide CPUs, but it had 65536 of them. People programmed 64 bit floating point as a long string of one-bit-CPU operations. A 64x serial slowdown still meant 1000x faster overall. – Ira Baxter Aug 23 '17 at 08:05
  • 1
    My recollection is that Intel's marketing literature referred to the 8088 as "8 bit" - they seemed to be selling it as a more powerful alternative to the Z80 and other existing 8 bits. The " 8 bit" sort of raised my eyebrows at the time, but in any case the technique of selling you the manual cheaply and including a free chip to play with apparently got them the market penetration they were after. – MickeyfAgain_BeforeExitOfSO Aug 23 '17 at 11:38
  • 1
    @Zac67 - I believe at the time we called those Motorola 68K's "16/32 processors." Pretty clearly doesn't qualify. – T.E.D. Aug 24 '17 at 18:02
  • 1
    What makes you think this is debatable? The 68k8 is a 16-bit CPU, end of story. – Rich Aug 25 '17 at 17:36
  • 2
    @Rich because when the QL entered the market, Sinclair claimed it to be a 32-bit-computer, Motorola said it was a 16-bit one and some magazine claimed it was an 8-bit computer due to its data bus size. There is no fixed definition of what it really is. In case you have a citation, don't hesitate to bring it across. – tofro Aug 25 '17 at 21:31
  • 1
    The QL is one of the most influential computers ever released: it was his QL which got Linus Torvalds interested in preemptive OS design. It had an 8-bit bus (for lower-cost implementations), but this was its only 8-bit spec. – Tim Richardson Aug 26 '17 at 23:39
  • @tofro - The 68008 data bus is 16-bit internally, and only 8-bit externally. It has some 32-bit registers, but most are 16-bit (due to the internal data bus). Yeah, sure. Uncle Clive wanted to sell more so he frothed about the isolated 32-bit characteristics, which I distinctly remember him later retracting in a couple of interviews. And most 80s magazine writers were also churning out articles for horse riding, Airfix kit building, home brewery, and collectible miniatures magazines. You believe them above the manufacturer, who you yourself cite? – Rich Sep 12 '17 at 22:39
  • 3
    @Rich "most registers are 16-bit" is wrong. All 16 address and data registers in the 68008 are 32-bit. There's only one 16-bit register, the CCR. And note I don't "believe" anything over anything else. I have just stated it was debated a lot. Please read my statements with a bit more care. – tofro Sep 13 '17 at 08:07
  • 1
    Still, the discussion is somewhat valid (or useless): The Z80 has 10 16-bit registers ([BC, DE, HL] * 2, IX, IY, SP, PC) and 2 somewhat pure 8-bit registers (AF, AF'), an 8-bit data bus like the 68008 and still is commonly considered an 8-bit and not a 16-bit CPU. – tofro Sep 13 '17 at 08:12
  • @tofro Apologies about the registers. I was remembering wrong. If the Z80 had a data bus "like the 68008", it would also be treating data 16-bit internally, and only 8-bit externally. The 68008 is very clearly a 16-bit CPU built to cost, so they reduced the external data bus to 8-bit. – Rich Sep 14 '17 at 01:28
  • 2
    @rich What do you mean with "treat data 16-bit internally"? The Z80 can do that - It can add and subtract 16-bit values with only a 4-bit ALU, and has a 16-bit adder for the index registers. Should it then be called a 4-bit or a 16-bit CPU? The 68k8 works with 32-bit registers internally and has a 16-bit ALU. So if you want to call the 68k8 a 16-bit CPU, the Z80 would be a 4-bit CPU with the same argument. My point is just that once you look into the details, the whole 32/16/8 discussion becomes extremely blurry. – tofro Sep 14 '17 at 06:31
14

It wasn't multitasking in any way, but Locomotive BASIC on the Amstrad CPC series (Z80, 1984) had software interrupts for calling subroutines based on timers. There were four 50 Hz timers, 0–3, with timer 3 having the highest priority. Timers could be set one-shot (AFTER ‹time delay›[,‹timer number›] GOSUB ‹line number›), repeating (EVERY ‹interval›[,‹timer number›] GOSUB ‹line number›) or based on the sound queue status (ON SQ(‹channel›) GOSUB ‹line number›). Interrupts could be disabled (DI) and re-enabled (EI). Combined with Locomotive BASIC's screen viewport definitions (WINDOW … — somewhat different from how we'd define a window today), you could have the appearance of multiple programs running at the same time.

Of course, arbitration was pretty much up to the programmer, and it was too easy to create a program that would lock up beyond the reach of even a soft reset. But as a limited form of high-level interrupt-driven code it worked quite well.
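
To make the mechanism a bit more concrete, here is a rough model of it in C (purely illustrative; the names are invented and the real firmware was Z80 machine code): each timer is a counter decremented on every 50 Hz tick, and on expiry its handler (the GOSUB target) is called, re-arming itself if it was set with EVERY.

    #include <stdio.h>
    #include <stdint.h>

    /* One software timer: counts down at 50 Hz; on expiry its
       handler (the "GOSUB <line>" target) is invoked. */
    struct sw_timer {
        uint16_t count;          /* ticks remaining; 0 = disarmed */
        uint16_t reload;         /* EVERY interval; 0 means AFTER */
        void (*handler)(void);
    };

    static struct sw_timer timers[4];   /* timer 3: highest priority */
    static int interrupts_enabled = 1;  /* toggled by DI / EI        */

    /* Called from the 50 Hz interrupt. Scanning from timer 3 down
       gives the documented priority ordering. */
    static void tick_50hz(void)
    {
        if (!interrupts_enabled)
            return;                              /* DI pauses dispatch */
        for (int i = 3; i >= 0; i--) {
            if (timers[i].count != 0 && --timers[i].count == 0) {
                timers[i].count = timers[i].reload;  /* re-arm EVERY */
                timers[i].handler();                 /* the "GOSUB"  */
            }
        }
    }

    static void beep(void) { puts("beep"); }

    int main(void)
    {
        /* Equivalent of EVERY 50,0 GOSUB <beep>: once per second. */
        timers[0] = (struct sw_timer){ 50, 50, beep };
        for (int t = 0; t < 200; t++)
            tick_50hz();        /* beep() fires at ticks 50, 100, ... */
        return 0;
    }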

scruss
  • 21,585
  • 1
  • 45
  • 113
9

Intel's iRMX worked on the 8080 and above. We used it on GRiD Compasses in the early 1980s, though the GRiD could hardly be called a "home computer"!

iRMX (and the GRiDOS file system & GUI that GRiD built on top of it) were fully multi-tasking.

Grimxn
  • 191
  • 3
9

A great many Atari-era games ran in two threads.

The first thread attended to gameplay, listening to controller input, keeping score, arranging the playfield, cueing sounds.

The second thread was responsible for sprite-juggling to render the playfield in a more complex way than the hardware designers imagined. The purpose of the added complexity was to be more competitive with other games, or to emulate the better hardware in arcade machines. This thread effectively "followed the raster".

This was preemptive/cooperative. At a certain point in the sweep/scan, the display hardware fired a hardware interrupt which (preemptively) launched the raster thread. It quit voluntarily (cooperatively) when the raster reached the last point of its concern. If it failed to quit, the game would crash.

If the gameplay module was unable to complete an entire "cycle" of tasks in a single display sweep, that wasn't the end of the world. You could intentionally run the gameplay module on a 2-frame or 3-frame cycle if that made sense, or let it roll asynchronous. Animation or polling tasks that didn't like being asynchronous could also be off-loaded onto the end of the raster thread.

During debugging, you would set the screen border color to different colors for different tasks, e.g. blue for raster-chasing work, red for gameplay tasks handled by the raster interrupt, and green or rotating colors for the main thread. You could watch the colors dance up and down the screen while you playtested, and watch for conditions that overloaded the game.
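
The overall frame structure can be modeled in a few lines of C (an illustrative sketch only; real games did this in 6502 assembly with a hardware display interrupt, and all names here are invented):

    #define SCANLINES       262   /* one NTSC field                    */
    #define RASTER_IRQ_LINE  40   /* where the display interrupt fires */

    /* "Thread" 2: launched preemptively by the display interrupt,
       chases the raster for a while, then quits voluntarily.
       If it failed to quit, the game crashed. */
    static void raster_thread(void)
    {
        /* ... juggle sprites while following the beam ... */
    }

    /* "Thread" 1: gameplay, running in whatever time is left. */
    static void gameplay_slice(void)
    {
        /* ... read controllers, keep score, cue sounds ... */
    }

    int main(void)
    {
        for (int frame = 0; frame < 2; frame++)       /* two demo frames */
            for (int line = 0; line < SCANLINES; line++) {
                if (line == RASTER_IRQ_LINE)
                    raster_thread();  /* preemptive: hardware fires here */
                gameplay_slice();     /* may span 2-3 frames per "cycle" */
            }
        return 0;
    }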

  • Nice description of VCS development, but I'm not really sure this answer applies. The OP was asking for mechanisms readily supplied in software with a system, not whether it's possible at all, as in principle it can be done on any machine. – Raffzahn Aug 27 '17 at 21:23
  • I was referring to the Atari 400/800 PCs as well, also the Commodore VIC, C64, Colecovision, yadayada. Those PCs pretty much dominated the market at the time. It's really about the architecture of the machine moreso than the CPU proper, obviously CPUs of that age did not contain video cards. – Harper - Reinstate Monica Aug 28 '17 at 03:14
  • 1
    On the Atari 2600, there aren't any hardware interrupts, but some games still use a cooperative multitasking approach by having the "game logic" code poll the timer to see if it's almost expired and, if it has, wait for it to expire fully and then generate a vertical sync or a display frame as appropriate. Strat-O-Gems used that technique, for example, to handle the fact that identifying all the matches might take a variable amount of time that could exceed what was available in a single vertical blank. – supercat Dec 18 '17 at 00:02
6

For the sake of completeness, there is SymbOS, which advertises itself as a graphical Z80 multitasking operating system. It didn't exist at the time, but it works on a variety of 8-bit machines (MSX2 and better, Amstrad CPC, Enterprise 64/128, PCW Joyce).

guillem
  • 161
  • 2
  • SymbOS is a very nice piece of software. Z80 users should give it a try. But as has been said, it's contemporary, created after 2000, so not really an answer here. – Raffzahn Aug 23 '17 at 09:35
6

In the early 90s, I worked for a division of GEC Alsthom who had based their control electronics on the transputer. The transputer was obsolete by then, but they had not yet bitten the bullet to do a full redesign of their controls.

Transputers were explicitly designed to run as a massively parallel system. Because designs had to be parallel, they also allowed multithreading within the processor as well (and in fact for efficiency there was some comms buffering which had to be coded as parallel tasks, because that's how the processor worked).

Sadly, of course, the transputer suffered the same fate as most other UK technology companies, namely underinvestment. Historically, major UK investors have been extremely reluctant to fund UK technology companies because they are seen as higher-risk; but of course this is a self-fulfilling prophecy when the companies fail because they cannot get the resources they need to grow. In the case of the transputer, Intel and other US manufacturers put investment into mass manufacture of single-core x86 processors, which allowed single-threading processors to progress at a rate Inmos could not compete with.

The occam language was designed to handle multithreading and multicore processing. Because occam was tied to the transputer platform, the detailed implementation of semaphores and shared data could be delegated to the language instead of having to be set up explicitly by the coder. This made it trivial to implement multithreaded designs.

Of course the transputer was not an 8-bit processor. But your question seems more about "early" processors rather than being specifically tied to a processor word length, and "8-bit" seems to be more about the era than the processor.

Graham
  • 2,054
  • 8
  • 11
  • Seriously? When the Transputers came in, like '84/85, the 8-bit era was already ending – Raffzahn Aug 23 '17 at 11:33
  • Good one. The spirit of the Transputer lives on as there are people out there (trying to) emulate it on the Raspberry Pi. – Chenmunka Aug 23 '17 at 11:48
  • 2
    @Raffzahn The first transputers went on sale in 1984. That's right in the middle of the home computing boom, and it was all about 8-bit machines like the Spectrum and Commodore 64. The 8-bit era wasn't even close to ending at that point. – Graham Aug 23 '17 at 13:05
  • 2
    The 8-bit era began life support around 1984-1985. – snips-n-snails Aug 23 '17 at 13:57
  • @Graham Sinclair QL 1984, IBM AT 1984 (other 286 even before that), Atari ST and Amiga 1985, Amstrad PC1512 in 1986. – Raffzahn Aug 23 '17 at 14:31
  • @Raffzahn And plenty of others. But 8-bit micros were by far the biggest sellers until the late 80s. Commodore even released the C128 and 64C in 1985-86. – Graham Aug 23 '17 at 16:26
  • 2
    @Graham sure, but claiming the 8-bit title for a new 16-bit CPU in 1984 is a bit off track, isn't it? All the other 16-bit CPUs that became real machines during the mid-80s were developed in the 70s. Are they now also 8-bit era? Come on, don't bend it too much. The Transputers were great machines, and I had a lot of fun figuring out how to program something useful, but they are just neither 8-bit machines nor 8-bit era. – Raffzahn Aug 23 '17 at 16:52
  • 3
    I'm not claiming it's 8-bit - I even explicitly said that it isn't 8-bit. However it is very much the era in which 8-bit machines were dominant. Contrary to your assertion, 8-bit machines in 1984 were flourishing, utterly dominant in all markets except high-end number-crunching, and not even close to "life support". If you can think of a 16-bit machine in the same era which had novel multicore/multithreading capabilities, I'm sure the OP would be interested in that too. I personally can't think of one though. – Graham Aug 25 '17 at 09:08
  • @Graham The ARM1 first ran in 1985 - a 32-bit RISC CPU to replace Acorn's 8-bit platform of the time, the 6502. That's also the year the Amiga and the Atari ST arrived. The 6502 is still in use today, but it's fair to say that the 8-bit era was nearing the end in 1984 - the year in which the Macintosh 128K was released. – Chromatix Jan 19 '20 at 04:58
3

You are asking about, and the answers are covering, commercial/mass-market OS's, but it was very common for a programmer of an embedded 8-bit device to cook up either cooperative or interrupt-driven multi-tasking when the device needed it. Neither is very difficult in its basic form; cooperative is especially easy. Primitives like queues and semaphores aren't terribly difficult either, so we would often just cook those up when we needed them. However, being mostly comprised of soft tissue, these one-off OS's won't show up in the fossil record.
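
For a feel of just how small this can be, here is an illustrative cooperative kernel in C (a sketch, not any particular historical OS; all names invented): tasks are routines that do a slice of work and return, and returning is the yield; a semaphore is just a counter tested between yields.

    #include <stdio.h>

    typedef void (*task_fn)(void);

    /* A counting semaphore: just a counter tested and decremented
       between yields - no atomicity needed with one CPU and no
       preemption. */
    static int sem = 1;
    static int sem_try_wait(void) { return sem > 0 ? (sem--, 1) : 0; }
    static void sem_signal(void)  { sem++; }

    /* Two toy tasks; each does a slice of work and returns -
       returning is the yield. */
    static void blink(void)
    {
        if (sem_try_wait()) {       /* claim the shared resource */
            puts("blink");
            sem_signal();           /* release it again */
        }
    }

    static void poll_uart(void) { puts("poll uart"); }

    static task_fn tasks[] = { blink, poll_uart };

    int main(void)
    {
        /* The whole scheduler: run every task in turn, forever
           (bounded here so the example terminates). */
        for (int round = 0; round < 3; round++)
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                tasks[i]();
        return 0;
    }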

Wayne Conrad
  • 2,706
  • 17
  • 25
3

Tom Hunt created MTOS for Atari 8-bit computers in 1987. Details here: www.umich.edu/~archive/atari/8bit/Os/mtos.doc

atariguy
  • 131
  • 2
2

What about Cromix? Cromemco had a complete multitasking system running on a Z80.

Or TurboDOS? Back in the day I had an IMS TurboDOS S-100 system. More multiprocessing than multitasking, but it supported concurrency. It even had networking (arcnet).

Marc Wilson
  • 386
  • 3
  • 4
2

When we say "8 bit microcomputer", really we might be talking about 16 bit machines, if we go by address bus size: 8080, Z80, 6502, ...

If we give ourselves some leeway to go by that rather than data register size, we might include the DEC LSI-11 in the same category.

Douglas Comer's Xinu operating system first ran on the LSI-11.

Xinu supports concurrency and all that. From the old Xinu FAQ:

Xinu is a small, elegant, multitasking Operating System supporting the following features:

  • Concurrent Processing
  • Message Passing
  • Ports
  • Semaphores
  • Memory Management
  • Buffer Pools
  • Uniform Device I/O
  • Shell
  • Tcl
  • TCP/IP
Kaz
  • 1,660
  • 1
  • 7
  • 14
  • This, like some other answers, has gone beyond 8-bit computers. (I don't follow your first sentence at all...) I can help out somewhat in the "home computer" constraint of the question, though. The Heathkit H-11 was an LSI-11 kit or fully-built system sold from 1978 to 1982. // Another multi-user, multi-tasking system available on PDP-11 systems was Forth, from Forth, Inc. – RichF Aug 24 '17 at 00:45
  • @RichF Many of the so-called "8 bit" microcomputers had 16 bit address spaces. They were able to form 16 bit addresses as immediate operands, or with various addressing modes. – Kaz Aug 24 '17 at 02:58
  • 2
    Thanks for the explanation, but I don't agree that address bus size is a determining factor in what category the CPU is. I think it is more accepted to use the accumulator size in specifying the "bitness" of a CPU. Admittedly even this could get fuzzy -- e.g. consider that the HL register pair of the Z80 could act as an extended accumulator. But even then, when you compare the 1 M cycle for ADC A, B with the 4 M cycles of ADC HL, DE, it is pretty clear that 8-bit stuff is going on internally. – RichF Aug 24 '17 at 20:35
  • @RichF you made an unfair comparison. Adding an 8-bit register with A is a 1-byte command, but adding a "16-bit" register with HL takes two bytes. Better would be an immediate-data ADC, such as ADC A, 17. That takes 2 M cycles, compared with the 4 for ADC HL, DE. It is still indicative that 8-bit stuff happens internally, but does so more fairly. – RichF Aug 24 '17 at 20:40
  • @RichF The thing is, the address bus size is a lot more crippling of the machine's capabilities than data bus size. Back in the day, there was something on the 6502 Apple II's called the "Sweet16" virtual machine invented by Wozniak: you could call into it from machine code and then continue executing in a 16 bit instruction set. Basically, software overcame the 8 bit limitation. Overcoming an 8 bit address size wouldn't be that easy. – Kaz Aug 24 '17 at 21:00
  • https://en.wikipedia.org/wiki/SWEET16 – Kaz Aug 24 '17 at 21:00
  • We're just going to have to agree to disagree. Neither of us is going to give ground. I'll give your answer a +1 for polite persistence, though. – RichF Aug 24 '17 at 21:10
  • 3
    I call the 6502, 6800, Z80 and the like 8-bit, because when you did a memory fetch, you got 8 bits, and the CPU was engineered around that concept with many 8-bit registers. This differs from an 8086, a true 16-bit bus and engineered around that, despite the hack of the 8088 to double-fetch on an 8-bit bus. Address bus has nothing to do with it, because a 256 byte memory space would be nigh useless. – Harper - Reinstate Monica Aug 27 '17 at 21:02
  • @Harper This also fetches 8 bits: https://en.wikipedia.org/wiki/Motorola_68008 – Kaz Aug 27 '17 at 23:20
  • @Kaz: CPU address bus size need not be crippling, if well-designed address-translation logic sits between the CPU and physical memory. In some cases, such logic may allow better performance than would be achieved with straightforward linear addressing, especially in cases where it may be useful to have several "parallel" data structures occupy the same address range. For example, one banking design I did for the Atari 2600 has a 256-byte range which is programmable to access any part of a 32K RAM. Using a (zp),y addressing mode would require five cycles, but an access to $1E00,y takes four. – supercat Aug 28 '17 at 21:58
  • In the past the bitness of a processor referred to the natural size of an integer. This is really the integer size which is fastest to manipulate. For the 6502, Z80, 8080, 6800 this was an 8 bit integer. Note that the Z80 had a 4 bit ALU, but 4 bit arithmetic was inaccessible to the assembly programmer. For the 8086, 68000 the natural integer was a 16 bit integer. Note that this is in spite of the fact that the 68000 had 32 bit registers. It took twice as long to add the full 32 bits of two registers as to add the bottom 16 bits. – JeremyP Aug 29 '17 at 09:56
  • @JeremyP GCC makes int 32 bits wide for the 68K (overridable with -mshort on the command line). The instruction set defines a family; the exact same machine code running on a 68040 will not take twice as long to add a 32-bit integer as a 16-bit one. – Kaz Aug 29 '17 at 13:50
  • @Kaz gcc does not determine the bit width of a processor. And the 68040 is a 32 bit processor, which is why my comment did not mention the 68040. – JeremyP Aug 29 '17 at 14:21
  • Jim Mathis's MOS had similar features, but was not widely distributed outside of the DARPA TCP/IP research community. See https://retrocomputing.stackexchange.com/questions/23742/mos-mini-operating-system-for-pdp-11 for more information. – John R. Strohm Oct 10 '23 at 18:29
1

One I personally had some experience with, however little, was Morrow's Micronix, a Unix that ran in 1983 on their Z-80 CPU card for the S-100 bus system.

jimc
  • 107
  • 4
1

It's important to understand from a historical perspective that early multitasking and multi-user systems were designed around the fact that reasonably fast hard disk capacity was much cheaper than RAM. Early multi-user systems didn't keep multiple users' programs in memory at once; instead, they would run one person's program until it needed to perform I/O or it used up its time slice, swap the entire user memory space of the machine out to disk, load someone else's program, and start running that. As such, the ability to perform complex memory management tasks or support lots of memory wasn't really important. An 8-bit CPU coupled with a reasonably fast hard drive that could quickly swap 4K blocks of data between RAM and disk could be minimally adequate for multi-user applications with even 16K of RAM, though more would be better. In an era where RAM was expensive, such a design might have been practical for multi-user use. I don't know whether the cost of the CPU would have been relevant, though, given the cost of a hard drive.
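
A sketch of the whole swapping scheme in C, under the assumptions above (the 16K size, file names, and policy are all illustrative):

    #include <stdio.h>
    #include <stdint.h>

    #define USER_MEM 16384           /* one 16K user space */

    static uint8_t ram[USER_MEM];    /* the machine's only user area */

    /* Time slice over (or blocking I/O): write the current user's
       entire image to disk and load the next user's in its place. */
    static void swap(int out_user, int in_user)
    {
        char name[32];
        FILE *f;

        snprintf(name, sizeof name, "swap%d.img", out_user);
        if ((f = fopen(name, "wb")) != NULL) {     /* whole image out */
            fwrite(ram, 1, USER_MEM, f);
            fclose(f);
        }

        snprintf(name, sizeof name, "swap%d.img", in_user);
        if ((f = fopen(name, "rb")) != NULL) {     /* next image in */
            if (fread(ram, 1, USER_MEM, f) < USER_MEM) {
                /* short read: this user's first run, nothing saved yet */
            }
            fclose(f);
        }
    }

    int main(void)
    {
        swap(0, 1);   /* user 0's slice ends; user 1 runs next */
        return 0;
    }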

supercat
  • 35,993
  • 3
  • 63
  • 159
  • Do you know of any commercial operating systems that actually worked that way in the 8-bit days? My experience was primarily with MP/M family, and they did not work that way - everything was in RAM. This sounds vaguely familiar - I think I have heard of such things in relatively early mainframe days, but that would not have been 8-bit. – manassehkatz-Moving 2 Codidact Oct 10 '23 at 01:23
  • @manassehkatz-Moving2Codidact: I think some RAM expansion packages for the Apple offered a means of switching applications by swapping the contents of main and expansion RAM, with code always being at its normal address whenever it was running; but my point was that professional computers did multitasking without any kind of virtual memory hardware, so the existence of such hardware is not a prerequisite for multitasking. Additionally, while I don't know if anyone used it in such fashion, one could have done something similar using something like The Final Cartridge for the C64. – supercat Oct 10 '23 at 15:07
  • @manassehkatz-Moving2Codidact: If you're working on one program, and want to momentarily work on something else, one could hit the "freeze" button, save a memory snapshot to disk, load the other program, and then reload the memory snapshot and resume what one was doing. I never actually owned a TFC, so I don't know if it accelerated write performance as well as read performance, but it could certainly have been designed to do so. – supercat Oct 10 '23 at 15:08
1

A niche example:

The 8-bit BBC Micro attracted a huge range of peripherals and add-ons, one of which was the Music 500 synthesiser unit.  This provided 16 digital oscillators, usually grouped into 8 voices, with features such as digital waveform generation, FM, ring mod, and oscillator sync.

The supplied software included a dedicated programming language called AMPLE (‘Advanced Music Production Language and Environment’), similar to FORTH but with many music- and synthesis-related keywords and features.  (It had a few similarities to LilyPond, though of course major differences too.)

To control all the voices, it used co-operative multitasking to provide up to 9 different ‘players’ (i.e. threads): one to handle the keyboard, screen, etc. and start the others off, and up to 8 more to make music.

This made writing complex pieces much easier: one player could set up and sequence an entire bass part, another the chords, another a lead line, and so on, independently.  (Of course, each player's code would have to use the right note durations so as not to get ahead or behind the other players — though if you used bar lines, the language would check that for you.)

The multitasking was mostly transparent.  Words that played notes, or waited for user input, would automatically pass control to the other players, so it felt like up to 9 CPUs running simultaneously. (There was also an IDLE keyword for use in e.g. tight loops so as not to ‘starve’ the other players, though that was rarely needed.)

Back in 1984, on a home micro, this all seemed like magic!

gidds
  • 479
  • 2
  • 7
1

Both Minix (1988) and the ELKS fork of Linux (though that wasn't around until 1999) ran on the 8088 microprocessor, which had an 8-bit external data bus and so is, at least by some definitions, an 8-bit processor; many 8-bit processors such as the Z80 had, one way or another, 16-bit registers, at least by combining registers for addressing purposes, so I don't think this is too much of a stretch. Both Linux and Minix supported preemptive multitasking and concurrency. Neither Minix nor Linux originally supported threading (from memory - though they did by the time ELKS came out), but both did at some later point and could run it on an 8086.

Were there home computers that used the 8088? Well, the IBM XT and some cheap PC clones were certainly sold into homes, so I would guess these count. But the IBM PCjr was 8088-based and definitely marketed towards the home ("IBM's first attempt to enter the home computer market"). See also this link.

abligh
  • 418
  • 3
  • 6
  • 2
    Linux could not run on 8088 - the minimum was 386. If you are thinking about ELKS, it's not Linux. There were other unices (or similar) capable of running on 8088 (e.g. QNX), but these were very commercial and certainly not aimed at home computer users. – Radovan Garabík Aug 25 '17 at 12:43
  • @RadovanGarabík you are entirely correct. I'd misremembered and thought early Linux didn't require a 386 or above. I have edited the post. – abligh Aug 25 '17 at 15:04
  • Any decent multitasking OS needs a CPU capable of memory protection. Otherwise fuggedaboutit... – Harper - Reinstate Monica Aug 27 '17 at 21:06
  • 6
    @Harper I pretty much disagree - Even modern real-time multi-tasking systems (RTOS) can very well multi-task without memory protection. Memory protection is just some safety net against misbehaving applications. – tofro Aug 27 '17 at 22:26
  • 2
    The 8088 was not an 8-bit CPU. It could perform 16-bit arithmetic natively, so it should be described as a 16-bit CPU, regardless of the width of the external data bus. – Chromatix Jan 19 '20 at 05:01
0

Interrupts are co-operative multitasking - the hardware interrupts, it runs code to handle it, it returns to the (single) background loop (or a lower priority interrupt) by popping the stack. Most 8-bit home computers used this extensively - on a C-64, you could get an interrupt when two sprites in the video system collided (I think).

Operating systems like Unix have pre-emptive multi-tasking - a process gets scheduled to run when it has nothing to wait for and has reached the top of a priority/round-robin system. Under the hood, this is (usually) done by servicing a (timer or peripheral) interrupt and deciding which process gets the CPU. Mostly, each process will need its own stack.
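
The scheduling decision itself is small; here is a sketch in C (illustrative names and policy, not any particular OS): on each timer interrupt, pick the highest-priority runnable process, rotating among equals.

    enum state { RUNNABLE, WAITING };

    struct proc {
        enum state state;
        int priority;     /* higher number = more urgent          */
        /* each process's saved stack pointer would live here too */
    };

    #define NPROC 4
    static struct proc procs[NPROC];
    static int current = 0;

    /* Called from the timer (or peripheral) interrupt: decide which
       process gets the CPU next. */
    static int pick_next(void)
    {
        int best = -1, best_prio = -1;
        /* start after 'current' so equal-priority processes rotate */
        for (int k = 1; k <= NPROC; k++) {
            int i = (current + k) % NPROC;
            if (procs[i].state == RUNNABLE && procs[i].priority > best_prio) {
                best = i;
                best_prio = procs[i].priority;
            }
        }
        return best;   /* -1: nothing runnable, idle until next tick */
    }

    int main(void)
    {
        procs[0] = (struct proc){ RUNNABLE, 1 };
        procs[1] = (struct proc){ WAITING,  9 };   /* blocked on I/O */
        procs[2] = (struct proc){ RUNNABLE, 5 };
        procs[3] = (struct proc){ RUNNABLE, 5 };
        return pick_next();    /* 2: highest-priority runnable */
    }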

So you need RAM for the stacks and CPU for the context switches - which is why a lot of 8-bit systems didn't bother. You can actually have a fairly sophisticated user environment with no pre-emptive multitasking - e.g. Windows 3.x.

Rich
  • 215
  • 1
  • 4
  • 11
    I think "Interrupts are co-operative multitasking" is pushing the definition a bit far. Having interrupts and (necessarily short/fast) ISRs allows asynchronous events to be handled more easily, but aren't in themselves sufficient for what I'd call multi-tasking: for that, you really need support from the OS to switch context between multiple tasks. Such switching might happen on interrupts (e.g. timer-driven, preemptive multitasking) or in a cooperative fashion (e.g. early Windows). – TripeHound Aug 24 '17 at 07:05
  • 3
    @TripeHound: Interrupts may be used for pre-emptive multitasking on systems which can arrange to have multiple programs loaded at different physical addresses simultaneously. The code for that isn't complicated in cases where different processes use completely disjoint resources. The only complicated part is handling shared resources. On the other hand, even if shared resources are the only complicated part, they can be a doozy. – supercat Aug 24 '17 at 19:15
  • 2
    @supercat Agree with what you say; my point is that interrupts on their own aren't sufficient for multitasking; you also need the OS to handle memory and shared resources. – TripeHound Aug 24 '17 at 19:33
  • 1
    @TripeHound: It's pretty easy to make one program multi-task with another if they use disjoint resources and address ranges; it doesn't require a full-fledged OS. On the Z80, a timer-tick interrupt would primarily have to exchange the stack pointer with a stored SP value and then return from interrupt after swapping. Less than a couple dozen bytes of code. – supercat Aug 24 '17 at 19:45
  • @supercat How do you get away without also exchanging AF, BC, DE, and HL? – Wayne Conrad Aug 28 '17 at 21:30
  • @WayneConrad: Push them on the stack before doing the stack-swap. I guess since the Z80 has the prime set of registers, pushing everything might take a little more than the couple-dozen bytes I'd been thinking, but not a huge amount (let's see... push would take 4 bytes for the primary set, 2 for reg-swaps, 4 for the secondary set, and 4 for ix/iy; then pop would take that much again). Still, pretty small and straightforward. – supercat Aug 28 '17 at 21:52
  • See also TSRs in DOS: https://en.wikipedia.org/wiki/Terminate_and_stay_resident_program – Rich Aug 29 '17 at 04:23
  • 8
    The first sentence is wrong. Cooperative multi-tasking is when a thread voluntarily gives up the CPU (by making a system call, for example). Preemptive multi-tasking is when an interrupt preempts a thread and switches to a different thread. The clue is in the name. – JeremyP Aug 29 '17 at 10:03
-1

If you want to count somewhat more primitive approaches, there was once a product called 'Sidekick'. This program supported what was known as 'terminate and stay resident' programs; essentially, the code for several programs would be loaded into memory at once, and each was in a partial state of execution, with one 'running' and the others temporarily paused. There was no automatic process switching as in a proper OS, but the user could switch from one to another via special keystrokes. I don't remember if this was only for the IBM PC or anything earlier, though.

PMar
  • 11