36

Example situations:

You are using a program on Windows 95 and the screen goes blue, forcing you to restart the whole computer.

You are using a program on Windows 7 and the program stops responding, only to be killed via the Task Manager.

Why is there a difference?

DrSheldon
Delta Oscar Uniform
  • Both scenarios are possible on both OSs. Could you be a little more specific? (Yes, I know, crashes were more common on Windows 95.) – Stephen Kitt Jul 28 '19 at 18:21
  • @StephenKitt that's just the way I have seen it on the net. Someone installs some program on Windows 95 and after some playing the whole OS crashes. Not the best example, but I think it should be decent: https://youtu.be/H6OJFHOTORQ?t=5649 – Delta Oscar Uniform Jul 28 '19 at 18:24
  • So you're asking, in general, why Windows 95 is more crash-prone than Windows 7, with more crashes affecting the whole OS? – Stephen Kitt Jul 28 '19 at 18:31
  • @StephenKitt yeah. I think maybe it's because it was written in assembly, but I would like some explanation. – Delta Oscar Uniform Jul 28 '19 at 18:32
  • https://en.wikipedia.org/wiki/Blue_Screen_of_Death – Bruce Abbott Jul 28 '19 at 19:21
  • Keep in mind that Blue Screens of Death in Windows 9x weren't necessarily fatal. A lot of errors which nowadays are simply reported to the event log or cause an error popup would throw a BSoD on Win9x - but you could still resume the system after that happened. – Maciej Stachowski Jul 29 '19 at 09:55
  • Also, further reading: https://superuser.com/questions/382890/why-did-windows-98-give-us-blue-screens-frequently – Maciej Stachowski Jul 29 '19 at 09:55
  • AFAIK all Windows applications ran in ring 0 on Windows 9x, so they could do quite some damage... – Matteo Italia Jul 29 '19 at 12:20
  • AFAIK a blue screen on Windows 95 was a fault detected in the 16-bit subsystem (cf. the entire GUI). If the 16-bit heap got trashed, the OS was coming down. A fault in 32-bit code would just kill the single application. – Joshua Jul 29 '19 at 15:22
  • I wish my Windows 10 machine wouldn't blue-screen all the time! – CJ Dennis Jul 30 '19 at 02:38
  • Side note: your comparison isn't really valid, because it implies that Windows 7 (and other operating systems in the Windows NT family) never causes a blue-screen stop of the entire computer; this is not true. There are many reasons why NT-based operating systems will experience a STOP and crash the entire operating system, just like Win9x did - even though the two operating system families are massively different. – Caius Jard Jul 30 '19 at 09:33
  • @CaiusJard: In the Windows NT family, only a driver failure can bring down the whole system; application and service failures are isolated to a single process. In Win95 there were numerous parts of the OS that didn't even make an attempt at isolation. – Ben Voigt Jul 31 '19 at 01:55
  • @BenVoigt I'm not sure I agree that NT-family BSODs are exclusively caused by device driver failures, but it would be reasonable to assert that the majority are. When I said "many reasons" I was referring to the (literally) hundreds of different error codes one could potentially see on a BSOD, each one of them being a different reason why the BSOD occurred. It doesn't detract from my observation that the OP seems under the misapprehension that NT never blue-screens. – Caius Jard Jul 31 '19 at 04:08
  • Related: https://security.stackexchange.com/questions/107546/old-os-memory-space-protection-was-it-really-that-bad – Caius Jard Jul 31 '19 at 05:00
  • And a LOT of effort has been made to ensure that drivers are of high quality, and that those that aren't are properly and automatically diagnosed by the error reporting system. This has worked. It is very rare that drivers installed by Windows Update (which is a landmark in its own right) crash. These are much nicer times today! – Thorbjørn Ravn Andersen May 05 '20 at 08:57
  • OP, you have selected a highly biased data set and stretched too far to extrapolate a wrong conclusion from it, and then asked us why your wrong conclusion is true. It is not. What you speak of is attributable to Windows 3.1. – Harper - Reinstate Monica May 07 '20 at 15:44

6 Answers

85

You are comparing apples to motorcycles.

Windows 95 traces its lineage back through Windows 3.x all the way to Windows 1.x and MS-DOS/PC-DOS, themselves inspired by CP/M. It was conceived and designed as a single-user, cooperatively multitasking environment in which applications have a large degree of freedom in what to do. Windows 95 moved towards a preemptive multitasking design, but still had significant cooperative elements built-in.

The fact that it was intended as a consumer OS replacement for the combination of MS-DOS and Windows 3.1/3.11, and had to work (not necessarily provide a great user experience, but boot and allow starting applications) on as low-end a system as any 386DX with 4 MB RAM and around 50 MB of hard disk space, also put huge limitations on what Microsoft could do. Not least among these was its ability to use old MS-DOS device drivers, allowing interoperability with hardware that did not have native Windows 95 drivers.

So while Windows 95 provided a hugely revamped UI compared to Windows 3.x, brought many technical improvements, and paved the way for more advanced features, much of it was constrained by compatibility choices, and by the need to support hardware limitations, dating back over a decade. (The 386 itself was introduced in 1985.)

Now compare this to modern versions of Windows, which don't trace their lineage back to MS-DOS at all. Rather, modern versions of Windows are based on Windows NT which was basically a complete redesign, originally dubbed NT OS/2 and named Windows NT prior to release.

Windows NT was basically designed and written from the beginning with such things as user isolation (multiuser support), process isolation, kernel/userspace isolation (*), and no regard for driver compatibility with MS-DOS.

For a contemporary version, Windows NT 3.51 was released three months before Windows 95, and required at a minimum a 386 at 25 MHz, 12 MB RAM, and 90 MB hard disk space. That's quite a step up from the requirements of Windows 95; three times the RAM, twice the disk space, and quite possibly a faster CPU (the 386 came in versions clocked at 12-40 MHz over its product lifetime), and again, that's just to boot the operating system.

Keep in mind that at the time, a 486 with 8-12 MB RAM and a 500 MB hard disk was a reasonably high-end system. Compare Multimedia PC level 2 (1993) and level 3 (1996), only the latter of which went beyond a minimum of 4 MB RAM. Even an MPC Level 3 PC in 1996 wouldn't meet the hardware requirements of the 1995 Windows NT 3.51, as MPC 3 only required 8 MB RAM.

From a stability point of view, even Windows NT 3.51 was vastly better than Windows 95 could ever hope to be. It achieved this, however, by sacrificing a lot of things that home users cared about: the ability to run well on hardware that was reasonably affordable at the time; the ability to run DOS software that accessed hardware directly (as far as I know, while basic MS-DOS application compatibility was provided, there was no way other than dual-booting to run most DOS games on a Windows NT system); plug-and-play; and the ability to use hardware that lacked dedicated Windows NT drivers.

And that's what Microsoft has been building on for about the last two decades to create what we now know as Windows 10, by way of Windows NT 4.0, Windows 2000, XP, Vista, 7 and 8. (The DOS/Windows lineage ended with Windows ME.)

As another-dave said in another answer, process isolation (which is a cornerstone of, but on its own not sufficient to ensure, system stability) isn't a bolt-on; it pretty much needs to be designed in from the beginning. If it isn't there, programmers (especially back in the day, when squeezing every bit of performance out of a system was basically a requirement) will take shortcuts which break if you add such isolation later on. (Compare all the trouble Apple had adding even basic protections to classic Mac OS; they, too, ended up doing a complete redesign of the OS that, among other things, added such protections.) Windows 95 didn't have it, nor did Microsoft have the desire to do the work needed to add it; Windows NT did have such isolation (and paid the cost for having it). So even though Windows NT was far from uncrashable, this difference in the level of process isolation provided by the operating system shows in their relative stability, even when comparing contemporary versions.


*) The idea behind kernel/userspace isolation (usually referred to as "ring 0" and "ring 3" respectively in an Intel environment) is that while the operating system kernel has full access to the entire system (it needs to, in order to do its job properly; a possible exception could perhaps be argued for a true microkernel design, but even there, some part of the operating system needs to perform the lowest-level operations; there's just less of it), normal applications generally don't need that level of access. In a multitasking environment, allowing just any application to write to just any memory location, or to access any hardware device directly, comes with a completely unnecessary risk of harming the operating system and/or other running applications.

This isn't anywhere near as much of a problem in a single-tasking environment such as MS-DOS, where the running application is basically assumed to be in complete control of the computer anyway.

Usually, the only code (other than the operating system kernel proper) that actually needs to have such a level of access in a multitasking environment is hardware drivers. With good design, even those can usually be restricted only to the portions of the system they actually need to work with, though that does increase complexity further, and absent separate controls, a driver can always claim to need more than it strictly speaking would need.

Windows 95 did have rudimentary kernel/userspace and process/process separation, but it was pretty much trivial to bypass if you wanted to, and drivers (even old DOS drivers) basically bypassed it by design. Windows NT fully enforced such separation right from the beginning. The latter makes it much easier to isolate a fault to a single process, thereby greatly reducing the risk of an errant userspace process causing damage that cannot be known to be restricted only to that process.

Even with Windows NT, back then as well as today, if something goes wrong in kernel mode, it will generally cause the OS to crash. It was just a lot harder, in software, to cause something to go sufficiently wrong in kernel mode in Windows NT than in Windows 95, and therefore correspondingly harder to cause the entire operating system to crash. Not impossible, just harder.

user
  • It's a rare occurrence for a newer answer to "kill" an older one by simply being a lot better. Congratulations. – Delta Oscar Uniform Jul 29 '19 at 18:04
  • Linux was lucky in this case because it was designed from UNIX, which had process isolation (in certain forms) from the beginning, as a result of its use on mainframes. That's one reason why Windows was considered so much less stable than *nix, since the former was far more vulnerable to fatal crashes. Nowadays, though, they both have excellent stability and a very strong multi-process isolation model. – forest Jul 30 '19 at 06:54
  • NT and OS/2 were different things. – OrangeDog Jul 30 '19 at 10:51
  • @OrangeDog why exactly? – Delta Oscar Uniform Jul 30 '19 at 10:59
  • Apparently it was called "NT OS/2" internally during design, before they knew whether it was going to get IBM or Microsoft branding. – OrangeDog Jul 30 '19 at 11:03
  • @OrangeDog See https://en.wikipedia.org/wiki/Windows_NT_3.1#As_NT_OS/2 and https://en.wikipedia.org/wiki/Windows_NT#Development - I've edited the answer to clarify the naming issue. – user Jul 30 '19 at 11:17
  • I think this answer would benefit from some expansion on the kernel mode/user mode distinction that is only briefly mentioned. Also worth pointing out (IMHO) is that Win9x and derivatives are really only single-user OSs, and hence we don't care so much about keeping everything running stably versus the opportunity to wring out more performance, as we aren't going to be trashing multiple users' work. Programmer mindset and psychology was probably also a major factor, it having been only a few years since "load it all off a floppy, into memory, and hand it over to the CPU" was de facto - nothing virtualized/shared. – Caius Jard Jul 31 '19 at 04:58
  • Some problems with this answer: Windows 95 was designed from the ground up with process isolation in mind. The main problem (apart from the driver issue) is that the graphics subsystem was taken straight from Windows 3.x and thus was 16-bit and could not be run in a pre-emptive environment. So there was a global lock, which caused contention problems. – JeremyP Aug 08 '22 at 16:50
  • The driver problem is a problem with any monolithic kernel. Drivers have to be able to access kernel resources, and in a monolithic kernel that means they need to run in kernel mode, which means they can easily crash the system. Every commercially successful operating system that runs on a PC is vulnerable. Windows 95 was particularly vulnerable only because of the wide variety of hardware it supported and its need to support 16-bit Win 3 drivers. WinNT also had a reliability problem for similar reasons. – JeremyP Aug 08 '22 at 16:56
40

The decision about whether to kill a process or crash the OS generally depends on whether the problem can be isolated to the process.

For example, if a running process in user mode attempts to read from an address that's not present in its address space, that's not going to affect anything else. The process can be terminated cleanly.
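
A minimal sketch of this behaviour, using Python on a modern OS (the child program and the invalid address are illustrative, not Windows-specific): a subprocess touches memory that isn't mapped into its address space, the OS terminates just that process, and the parent carries on.

```python
import subprocess
import sys

# Code for a child process that reads from an address not mapped into its
# address space. The MMU raises a fault; the OS terminates the child and
# nothing outside that process is affected.
FAULTING_CHILD = "import ctypes; ctypes.string_at(0)"

def run_faulting_child():
    """Run the faulting code in its own process and report how it ended."""
    result = subprocess.run(
        [sys.executable, "-c", FAULTING_CHILD],
        capture_output=True,
    )
    # A nonzero return code (negative on POSIX when killed by a signal
    # such as SIGSEGV) shows the fault was confined to the child.
    return result.returncode

if __name__ == "__main__":
    code = run_faulting_child()
    print(f"child terminated with code {code}; parent unaffected")
```

The parent keeps running regardless of how badly the child misbehaves, which is exactly the isolation a user-mode access fault gives the OS.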

At the other extreme, if the file system running in kernel mode discovers that some data structure is not as expected, then it is wise to crash the entire system immediately, because the consequence of corrupt in-memory control structures could be loss of disk data, and that's the worst thing that could happen.
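
The fail-stop policy described here can be sketched as a kernel-style consistency check (the structure and invariant are invented for illustration; a real kernel would halt the machine with a bugcheck rather than raise an exception):

```python
def bugcheck(reason):
    # Stand-in for a kernel "bugcheck"/STOP (the blue screen): once
    # in-memory state is known to be corrupt, halting immediately is safer
    # than continuing and possibly flushing garbage back to disk.
    raise SystemExit(f"STOP: {reason}")

def validate_superblock(sb):
    # Hypothetical invariant for an in-memory filesystem structure: the
    # free-block count can never exceed the total block count. If it does,
    # memory has been corrupted and no further disk writes can be trusted.
    if sb["free_blocks"] > sb["total_blocks"]:
        bugcheck("superblock corrupt: free_blocks > total_blocks")
    return True

if __name__ == "__main__":
    validate_superblock({"free_blocks": 100, "total_blocks": 1000})
    print("superblock consistent; continuing")
```

The point of the design is that the check fails loudly and immediately: a violated invariant stops everything rather than letting a corrupted structure drive further writes.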

With specific respect to the Windows NT (2000, XP, 7) family: the OS was designed with good process isolation from the beginning. For Windows 9x, the heritage of Windows up through 3.x required some compromises in the name of compatibility. In particular, the first megabyte of address space is common to all processes: corruption there can kill the whole system.

TL;DR - process isolation is a day-0 design issue.

user3840170
dave
  • All these poor people that actually spent money on Windows ME instead of 2000... – Delta Oscar Uniform Jul 28 '19 at 19:25
  • What does 0-day design issue mean? – Omar and Lorraine Jul 29 '19 at 08:05
  • @Wilson it means it's basically impossible to retrofit and has to be considered on the first day of designing the operating system, when drawing up the one-page or even one-sentence description of what you're intending to build. – pjc50 Jul 29 '19 at 09:21
  • Wasn't it also the case that Microsoft really ramped up their QA, especially for drivers, sometime between Windows XP and Vista? – Vilx- Jul 29 '19 at 13:09
  • @Vilx Microsoft claimed (which I do believe accurate) that third-party drivers were responsible for the vast majority of the blue screens in Windows back in the late Windows 98 SE era. This is the justification for why they made it mandatory to use signed drivers that pass WHQL tests to ship Windows as an OEM. – UnhandledExcepSean Jul 29 '19 at 14:17
  • "...loss of disk data, and that's the worst thing that could happen." IMO there are a number of things that could happen that are worse than loss of data: security exploits, actual physical hardware damage, etc. Doesn't really change your answer, but I thought it was worth mentioning. – GrandOpener Jul 29 '19 at 15:15
  • Calling it a "day-0 design issue" is a bit overstated. You could very well retrofit a fix for this just by running each application in a separate VM or under full software emulation. It's just costlier than most people would like. – R.. GitHub STOP HELPING ICE Jul 29 '19 at 17:18
  • AIUI this is something that's evolved continuously over time through the versions of Windows. Even before 95, I remember being impressed by a "general protection fault" and app crash in Windows 3.1 in the same circumstances that would cause an "unrecoverable application error" and system crash on 3.0. – Random832 Jul 29 '19 at 17:58
  • @R.. Then you run into problems with applications that start other applications and expect to be able to access their data. – user253751 Jul 30 '19 at 04:54
  • @immibis: Some would consider the loss of that a feature.. :-) – R.. GitHub STOP HELPING ICE Jul 30 '19 at 04:55
  • @R.. The user won't. – user253751 Jul 30 '19 at 05:30
  • @UnhandledExcepSean "third-party drivers were responsible for..." - they still are; just about a month ago I had a Windows 10 PC that would BSOD daily (sometimes twice a day) due to a faulty Intel HDD driver. Not saying things haven't improved, though - the screen now has a nicer shade of blue and a QR code! – Headcrab Jul 30 '19 at 08:01
  • While I agree that process isolation has a lot to answer for, even if Windows 95 were fully isolated, you still would have gotten total system crashes far more often than you do today. Every part of a modern PC has moved on in terms of stability and reliability since 1996. Today, your CPU has 99% fewer bugs in it, your graphics hardware doesn't have dozens of frame-buffer crashes, your sound chip isn't attached to a magnetic speaker directly wired into the motherboard, your BIOS battery is made to a much higher spec, your RAM is fabricated using modern PCBs and chips; the list goes on and on... – Geoff Griswald Jul 31 '19 at 15:35
  • @JohnEddowes I wouldn't buy that a newly designed CPU of today has fewer bugs than a CPU of those old times, as the complexity is far higher. But they also support software updates, which leads to the more important point: the possibility of downloading updates from the internet automatically… – Holger Aug 08 '19 at 09:47
  • True, 95 didn't let you update from the internet, as it was designed well before the internet was commonplace. We had to wait for Windows 98 for that! Although Win95 did get Windows Update retroactively patched in later. We take updates and patches for granted these days, but in the 90s it was extremely rare even for Windows to get an update, much less the apps or games you were running. – Geoff Griswald Aug 09 '19 at 10:20
  • @Wilson "What does 0-day design issue mean?" It's "day 0", not "0 day". Day 0 is the first day you start work; we're programmers, so we number from 0, not from 1. It means, slightly exaggerated, that something needs to be designed in from the beginning, not added on afterwards. – dave May 08 '20 at 00:33
12

Although Windows 95 introduced support for 32-bit applications with memory protection, it was still somewhat reliant on MS-DOS. For example, where native 32-bit drivers were not available, it used 16-bit DOS drivers instead. Even 32-bit applications had to be synchronized with the 16-bit DOS environment.

A fault in the DOS part of the system would bring the whole thing crashing down. 16-bit DOS applications did not have any meaningful memory protection or resource management, and a crash could not be recovered from in most instances. And since even 32-bit applications had to interact with DOS components, they were not entirely immune either.

Another major cause of instability was that 32-bit drivers ran inside the Windows kernel (the core of the system). That reduced the amount of memory protection they had, and meant their bugs would crash or corrupt the kernel too.

By the time Windows 7 came around, drivers had mostly been moved out of the kernel, and faults could be recovered from, similar to an application crashing. There are some exceptions, such as low-level storage drivers.

user
  • I'm not convinced by "drivers had been mostly moved out of the kernel". How many, what devices did they drive, etc? – dave Jul 30 '19 at 11:47
  • In Linux there are user mode APIs for filesystem drivers, parallel port access, USB drivers, and general purpose I/O port access. I don't think this exists on Windows! – Alex Cannon May 20 '20 at 03:01
  • @AlexCannon: There is; it's called the User Mode Driver Framework. Filesystems are an exception, IIRC; the issue is that user-mode code can be swapped out. This is an obvious problem when you swap out the file-system driver for the swap file! – MSalters May 25 '20 at 10:18
4

Addendum:

Some special memory areas (e.g. the infamous "GDI resources") that all applications needed were extremely limited in size (due to being shared with 16-bit APIs, which needed segment size limits respected) - and they very easily ran into exhaustion, with no effective safeguards present.

A lot of essential system APIs did not sanity-check their parameters well - if you accidentally fed them invalid pointers, or pointers to a different type of resource than expected, all kinds of unwanted behaviour could happen - especially when something in a 16-bit shared area was involved. Getting GDI object handles in a twist... ouch.

Also, the system trusted the responses to certain messages too much. I remember you could make Windows 9x extremely hard to shut down properly simply by installing a WM_QUERYENDSESSION handler that silently returned FALSE every time...

16-bit apps were run with a LOT of gratuitous privileges for compatibility reasons - enough to directly access... and in the worst case crash!... some of the hardware.

rackandboneman
3

The example in the original question isn't completely accurate. If a 32-bit Win32 application crashes, whether on Windows 95 or on Windows NT 4 / Windows 7, that program will crash and nothing else will be affected. Now, there surely are some bugs in the Win95 Win32 API that could be used by a program to crash the whole system, but that's not the focus here. It's not that a Win32 program could crash all of Win95; it's that other types of programs and privileges were allowed on Windows 95 that aren't allowed on Windows NT through 7.

Windows 95 is just an extension of Windows 3.1 in 386 enhanced mode with Win32s installed. Windows 3.1 runs on top of DOS. DOS gives programs complete control over the computer until the program exits. If a program locked up, you had to reboot. If a program crashed, you should reboot anyway, because you didn't know whether the memory used by DOS had been corrupted. 16-bit Windows applications use a more complicated method of sharing memory, and there is no memory protection, so one program can access memory belonging to another process or to Windows itself. So one program crashing meant the whole computer should be restarted.

Windows NT introduced fully virtualized DOS and 16-bit program environments. A 16-bit program can crash all of the 16-bit programs running on NT, but that's it. It continues to work that way on 32-bit Windows 7. There is no more direct access to hardware or to memory that is used by the operating system. The only way to crash NT is with a driver.

In summary, it's not so much 32-bit programs on Windows 95 that crash Win95; it's the abundance of DOS and 16-bit programs running with direct access to operating system memory for legacy compatibility. The operating system itself is largely 16-bit.

Alex Cannon
  • I don't think this is really true. 32-bit process isolation wasn't very good either. E.g. the upper 2 GB was common to all processes, and anything in that region that was writable by any process was writable by all - including kernel data structures, I think. Also, Win95 wasn't good at cleaning up GDI handles that weren't explicitly freed, so a 32-bit process that exited badly could deplete that limited resource. I think DOS box isolation was (somewhat) better than you suggest also. Most DOS functionality was emulated in 32-bit code unless the user loaded an unsupported driver in config.sys. – benrg Nov 29 '23 at 19:51
0

Everyone is talking about the improvements in software between Windows 95 and Windows 7, but in those 15 years there were huge advancements in hardware as well. You can run an identical copy of Linux on some consumer-grade hardware from 1996 and some hardware from 2016 and you will find a world of difference in system stability.

Older hardware simply crashed more often, and it was only around 2003-2004 that things really changed. Manufacturers of motherboards, CPUs, RAM and various other hardware upped their game significantly as businesses and home users demanded better stability.

One of the most popular motherboard manufacturers in the 90s was a company named "PC Chips", which also traded under about 20 other names. They produced shonky, poorly soldered, barely shielded motherboards at rock-bottom prices. A lot of the system crashes back then were due to people running those motherboards, and not to Windows.

That said, Win95 was horribly crash prone itself, and it was always a guessing game as to whether your crashes were hardware or software related.

Geoff Griswald
  • PC Chips? Did good ole IBM computers use these ugly motherboards? – Delta Oscar Uniform Jul 31 '19 at 15:45
  • Also, where can I find info about these cheapskates? Any Wikipedia pages or something like that? – Delta Oscar Uniform Jul 31 '19 at 15:47
  • Certainly in my case, moving from Windows 9x (I think from 95 OSR2 at the time, but it might possibly have been 98) to NT 4.0 Workstation made an enormous difference in system stability, with absolutely no hardware changes. But then again, by that time I had a powerful enough system to run NT well. – user Jul 31 '19 at 15:50
  • Also, don't you contradict yourself in the last two sentences? You say that a lot of crashes were hardware-related, and then immediately turn around and say it was a guessing game whether crashes were hardware- or software-related. While strictly speaking both statements can be true, it would seem difficult to determine whether the first statement is true in light of the second. Also, 2003-ish was when Windows XP (released late 2001) would be starting to make significant inroads in the consumer market. – user Jul 31 '19 at 15:53
  • @DeltaOscarUniform They are still around, and almost every OEM uses something from them today. The company and brand were changed to ECS, mostly because of the reputation of the "PC Chips" brand name. Speaking of which, my first decent computer that I got on my own was a 2003-ish Socket A with an Athlon Thunderbird. The motherboard was an ECS, and... it did eat it eventually; it was like the most budget board you could imagine. I found it in a gutted tower in a warehouse basement, nothing mounted in it except that board and CPU. – J. M. Becker Aug 01 '19 at 04:32
  • Some late-90s hardware (not all, and you never knew what you got) was perfectly able to do hundreds of days of uptime under either Linux/Unix (if the drivers you needed weren't bugged!) or Windows NT, or when running non-bugged, single-purpose DOS equipment-control programs. It did not under most Windows 9x versions: research what the 49-day bug was if you are curious :) ... One problem with 90s hardware was still-widespread ISA hardware - easy to misconfigure, and easy to crash the system with on a hardware level if misconfigured :) – rackandboneman Aug 01 '19 at 21:45
  • ECS=Elitegroup IIRC... Some (not all) of these boards were dodgy indeed IIRC... – rackandboneman Aug 01 '19 at 21:48
  • While it was impossible to tell in the moment whether an OS crash was software or hardware related, over a long enough time frame containing enough crashes, patterns could be detected... "oh so if I use THIS motherboard I get one crash per day, but THIS motherboard only gives me one crash per week!" – Geoff Griswald Aug 07 '19 at 08:34
  • I have obtained many old computers from the 90s and only one of them ever had a hardware issue that caused instability. It was due to a faulty motherboard connection if the board was bent a certain way. Tip: if you ever get an NMI (non-maskable interrupt) fault, that's usually the chipset's way of telling you that a parity error occurred on the PCI bus due to dirty contacts. Win2k will blue-screen to prevent data corruption! It wasn't until 2000-2006 that the bad-capacitor plague and badly soldered BGA video chip issues started. – Alex Cannon May 20 '20 at 02:57
  • @AlexCannon Any computer from the 90s which is still working today likely never had stability issues to begin with; the ones that were unstable got tossed in the trash years ago. The stuff that survived to 2020 is likely high-end, high-quality branded equipment. The real instability culprits were PC Chips motherboards, low-quality CPU fabrication from AMD and Cyrix, poor manufacturing techniques, low-quality expansion cards with poor voltage regulation, bad power supplies, and various other issues. – Geoff Griswald May 21 '20 at 12:02