16

Grace Hopper famously used 30 cm pieces of wire as a teaching aid to show how far signals can travel in one nanosecond. Indeed, the speed of light has become a limitation for many computers. The Cray-1 supercomputer was built in a "C" shape to minimize delays and skew between signals. Modern computers minimize distances to avoid the effects of delays and skew.
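
For scale, the arithmetic behind those wires can be checked directly; a quick sketch in Python, where the 0.66 velocity factor for signals in real wire is an assumed typical value, not part of the original anecdote:

```python
# How far a signal travels in one nanosecond.
c = 299_792_458        # speed of light in vacuum, m/s
ns = 1e-9              # one nanosecond, s

print(f"in vacuum: {c * ns * 100:.1f} cm")          # ~30.0 cm: Hopper's wire
print(f"in wire:   {0.66 * c * ns * 100:.1f} cm")   # ~19.8 cm at an assumed 0.66c
```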

Other physical factors have limited clock speed. Relay computers can switch only as fast as the contacts can physically move. Transistors (especially MOS) take time to switch. The growth of clock speed in microprocessors has slowed, due to the difficulty of dealing with the heat given off by faster circuits.

What limited the clock speeds of vacuum-tube based computers? Was it physical factors (distance, switching speed, heat dissipation) of the tubes themselves or their wiring? Or was the limit instead caused by items outside the processor, such as memory access time, input, and output?

Toby Speight
DrSheldon
  • Vacuum tube circuits could easily operate at frequencies over 100 MHz. On the other hand, magnetic core memory had a cycle time slower than 1 MHz. It's not very clear exactly what you mean by "physical factors", but the tubes themselves did not limit the operating speed. – alephzero Jun 21 '21 at 19:46
  • @alephzero: Question clarified. – DrSheldon Jun 21 '21 at 20:50
  • When were these 100 MHz valve circuits developed? In the late 1940s, 1 MHz clocks were pushing the envelope; Wilkes famously chose 0.5 MHz for EDSAC since his concern was to get a working computer ASAP -- and as far as I know, that clock rate was chosen on the basis of circuit capability rather than the (not yet selected) memory. And I was given to understand that Wilkes had state-of-the-art experience due to wartime work on radar. – dave Jun 21 '21 at 22:59
  • @another-dave Radio amateurs were officially permitted to operate at 28 MHz as early as 1927. WWII pushed the frequency range higher. – alephzero Jun 21 '21 at 23:15
  • ... the Royal Navy was using RDF (radio direction finding) kit operating at 85 MHz by 1940, for example. Centimeter-band radar operated at a few GHz (the same frequencies as modern cellphones!) using magnetron tubes in WWII, but that was fairly irrelevant for developing computer circuits. – alephzero Jun 21 '21 at 23:22
  • Yeah, I can't see using a few thousand magnetrons for a computer. But how large is, and what are the power requirements for, a 28 MHz switch? It seems to me that's as much a physical issue as any other. – dave Jun 21 '21 at 23:32
  • @alephzero The ability of valves (tubes) to operate on analogue signals in the multi-MHz range has little to do with the clock frequency achievable in a computer using these same valves as digital switches. – Glen Yates Jun 22 '21 at 22:02

4 Answers

7

I think that vacuum-tube computers were limited, besides storage speed and the like, by the tube technology itself.

There do exist hundreds-of-MHz tubes and even GHz-range ones, but they either have a specialized design, like this one:

[image: a specialized high-frequency tube]

or they depend on the specific speed of slowly moving electrons (as in magnetrons, travelling-wave, or backward-wave tubes). Ordinary tubes as used in 1940s and 1950s computer designs were neither as powerful nor as fast.

Typical tube designs of the era had a high output impedance, large output voltage swings, and relatively high input capacitance (made worse by the Miller effect). Those factors limited the speed of tube computers.
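
To put rough numbers on that claim, here is a back-of-the-envelope RC estimate; every component value below is an illustrative assumption for a generic small-signal triode stage, not data from any particular machine:

```python
# RC delay estimate for one tube logic stage (all values assumed/illustrative).
R_out = 10e3      # plate load / output impedance, ohms
C_gk = 5e-12      # grid-to-cathode input capacitance, farads
C_gp = 2e-12      # grid-to-plate capacitance, farads
gain = 20         # stage voltage gain

# The Miller effect makes the grid-to-plate capacitance appear
# (1 + gain) times larger at the input of the stage.
C_in = C_gk + C_gp * (1 + gain)
tau = R_out * C_in

print(f"effective input capacitance: {C_in * 1e12:.0f} pF")  # 47 pF
print(f"RC time constant: {tau * 1e9:.0f} ns")                # 470 ns
```

With per-stage delays on that order, clocks much beyond the low-MHz range are already a stretch, which is consistent with the clock rates of the era's machines.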

Toby Speight
lvd
  • Can those valves switch at that range? A lot of the top-frequency devices (whether tubes with funny geometries or diodes with funny bandgaps) are just sources. You can't even receive with them sanely without heterodyning. It's a bit like putting a lightbulb in your circuit and saying your circuit works at 500 THz. That's useless for a computer except in the very specific case of clock generation. – Dannie Jul 08 '21 at 11:40
6

Before caching, pipelining, parallel processing, and the like became common ways to increase the performance of computer implementations, memory access speed was the bottleneck. There was therefore no need to implement the higher-speed vacuum tube logic circuits required to support very high clock rates; that would just have wasted power, reducing reliability through increased heating.

According to Wikipedia, the IBM Naval Ordnance Research Calculator (NORC), which IBM claimed to be the fastest vacuum tube computer, had a memory access time of 8 µs.
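
A quick illustrative calculation shows how hard that figure caps throughput; the two-accesses-per-instruction assumption below is mine, purely for the sake of the sketch:

```python
# Instruction-rate ceiling imposed by an 8 µs memory (assumption: each
# instruction needs one fetch plus one operand access, with no overlap).
mem_access = 8e-6           # NORC memory access time, s (from the text above)
accesses_per_instr = 2      # assumed

max_rate = 1 / (mem_access * accesses_per_instr)
print(f"{max_rate:,.0f} instructions/s")   # 62,500 -- no 100 MHz logic required
```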

The Cray-1 was limited by the available cooling as much as it was by propagation delay. Hypothetical higher-clock-rate vacuum tube computers would likely have had the same thermal limitations on performance.

hotpaw2
  • The Univac 1107 in 1961 used a 4 µs cycle time on core (effectively 2 µs due to banking) and 0.67 µs for data in thin-film memory. By the early 1970s, core reached a 0.3 µs cycle time, making clock rates > 10 MHz a reality. – Raffzahn Jun 21 '21 at 21:56
  • I don't feel that the heating of computers of the '50s, '60s, and '70s depended in any way on their speed. The main reason contemporary CMOS ICs heat up more with increased clock rate is that every clock edge causes many gate capacitances to recharge. For tube and transistor designs, capacitance-recharge currents were negligible in comparison to static currents. Even TTL, ECL, and NMOS ICs showed very little effect of clock rate on power consumption. Things changed only when highly integrated CMOS ICs became a reality (see the sketch below). – lvd Jun 24 '21 at 10:31
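
As an aside, the effect lvd describes is the standard dynamic-power term P = C·V²·f; a tiny sketch with made-up round numbers, not data for any real machine:

```python
# Dynamic power spent recharging switched capacitance: P = C * V^2 * f.
# All values below are made-up round numbers for illustration only.
def dynamic_power(c_switched, v_supply, f_clock):
    return c_switched * v_supply**2 * f_clock

# Highly integrated CMOS: lots of switched capacitance, so power tracks clock.
print(dynamic_power(1e-9, 5.0, 10e6))   # 0.25 W from switching alone

# In a tube stage, static heater and plate dissipation (watts per tube)
# dwarfs any such term, so clock rate barely moves total power.
```
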
3

In 1963 or 1964, I saw a copy of the General Radio Experimenter magazine with an article discussing whether a 1 GHz computer was possible. It wasn't optimistic. This was before integrated circuits, so Grace Hopper's nanosecond wires (I used to have one) explained the problem, sort of.

And while I'm on the subject, a few years before that I read a science fiction story saying that one of the good reasons for earth satellites would be getting away from the need to stuff everything into a glass bulb. Obviously short-sighted even then. I now wonder why integrated vacuum tube circuits were never tried. Printed circuits were available then and could have been stacked in separate layers for filament, cathode, grid, and plate.

stretch
    "I now wonder why integrated vacuum tube circuits were never tried." -- they were, kind of. See e.g. 3NF for an early attempt: https://www.radiomuseum.org/tubes/tube_3nf.html (whole radio receiver in one, let's call it integrated, valve circuit) – Radovan Garabík Jul 08 '21 at 11:17
  • Around 1960, GE introduced the Compactron line. There were triple triode, double diode/double triode, double pentode and other combinations. They were very small, too -- about half the size of subminiature tubes. Basically the final form of improved tube technology. It's not too hard to imagine that trend continuing for some time, if transistors hadn't displaced them. – RETRAC Oct 04 '22 at 16:31
-1

Neither tube nor semiconductor computers are really limited by size (line length), nor is their performance limited by switching frequency. Performance is almost entirely defined by architecture.

Architectures based on a single synchronous clock will of course be limited by clock and signal distribution, but they are not the only way to build a computer; they are merely the simplest, if that. Computers can equally well be built on pipelined and/or asynchronous principles, not to mention parallelisation.
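
A toy throughput comparison makes the architectural point concrete; the stage count and delay below are made-up numbers, not measurements from any machine:

```python
# Unpipelined vs pipelined throughput from the same logic (toy numbers).
stage_delay = 100e-9    # assumed delay per logic stage, s
stages = 5              # assumed pipeline depth

unpipelined = 1 / (stage_delay * stages)  # one result per full pass
pipelined = 1 / stage_delay               # one result per stage, once filled

print(f"unpipelined: {unpipelined:,.0f} results/s")  # 2,000,000
print(f"pipelined:   {pipelined:,.0f} results/s")    # 10,000,000
```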

And the same is true for external components: for one thing, they evolved as well, but more importantly, they too can be built accordingly. Other, non-semiconductor memory technologies (think thin-film memory) were also available; there was no inherent reason to use core at all.

Real-world limits are never about 'how fast can it be made' but always about what can be made economically. In the same way, no modern computer is built from discrete transistors (or TTL, for that matter) anymore. It just doesn't pay.


To answer the inherent but off-topic what-if question:

Well, even if semiconductors had never been invented (ignoring that tube computers used diodes as well, which are semiconductors), it would still be a case of having an application that pays for the increase in power. If there is a use case with someone willing to pay, it will get designed and built.

Size-wise, tube computers would have become quite small, considering the micro tubes that were developed in the '70s and '80s.


BTW: the urban myth that the Cray-1 was shaped in a circle due to signal timing has long been debunked (the shape did help with cooling).

Raffzahn
  • Of course, tube-based diodes are older than dirt so could have been used. – Jon Custer Jun 21 '21 at 21:27
  • @JonCuster True; then again, the same is true for semiconductor diodes. Both effects were first described around the same time (~1880s), and IIRC the silicon-based diode was patented even before a tube-based one (pre-WW1). – Raffzahn Jun 21 '21 at 21:36
  • Sure, but silicon diodes basically are dirt... But, for switching states a triode/transistor are the way to go. – Jon Custer Jun 21 '21 at 21:38
  • @JonCuster True, but again, using triodes for things that can be done with diodes is a bad idea: for logic functions, only negation needs a triode; everything else is better off with diodes. That's why ENIAC already had more than 6,000 of them. It saves cost (in construction and operation) and space. – Raffzahn Jun 21 '21 at 21:48
  • No arguments from me. I'm thinking more of Cockroft and Walton using a few high-voltage (350kV) vacuum tube diodes for their ion accelerators, so they certainly did exist. – Jon Custer Jun 21 '21 at 21:50
  • Diode logic is limited by RC delays, not diode switching speed. – hotpaw2 Jun 21 '21 at 21:52
  • @JonCuster Their accelerator was built in the 1930s, right? At that time, both tube and solid-state diodes were already mature. – Raffzahn Jun 21 '21 at 21:58
  • @hotpaw2 True, but that limits just the speed of a single element, not the throughput of the machine as a whole. – Raffzahn Jun 21 '21 at 21:58
  • These days one does use many semiconductor (silicon) diodes to make a C-W stack with many stages. At the time (1930s indeed) the available solid state diodes just weren't up to the task. Instead they developed a high-voltage tube diode. There are other tube diodes (pretty standard components once upon a time), and some are still used in other accelerator applications such as the corona current feedback circuit on Van de Graaff type systems (the original circuit from the late 1930s is still the go-to solution). – Jon Custer Jun 21 '21 at 22:01
  • @JonCuster Again, no way I would argue with this. These are high-power applications; tubes simply scale way beyond semiconductors. I still remember the '70s, when we were happy to get off-the-shelf transistors able to handle a few watts, while tubes did kilowatts. Still, this is about computers, so not really high power, even considering the comparatively high voltages. – Raffzahn Jun 21 '21 at 22:06
  • One thing I've wondered about is whether it would have been practical to design computer subsystems for operation in a "dynamic vacuum", i.e. one that would be pumped down before and during use rather than a "static vacuum" (sealed system). I would think something like a 4-bit ALU and some registers could probably fit in a reasonably small bell jar, and be constructed in a way that would allow access to internal components. It might also allow tubes of coolant to flow through it to help with heat management. – supercat Jun 23 '21 at 15:19