9

As we know, computing has been (and will continue to be) important to research missions for space science and exploration.

I read about the Spaceborne Computer program that HPE and NASA ran on the ISS as a proof of concept.

Now, couldn't we use this for space research at Mars and the outer planets (OP)? Sending data back to Earth means radio-signal loss (not least free-space path loss) over interplanetary distances, and the links often run at mere kbps, which leaves more room for data corruption.

Couldn't we send orbital supercomputers to Mars and the OP?

For example, imagine we have a distributed orbital supercomputer around Mars composed of twenty or so SBC-1s modified to be self-sustaining spacecraft. Together they act as a massive supercomputer for Mars (or, if ringed around other planets, outer-planet) science research. You could support a wider variety of local science experiments conducted by rovers/probes (and eventually by human researchers once they land on Mars or an OP moon). Why twenty? In case one SBC-1 orbiter is down for whatever reason.

I want other opinions on whether this makes sense or not, and on the costs vs. benefits of such a proposal.

Glorfindel
wonderinghuh

6 Answers

37

They act as a massive supercomputer

I think you massively underestimate how massive a massive supercomputer is, and most importantly, how massive both the power requirements and the cooling requirements are.

Remember, a computer essentially turns electrical energy into heat, so not only do you have to put an enormous amount of energy into the computer, you also have to put another enormous amount of energy into removing the energy you put in.

The biggest solar arrays ever put on a spacecraft are the ones on the ISS. When the installation of all 6 iROSAs is finished, they will generate 215 kW of power. The most advanced space-based nuclear reactor design, the Kilopower project (still under development), includes a 10 kW version with a mass of about 1.5 t, so you could strap together 20 of those and get a similar power output to the ISS solar arrays, which are, however, at least twice as heavy.

The most efficient supercomputer in the world, according to the Top500 organization's Green500 list from November 2022, is the Henri system, which achieves 65.091 GFlops/W.

Assuming we could have 10 times the power of the ISS, that we could use all of that power for computing and none for cooling, and that we could build a supercomputer 10 times more efficient than Henri, we would get a computing power of ~1.4 EFlops.
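
To make the arithmetic explicit, here is a small Python sketch of that estimate (the 215 kW, 10 kW / 1.5 t, and 65.091 GFlops/W figures are the ones quoted above; the factor-of-10 multipliers are the deliberately generous assumptions):

```python
# Back-of-the-envelope version of the estimate above. Figures quoted in the answer:
# ISS arrays ~215 kW, Kilopower ~10 kW at ~1.5 t, Henri ~65.091 GFlops/W.
# The factor-of-10 multipliers are the deliberately generous assumptions.

iss_power_w = 215e3                       # ISS arrays with all six iROSAs installed, W
kilopower_unit_w, kilopower_unit_t = 10e3, 1.5
henri_gflops_per_w = 65.091               # Green500 #1, November 2022

# 20 Kilopower units roughly match the ISS arrays' output, at ~30 t just for power:
print(f"20 Kilopowers: {20 * kilopower_unit_w / 1e3:.0f} kW, {20 * kilopower_unit_t:.0f} t")

# Generous assumptions: 10x ISS power, all of it for compute, 10x Henri's efficiency.
power_w = 10 * iss_power_w
flops_per_w = 10 * henri_gflops_per_w * 1e9
print(f"Hypothetical orbital supercomputer: {power_w * flops_per_w / 1e18:.2f} EFlops")  # ~1.40
```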

That would put the computer barely, but just barely, at the top of the current Top500 list. So, you would think that putting a supercomputer in space would be quite reasonable.

However, you have to consider the following:

  1. I assumed that we could make a supercomputer that is 10 times more efficient than the current most efficient supercomputer in the world.
  2. I assumed that we could generate 10 times as much power as the current most powerful power generator in space.
  3. Supercomputers on Earth get faster over time, whereas your space-based computer can't be easily upgraded. E.g. the previous Top500 record holder only spent 2 years at the top of the list before being overtaken by a computer that was 2.5 times faster, and it took only 3 years to get a 10-fold improvement in performance.
  4. Supercomputers on Earth also get more efficient over time. It took only 8 years to go from single-digit GFlops/W to the current record of over 65 GFlops/W. (See the sketch after this list.)
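
To put rough numbers on points 3 and 4, here is a quick sketch of the improvement rates those figures imply; treating them as compound annual rates, and reading "single-digit GFlops/W" as roughly 5 GFlops/W, are my simplifications:

```python
# Annual improvement rates implied by the figures quoted in points 3 and 4.
# Treating them as compound annual growth is a simplification for illustration only.

def annual_rate(total_factor, years):
    """Compound annual improvement implied by a total factor achieved over `years`."""
    return total_factor ** (1 / years) - 1

print(f"2.5x faster in 2 years: {annual_rate(2.5, 2):.0%} per year")          # ~58%/yr
print(f"10x faster in 3 years:  {annual_rate(10, 3):.0%} per year")           # ~115%/yr
# Taking "single-digit GFlops/W" as roughly 5 GFlops/W (an assumption):
print(f"5 -> 65 GFlops/W in 8 years: {annual_rate(65 / 5, 8):.0%} per year")  # ~38%/yr
```

At anything like those rates, a fixed design launched after a multi-year development cycle falls down the list quickly.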

So, even with our completely unrealistic assumptions about making the computer 10 times more efficient than the current world record and generating 10 times more power than the ISS, your supercomputer will essentially be overtaken in less time than it takes to build and launch it. Based on current performance trends, even the 500th supercomputer on the Top500 list will overtake yours in the 2030s.

Just as an example: the SBC-1, when it was launched, would have placed around 130th in the Top500 list. When it returned, only 1.5 years later, it would have barely made it in at 400–450th. Your 20 SBC-1s have roughly the same computing power as two PlayStation 5 or Xbox Series X consoles, or a single top-tier gaming GPU.

And remember, even if your completely unrealistic supercomputer is, for a brief moment, the most powerful supercomputer in the world, it is still only one, whereas here on Earth there are tens of thousands of supercomputers.

Also, we haven't even talked about mass yet. A 200 kW version of the Kilopower would probably have a mass of 30 t, and that's just for the power generation. We still have no computer and no cooling.

Lastly, these kinds of computers are not very small:

[Photo of the Fugaku supercomputer]

Note that that's only the compute nodes. Not visible in the photo are the power distribution and cooling systems.

Another problem with your idea of a "distributed" supercomputer is that one of the major challenges with current supercomputers is communication bandwidth, and even more so communication latency. In other words, much of the engineering effort in current supercomputers goes into packing the compute nodes as close together as possible. Distributing the compute nodes around an orbit would devastatingly cripple the performance.
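
To get a feel for the latency penalty, here is a rough sketch; the 400 km circular orbit and the intra-cluster microsecond figure are my assumed ballpark values, while the 20 nodes come from the question:

```python
import math

# One-way light-travel time between neighbouring compute nodes, assuming 20 nodes
# spaced evenly around a circular 400 km Mars orbit (the altitude is an assumption).

c = 299_792_458          # speed of light, m/s
r_mars = 3_389_500       # mean radius of Mars, m
altitude = 400_000       # assumed orbital altitude, m
n_nodes = 20

orbit_radius = r_mars + altitude
neighbor_distance = 2 * orbit_radius * math.sin(math.pi / n_nodes)  # straight-line chord

print(f"Distance between neighbouring nodes: {neighbor_distance / 1e3:.0f} km")
print(f"One-way light time: {neighbor_distance / c * 1e3:.1f} ms")   # ~4 ms

# Node-to-node latency inside a terrestrial supercomputer interconnect is on the order
# of a microsecond, i.e. thousands of times lower -- before adding radio-link overhead,
# pointing constraints, or occultation by the planet for non-adjacent nodes.
```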

Jörg W Mittag
  • Cooling the computer and radiating 200 kW of heat into a vacuum is presumably left as an exercise to the reader. – Cadence Nov 27 '22 at 09:48
  • Yes, indeed. I am trying to show that even making totally unrealistic assumptions about power efficiency and power generation and magically solving cooling, it is still not a good idea. – Jörg W Mittag Nov 27 '22 at 09:50
  • Added to this would be the need to protect spaceborne computers from cosmic radiation & subsequent bit flips. – Fred Nov 27 '22 at 09:54
  • @Fred: Actually, the whole idea behind the SBC project is to take COTS hardware components and add resiliency in software. How well that is going to stand up outside of the Earth's magnetosphere is a different question, though. 9 of the 20 SSDs on SBC-1 failed in some fashion or another during its 1.5 years on the ISS, for example. – Jörg W Mittag Nov 27 '22 at 10:04
  • @JörgWMittag thanks for the well-thought-out answer. I didn't think of all these variables, and it gave me a lot of insight. – wonderinghuh Nov 28 '22 at 02:33
  • On reactors: the challenge is making the reactors smaller, not larger. We already know how to make high-output reactors. There are 1 MW designs small enough to take into space - radiators not included, however. – Therac Nov 28 '22 at 04:34
  • Although everything in this answer is true, it's missing the point, which isn't to beat the Top500, or even get anywhere close. It is to have a computer available near Mars that would be usable for researchers anywhere on Mars, with both acceptable transfer rates and latencies, and enough power to carry out computations that can't just be done on a laptop in the habitat. Sure, power and cooling are necessary, but in that regard there are actually quite a few arguments for putting it in orbit instead of on the ground (unlike on Earth). Radiation is a bigger concern. – leftaroundabout Nov 28 '22 at 23:37
  • To put it in a different perspective, 10 kW is not enough to power a single server rack, much less cool it. In fact, if we counted compute, storage and cooling, 10 kW is probably only enough for one or two (high-end, compute) servers. We're at the point where HPC datacenters are moving towards water cooling because (conditioned and refrigerated) air is not enough. – jaskij Nov 29 '22 at 08:09
  • @jaskij Conditioned and refrigerated air is not only not enough, it's inherently inefficient: You need large surfaces to transfer the heat from metal to cooled air, and you need pretty cold air to start with. Water cooling allows the coolant to be much warmer, allowing you to skip the refrigeration machinery (and power consumption), as well as allowing for smaller recoolers or actually putting the waste heat to some beneficial use. – cmaster - reinstate monica Nov 29 '22 at 10:20
  • @leftaroundabout But there are no researchers "anywhere on Mars". And when there are, there wouldn't be much point to having their computers in orbit as opposed to being in the habitat with them, piggybacking on their life support and radiation shielding. – Cadence Nov 30 '22 at 22:46
  • @Cadence that may be true, but it's a separate argument from the one made in the answer. And, unlike the radiation shielding part, “piggybacking on their life support” doesn't make much sense: computers need entirely different support than humans. If anything, I'd rather say it makes sense for the humans to piggyback on the waste heat produced by the computers. But that may still be a bad deal: a CPU is only as efficient a source of warmth as a stupid resistive heater, whereas heat pumps could in principle give the same warmth with less energy. ... – leftaroundabout Nov 30 '22 at 23:41
  • ...No doubt heat pumps have challenges on Mars, but so do solar cells. If you need X watts of power for some task, it'll generally be much cheaper to obtain them from orbiting solar cells (no landing required, easy automatic deployment, no atmosphere, no trouble with sand blowing over, etc.) – leftaroundabout Nov 30 '22 at 23:41
  • Another thing to consider: maintenance. Most large-scale computer clusters have full-time staff just to remove and replace parts as they fail. IIRC, several of the "world's largest supercomputer (for now)" systems were never able to run a full-capacity job because they never got 100% of the nodes and networking working at the same time. The MTBF was shorter than the time it took to replace things. – BCS Dec 03 '22 at 19:46
17

Mbps deep-space data rates are becoming more common these days. Data corruption is not much of a problem with 21st-century forward and backward error correction. And then, what problem are you trying to solve? The bulk of the bits represent science data. You want that minimally processed: the more sophisticated the processing, the harder it is to figure out what your instrument really detected when something surprising shows up. You don't want the "tunnel vision" of pre-programmed data reduction.
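
As a toy illustration of why forward error correction makes bit flips manageable, here is a minimal Hamming(7,4) encoder/decoder; real deep-space links use far stronger codes (convolutional, turbo, LDPC), so this is only a sketch of the principle:

```python
# Toy Hamming(7,4) code: 4 data bits become a 7-bit codeword that can correct any
# single flipped bit. Deep-space links use much stronger codes, but the idea is the same.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]          # bit positions 1..7

def decode(codeword):
    c = list(codeword)
    # Syndrome bits point at the (1-based) position of a single flipped bit, if any.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_position = s1 + 2 * s2 + 4 * s3
    if error_position:
        c[error_position - 1] ^= 1               # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]              # recover the data bits

data = [1, 0, 1, 1]
received = encode(*data)
received[5] ^= 1                                 # simulate a bit flip in transit
print(decode(received) == data)                  # True: the flip was corrected
```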

The TESS data archive is a fine example. Anybody can try out their own planet-detection algorithm on the archived data. Every algorithm has (often unanticipated) capabilities and limitations, so these experiments are valuable and increase the science yield of the mission. If TESS were limited to detecting planets in on-board processing, this wouldn't be possible. Also, the archive has become a major resource for non-exoplanet astronomy (asteroseismology, supernovae, binary stars, asteroids, ...). A lot of this was not fully anticipated in the mission design, but when you downlink a lot of minimally processed data, there's plenty of stuff in it.

John Doty
  • I think a huge benefit is not optical astronomy, but rather space VLBI radio astronomy where a tremendous amount of the data is reduced (and AFAIK discarded) by correlation. You're talking about tens of gigabits per second per station, and a data reduction of around 5 petabytes to a 1 megabyte image. – user71659 Nov 28 '22 at 20:22
  • @user71659 Never underestimate the bandwidth of a spacecraft filled with tapes dashing through space - it's sneaker net -> truck net -> rocket net ;-) – cmaster - reinstate monica Nov 29 '22 at 10:25
1

While it is technically feasible (although difficult and very expensive) to put a supercomputer around Mars, there are no use cases for it. Supercomputers are needed to do complex calculations on massive data sets; the limitation on Mars is not the amount of data we can process or transmit but the amount we can collect. We have a few scientific instruments, cameras and sensors on the surface, not enough to cause the kinds of bottlenecks you are assuming in your question. What pre-processing is required is done on the sensor platform. There's not enough data for a supercomputer to operate on.

Scientists need the raw data, and computing resources need to be near the scientists, who are on Earth.

GdD
1

The idea of putting some general computing resources near areas of exploration has some merit. Whether or not it needs to be in orbit or needs to be a "super computer" is debatable.

On Earth we often use a technology called "cloud computing" to offload expensive operations from small portable devices. This in turn reduces their size, weight, power, and cost. A classic example is speech recognition for small, connected devices like an Amazon Alexa or the Embodied Moxie robot.

One could envision some general-purpose processors located near or on Mars, that could serve as a computational resource for other equipment via a wireless link. This could be used to reduce the processing requirements on things like rovers or other mobile science or construction robots (which in turn reduces their size, weight, power, and cost).

On a rover, for example, image data from two cameras can be used to build a 3D model of the surrounding terrain, and that data is then used to plan driving routes. That's all computationally expensive, and something that could possibly be offloaded (especially since the rovers drive very slowly).

Gains can be significant. For example, if I can reduce the computational needs of my equipment so I don't need a multi-core, multi-GHz CPU or graphics chip but can instead use a microcontroller, then that circuit board becomes a lot less power-hungry. Now my batteries and solar panels get a lot smaller. This possibly shrinks the mechanical size of the design, and all of that adds up to a lot less weight, which in turn reduces the size of the launch vehicle and the amount of fuel I need to send it, which in turn might have a huge cost impact.
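
To make that cascade concrete, here is a sketch with purely illustrative numbers; the power draws, solar specific power, battery energy density, and night duration below are all assumptions chosen for the example, not figures from any real rover:

```python
# Illustrative sizing cascade for offloading heavy computation from a rover.
# Every number here is an assumption chosen for the example, not a real mission figure.

onboard_compute_w = 25.0        # assumed draw of a capable onboard CPU/GPU board
offloaded_compute_w = 2.0       # assumed draw of a microcontroller plus relay radio

solar_specific_power_w_per_kg = 20.0   # assumed, derated for Mars distance and dust
battery_wh_per_kg = 100.0              # assumed battery specific energy
night_hours = 12.0                     # assumed hours of operation on battery per sol

def power_system_mass_kg(load_w):
    """Solar array sized for the load plus a battery covering the assumed night period."""
    return load_w / solar_specific_power_w_per_kg + load_w * night_hours / battery_wh_per_kg

saved_kg = power_system_mass_kg(onboard_compute_w) - power_system_mass_kg(offloaded_compute_w)
print(f"Power-system mass saved per rover: ~{saved_kg:.1f} kg")   # ~3.9 kg with these numbers
```

The absolute numbers matter less than the shape of the argument: every watt removed from the rover ripples through array, battery, structure, landing system, and launch mass.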

Obviously just putting a computer in space won't, by itself, do anything unless new mission equipment is specifically designed to take advantage of that local infrastructure.

user4574
1

The difficulties in radiation hardening the computer would make this pointless.

Computers are sensitive to things like cosmic rays inducing data corruption in the CPU and memory directly, so the computers and chips have to be radiation hardened against this.

The more modern the chip, the smaller its components generally are - this comes with the added bonus of increased performance and often lower power requirements. But the smaller the components, the smaller the pathways within them, and the easier it is to induce data corruption in them.

One way around this that is used most in spaceflight is to use older designs - these are easier to harden against radiation. For example, the Curiosity and Perseverance Mars rovers each have a computer that uses a CPU design first released in 2001, and only 256 MB of RAM. Older designs have larger components, which are easier to harden against externally induced data corruption.

Putting a supercomputer in Mars orbit would mean sacrificing a lot in terms of performance simply through what's needed to ensure it can operate - you are much better off solving the bandwidth issues in transmission and doing the computing work on Earth.

Moo
  • The SBC program mentioned in the OP is actually aimed at developing new, more efficient means of hardening computers - but it's important that they're targeting general computing (equivalent to a consumer-grade desktop) in a shirtsleeves environment inside the ISS, not supercomputing and not in space. – Cadence Nov 30 '22 at 23:19
0

Today's laptop computers have more computing ability than supercomputers of two decades ago, and vastly more communication bandwidth. The Apple M1 chips have cut power needs to a half or a third of what they were. Chips with dozens of ARM cores are being made that are exceptionally efficient. Mass storage is not a problem.

This suggests that the supercomputer of today won't be needed tomorrow, and that distributing processing among several or many systems on a spacecraft will be more than enough. One can imagine craft that need a great deal of processing to do all kinds of synthetic sensor work: passive and active radar and synthetic-aperture systems aimed in all directions, communications using SDR antenna arrays (like Starlink), optical spectral and spatial analysis with very large camera chips, etc., plus monitoring and controlling the ship's systems and possibly supporting passengers. But a handful of modern laptops, along with some custom chips, can do it.

Realistically, these all have to be "space hardened", which means they will be slower and bigger and run hotter than current tech on Earth. The microscopic features in a chip have to be big enough to survive cosmic rays, and they probably need to be triple redundant to detect errors from energetic rays. However, if you are at a stage where you CAN build a supercomputer in orbit at Mars, you can build spacecraft with enough room, mass, and power sources to put it all onboard.
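
A minimal sketch of the triple-redundancy idea mentioned above, expressed in software (hardware TMR does the same thing with three physical copies of a circuit feeding a voter); the checksum values are made up for the example:

```python
# Triple modular redundancy (TMR): run the same computation on three redundant copies
# and take a majority vote, so a single radiation-induced error is simply outvoted.

from collections import Counter

def majority_vote(copies):
    """Return the value that at least two of the three redundant copies agree on."""
    value, votes = Counter(copies).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("all copies disagree; retry or fail over")
    return value

# Pretend three redundant processors computed the same result and one suffered a bit flip:
results = [0x5A5A, 0x5A5A, 0x5A5A ^ (1 << 3)]     # third copy has bit 3 flipped
print(hex(majority_vote(results)))                 # 0x5a5a -- the corrupted copy is outvoted
```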