38

In a previous question, I learned that the JWST's solid state recorder (SSR) can hold at least 58.8 Gbytes of recorded science data, and that each day requires two 4-hour downlinks with Earth to empty its data buffers. In another question, I learned that the spacecraft's storage could fill up in as little as 160 minutes.
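
For a rough sense of scale, here is a quick back-of-the-envelope calculation of the data rates those figures imply (just a sketch using the numbers quoted above; the actual link rates will differ):

    # Rough sketch using only the figures quoted above: 58.8 GB of science
    # storage, ~160 minutes to fill, and two 4-hour downlink contacts per day.
    storage_gb = 58.8
    fill_minutes = 160
    downlink_hours_per_day = 2 * 4

    fill_rate_mbps = storage_gb * 8000 / (fill_minutes * 60)              # GB -> Mbit, min -> s
    drain_rate_mbps = storage_gb * 8000 / (downlink_hours_per_day * 3600)

    print(f"worst-case fill rate:      ~{fill_rate_mbps:.0f} Mbit/s")     # ~49 Mbit/s
    print(f"average drain rate needed: ~{drain_rate_mbps:.0f} Mbit/s")    # ~16 Mbit/s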

I'm curious: why did the designers choose to give this $10 billion state-of-the-art satellite such a small amount of storage space?

I'm asking because I assume there might be times when a transmission doesn't complete in its scheduled timeframe, or when a downlink day is missed because of ground control issues. Or even times when the telescope has to stop collecting data and wait for its downlink window because the disk is full.

Wouldn't the spacecraft be more future-proof if it had a larger data buffer, so it also isn't affected by any "disk" sectors that have gone bad during its operation?

Or is there something I'm not considering, like a design constraint perhaps?

uhoh
  • 148,791
  • 53
  • 476
  • 1,473
Gabriel Fair
  • 1,331
  • 2
  • 10
  • 19
  • 67
    As an Old, the idea that 60GB of solid state storage would be considered "so small" is alternately hilarious and depressing. – Russell Borogove Jan 31 '21 at 02:38
  • 1
    Yeah some modern benchmarks to note: 1) TESS Flash Memory Card (FMC) holds 192GB of mass data storage. 2) My cell phone has 516gb of storage and a 108MP camera with 1 photo (jpg, compressed lossy) being 22.6 megabytes. Which means that the JWST's SSR would fill up with 2.6 of my cell phone's photos. – Gabriel Fair Jan 31 '21 at 02:53
  • 24
    No, 2600 of your photos. – Russell Borogove Jan 31 '21 at 02:56
  • It really depends on the instrument and mission. Sky doesn't change much anyway so you don't often take thousands of pictures per day. – user3528438 Jan 31 '21 at 03:28
  • You're right, my math was wrong – Gabriel Fair Jan 31 '21 at 05:05
  • 7
    Solid state disks have a lot of “extra” capacity built in, for wear levelling and for replacing blocks which fail - the larger the “rated” size of the storage unit, the more “extra” capacity it has. For something as critical as this, it wouldn't surprise me if the actual capacity was a lot more than double the “rated” capacity, given the expected lifetime of the telescope, its working environment, usage cycles etc. If you wanted to up the “rated” capacity, you would also want to up the redundant capacity as well. I need to research some specifics here. – Moo Jan 31 '21 at 06:39
  • 1
    @RussellBorogove Even as a modern SRE/DevOps person in my late 20’s, the thought that 60GB of storage is ‘small’ sounds ridiculous. I regularly deal with systems with single-digit GB of storage, and have worked with hardware where the persistent storage was measured in single-digit MB. – Austin Hemmelgarn Jan 31 '21 at 16:10
  • 4
    @GabrielFair JWST contract was awarded in 2003. TESS contract award was 2013, a decade later. So, TESS has much more up-to-date tech. TESS is approaching three years in space, JWST remains on the ground. Go figure. – John Doty Jan 31 '21 at 16:28
  • 1
    @user3528438 A lot of the interesting stuff changes quickly. That's why TESS, based on experience with the actual limits of its data handling, is now taking "full frame" images every 10 minutes rather than every 30 minutes as planned. Its user community has been enthusiastic. – John Doty Jan 31 '21 at 16:48
  • 19
    @GabrielFair, your phone's storage isn't radiation-hardened. Put it on the JWST, and odds are it'll fail before the telescope even gets into position. – Mark Jan 31 '21 at 20:11
  • Related: https://space.stackexchange.com/q/10028/2847 – Mark Jan 31 '21 at 20:50
  • 1
    Not to mention that radiation-hardened anything below 32nm node sizes is still hard to find. Yes, the LEON2-FT is significantly worse than anything in 2020 cellphones, but I'd like to see the cellphone go to space... – shaunakde Feb 02 '21 at 02:39
  • Your phone's flash memory wears out over time even without radiation. Some interesting data about automotive usage – Rsf Feb 02 '21 at 12:14
  • 2
    As a former rocket engineer, one of the biggest disappointments of the job was learning on day 1 that you didn't get to play with next-gen technology. You had to use next-gen minus 6, because it was the only technology that could stand up to the harsh environment of space. Bonus: there was only 1 manufacturer who still made such old hardware. Sigh. – JS. Feb 02 '21 at 17:09
  • 2
    @JS Yet some missions avoid this to some extent. I joked that the QPL for HETE-2 was the Digi-Key catalog, but it was close to the truth. The only electronic failure we had in six years of operation on orbit was our GPS receiver, which was a space-qualified unit. Still, by the time we launched in 2000, the tech was a bit old, early 90's. This was partly due to the fact that HETE-2 was a rebuild with few changes from HETE-1, which suffered a launch vehicle failure in 1996. – John Doty Feb 02 '21 at 21:32

5 Answers

58

Best Technology Available

By late 2002, SSDs had just reached ~80 GB capacities. Of course, JWST was not going to take a product which had never seen enterprise deployment and plonk it into a \$2 billion platform with fingers crossed, hoping the vendor did a really good job. The bleeding edge referred to above is for terrestrial, consumer-grade drives. Once we factor in radiation hardening for space usage, existing product history for confidence, and whatever additional redundancy the mission designers chose to hedge their $2+ billion toy for its 5-10 year lifetime, 68 GB starts to look pretty reasonable.

Timeline

Yes, it took 10 years to go from MB to GB, and another 10 to go from GB to TB. So by 2010, we already had TB standalone drives (not arrays) in a COTS 3.5" SATA form factor. But by 2007, the JWST had already passed most of the core reviews, so technology had to have been chosen years prior, using whatever was available then. By 2006, NASA had already shelled out $1 billion on the project. The prime contract was awarded in 2003. It is not hard to imagine that the data storage technology was selected within a year or two of this point in time, given that they had already burned through a billion dollars in development just 3 years later.

If we were to redesign JWST right now, we could choose something like Mercury's 440 GB space-qualified RH3440. This is bleeding-edge technology, and it's not even 1 TB. Also note that it uses more "primitive" Single-Level Cell NAND technology, rather than high-density Quad-Level Cells. Obviously, this is for robustness and is part of what makes it space-qualified. This is why you cannot compare consumer-grade and space-grade products on a level basis. We could put a few of these in the JWST and get over 1 TB of storage, but I would imagine that putting more than a dozen of them on board would push size and power constraints. So let's say we could get up to 6 TB of storage for JWST 2.0, 2021 edition.

The exponential growth of SSD density has followed an approximate ratio of 1000x per 10 years. Rewind 10 years to 2011, and we would expect to plop a mere 6 GB on board using the cutting-edge space tech available at the time. Go back 17 years to 2003-4, and the fact that it has 60+ GB of SSD storage is actually looking pretty remarkable, given that we would extrapolate it to maybe 60 MB (space-grade!!!). There may be well more than a dozen discrete drives on board JWST (can't find design details at that level of granularity).
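
To make that extrapolation concrete, here is a minimal sketch in Python, assuming a smooth 1000x-per-decade growth curve and anchoring on the hypothetical 6 TB figure for a 2021 rebuild:

    # Minimal sketch of the extrapolation above: assume space-qualified storage
    # grows ~1000x per decade and anchor on the hypothetical 6 TB "JWST 2.0" figure.
    GROWTH_PER_DECADE = 1000.0
    ANCHOR_YEAR, ANCHOR_TB = 2021, 6.0

    def expected_capacity_tb(year):
        """Extrapolated space-grade capacity (TB) for a given design year."""
        return ANCHOR_TB * GROWTH_PER_DECADE ** ((year - ANCHOR_YEAR) / 10)

    for year in (2021, 2011, 2004):
        tb = expected_capacity_tb(year)
        label = f"~{tb * 1e6:,.0f} MB" if tb < 1e-3 else f"~{tb * 1e3:,.0f} GB"
        print(year, label)   # 2021: ~6,000 GB; 2011: ~6 GB; 2004: ~48 MB

The 2004 figure lands in the same ballpark as the ~60 MB estimate above, which is exactly why the 60+ GB actually flown looks so generous.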

The real question is not: "Why is there so little storage?" but rather: "How did they get so much on board?" Perhaps they were allowed to cheat and update the storage later in the design process to newer but still mature technology. The Critical Design Review, which I imagine set a lot of decisions in stone, occurred in 2010. If they got to harvest 6-7 years of advances, that could explain a 100x improvement over expected capacities.

Lawnmower Man
  • 1,393
  • 8
  • 13
  • 2
    Which just goes to show that it would have been nice to design in a "plug & play" system that could easily accommodate larger memory units. So it goes. (see also 32-bit vs. 64-bit operating systems, and so on) – Carl Witthoft Feb 01 '21 at 18:33
  • 5
    @CarlWitthoft my understanding is that VPX is the mil/space version of PCI, and provides a comparable feature set: https://en.wikipedia.org/wiki/VPX. The problem is not assigning IRQs or base addresses, but rather the testing and validation required to ensure that whatever part they put on board will actually deliver the promised performance. I'm sure whatever they validated could more or less be plugged into the bus without fanfare. – Lawnmower Man Feb 01 '21 at 18:36
  • Good points. I'm painfully aware from past work projects of the effort required to qualify parts. – Carl Witthoft Feb 01 '21 at 18:37
  • 2
    I don't expect SSD to continue scaling 1000x per 10 years by the way. It seems that SSD technology was significantly behind chip manufacturing abilities and then it caught up. By now, it's finished catching up. – user253751 Feb 02 '21 at 11:04
  • 7
    I guess there's a limit to how much storage you'd actually want up there. If there are only 2 four hour contacts with earth per day, and it takes both of them to entirely empty 60GB, then having 120GB just means it'll take two whole days to empty instead of one. You'd sort of end up "constantly behind" because you were filling the buffer before emptying it entirely. So in some sense, the storage space is limited by the bandwidth back to earth. – Ralph Bolton Feb 02 '21 at 11:56
  • @user253751 mostly agree. The biggest right now is 100 TB rather than 1000, but appears to be limited more by demand than tech progress. So growth has clearly slowed, but perhaps because nobody can justify spending $40k on such a beefy piece of hardware. – Lawnmower Man Feb 02 '21 at 17:46
  • @LawnmowerMan true, you can make even bigger ones by adding more chips, depending on the form factor. – user253751 Feb 03 '21 at 17:46
33

Space hardware is almost always pretty archaic technology. The problem is the long lead time coupled with the need to certify it for flight. It wasn't archaic when it was engineered. The JWST was redesigned in 2005, so look at 2005 tech, not 2020 tech.

Loren Pechtel
  • 11,529
  • 26
  • 47
  • 14
    Look at 1995 technology instead of 2005 technology. Space avionics is at least a decade behind state of the art, and the lag is getting worse as die sizes get smaller. – David Hammen Jan 31 '21 at 08:48
  • 1
    @DavidHammen I figure the redesign probably updated the tech to 2005 standards. – Loren Pechtel Jan 31 '21 at 15:03
  • 2
    Avionics technology for use in space was ten years behind the time (behind state of the practice, let alone state of the art) back in 2005. It is even further behind the time now. – David Hammen Jan 31 '21 at 15:27
  • And I'm sure another advantage of using 2005 software in a 2021 craft is that they've had about 15 years to learn about any bugs that could occur, etc. The 5 TB SSD that company X released today hasn't been tested nearly as much. – BruceWayne Jan 31 '21 at 16:13
  • My two cents: the problem is that it takes years, 10 or 15, to design, fund, and launch any project like this, so obviously the tech will be outdated by the time it is deployed. On the other hand, it's not that problematic. Sending information to Earth happens on a regular basis, and even if it takes some time, as long as it doesn't interfere with the telescope's ability to make amazing discoveries, scientists can bear a little bit of lag. – Negarrak Jan 31 '21 at 17:32
  • 25
    I think this answer is implicitly assuming that you could just grab a modern CPU/Flash/RAM chip and send it to orbit, if you could cut on the certification time. That is not so; electronics without radiation hardening will not live long outside of our magnetosphere, and they will be rather untrustworthy while they're still functional. Moreover, the higher the density of an IC, the more susceptible it is - this is one of the reasons for radiation-hardened electronics using much larger process nodes like 32nm (LEON) or 150nm (RAD750). – thkala Jan 31 '21 at 19:07
  • Additionally, electronics in space is exposed to considerable radiation, which limits the ability to test possibilities as exhaustively as can be done on Earth. – jwdietrich Jan 31 '21 at 19:13
  • I guess that means that my plan to play the latest AAA video games on the trip to Mars is Not going to work. – ikrase Feb 01 '21 at 02:23
  • 2
    @thkala: Solid state storage nowadays is NAND flash, and that's pretty high-voltage electronics. The problem with RAM and CPUs is that they're low-voltage, and therefore even a low-energy event can cause a disruption. Flash would likely survive the transient, and its default Error Correction Codes will catch some pretty big errors. You still wouldn't want TLC, as that manages to pack 3 bits per cell by lowering the voltage differences, but the same cell in 2-bit SLC mode would likely work well in space. – MSalters Feb 01 '21 at 12:21
  • 2
    @MSalters If you mean 2 bits per cell, that's what is commonly called MLC ("multi-level"; totally some lack of foresight there, so that "multi" now always means two, compared to tri-level and quad-level cells). SLC is one (1) bit per cell. – TooTea Feb 01 '21 at 13:24
7

The other answers are good. I'll add that, usually due to misplaced budget constraints, project plans don't tend to allow for enough iteration. A project plan is like a battle plan -- it rarely survives contact with reality. Plan to throw away the first several (cheaper, partial) versions, and you'll have the flexibility to update technology based on new information.

A great example of a project plan that had iteration built in was the Mercury/Gemini/Apollo series -- that effort would have failed horribly if we had planned to put boots on the lunar surface on the first launch.

Shuttle went the other direction -- with Enterprise the only prototype orbiter funded, we were never able to evolve that system into one that was truly safe, reliable, or cheap.

In recent years, we've finally started to see newer launch vehicles and satellite constellations that do follow more of an iterative development path -- the contrasts in schedule, cost, and capabilities are striking.

If iteration isn't built in, then early decisions get exhaustively analyzed and errors made on the side of caution, because everyone making those decisions knows there's going to be little ability to test hardware until it's too late to change. The resulting limitations have to be accepted as constraints as the project proceeds over subsequent years. These internally-generated constraints in turn tend to lead to projects running overtime, over budget, or failing completely.

It's bizarre to think that, while science is all about change based on new information, "big science" projects rarely can have that feature built into their own plans and contracts -- partners and vendors tend to be contracted to deliver complete, finished goods once, fully specified beforehand and built correctly the first time. The universe just doesn't work that way.

stevegt
  • 381
  • 1
  • 8
  • 6
    "In recent years, we've finally started to see newer launch vehicles and satellite constellations that do follow more of an iterative development path -- the contrasts in schedule, cost, and capabilities are striking." If only NASA was following such development paths... – Ian Kemp Feb 01 '21 at 13:45
4

Looking back at old tech from today, it necessarily seems primitive. One quote is probably just a legend: Bill Gates allegedly said back in 1981 that "640K ought to be enough for anybody." And for people back then, that probably was reasonable.

Notice that in 1991,

SanDisk ships its first solid state storage drive or SSD. The SSD had a capacity of 20MB and sold for approximately $1,000.00. It was used by IBM in the ThinkPad pen computer.

The $1,000 won't be an issue here. But 20MB was the bomb. Some years ago, I bought the smallest pen drives on the market for giving away some docs, and they were much bigger than that.

In 1999, BiTMICRO made a number of introductions and announcements about flash-based SSDs, including an 18 GB 3.5-inch SSD.

For the people designing the satellite around the end of the 20th century, 60GB looked like a 'wow, I wish I had this at home' kind of volume.

Quora Feans
  • 375
  • 1
  • 9
  • The funny thing is that SSDs are mostly empty space. You could probably fit enough chips in a 3.5" SSD to have a petabyte of capacity, but it would be terribly expensive due to how many chips are inside – Matthew Hill May 14 '22 at 05:19
2

Because it is expected to be able to downlink its data between targets, it doesn't need vast amounts of space, and any data it does accumulate will take time to download anyway, so there isn't much point in building up a huge buffer.
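
A toy calculation illustrates the point (made-up round numbers, not actual JWST rates): if data collection outpaces what the daily contacts can drain, a bigger recorder only postpones the moment observations have to pause; it never removes the bandwidth bottleneck.

    # Toy model of recorder fill vs. downlink drain, with made-up rates
    # (not actual JWST numbers). A larger buffer only postpones the pause.
    def days_until_full(buffer_gb, collect_gb_per_day, downlink_gb_per_day):
        """Days of observing before the recorder is full and science must stop."""
        stored, day = 0.0, 0
        while stored < buffer_gb:
            if collect_gb_per_day <= downlink_gb_per_day:
                return None  # the downlink keeps up; the buffer never fills
            stored += collect_gb_per_day - downlink_gb_per_day
            day += 1
        return day

    # Suppose collection exceeds the two daily contacts by 10 GB per day:
    for size_gb in (60, 120, 600):
        print(size_gb, "GB recorder fills after", days_until_full(size_gb, 70, 60), "days")
    # 60 -> 6 days, 120 -> 12 days, 600 -> 60 days

In other words, the useful recorder size is set by the downlink schedule, not by how much flash you could physically fit on board.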

  • 2
    I'm pretty sure you have cause and effect backwards here – Carl Witthoft Feb 01 '21 at 18:34
  • 4
    @CarlWitthoft could you please clarify your comment? The amount of data that can be stored over a period of time is the same amount that can be downloaded over that time... So if you can download 1GB a day, there is negative value in storing more than that (as storage will eventually overflow)... And note that on Earth you can just pay your ISP twice next month to get faster download speed - a bigger/more powerful transmitter is not going to fly itself to space... Hardware (either disks or communication), unlike data, has weight, and the cost per kg is not free even now (it was roughly 20x more expensive back then) – Alexei Levenkov Feb 01 '21 at 21:05
  • @CarlWitthoft not quite. When you're mass and volume restricted you go with the smallest mass and volume you can get away with. If that results in a 60GB storage drive, and you can make that work within your other requirements, that's what you go for. – jwenting Feb 02 '21 at 16:13