25

The discussion on "How cold is the Martian sky at night? Or the day for that matter?" made me wonder: how much more expensive are scientific instruments on spacecraft or landers, compared to their ground-based counterparts? Especially at the bottom of the range.

I got my pocket IR thermometer for \$5. Then Tom Spilker comments "Often it's not hard to get an instrument on a mission if the total cost will be, say, 3 megabucks." Does the price get inflated that much on a regular basis? I'd expect a factor of between 1,000x and 10,000x. But this is a factor of a million. And there's a whole bunch of really simplistic sensors on typical landers and probes: temperature sensors (normal, not IR), accelerometers, Hall sensors, microphones, etc. - stuff that normally costs well under \$1 when bought from a wholesaler in bulk.

Sure, integration is expensive. Testing is expensive. Delivery to the destination is expensive as heck. But a factor of a million? What does that look like?

Say I'm on the design team for Curiosity, and I want the simplest, cheapest infrared sensor that will work with an accuracy of 1 kelvin (100x better than our current estimates) attached to the robotic arm. It needs two contacts for power (to switch it on/off) and two to an ADC that will pass the current measurement to the telemetry. Point the arm at whatever you want to measure (using the camera it already has), expend a couple of milliamperes on the 3-second measurement, pass two bytes of data upstream. The meter or so of wire would be the most expensive part, weighing maybe 3 grams - delivery of which to Mars costs more than its weight in gold. Still, I have trouble picturing going over \$50k on that. What would the cost breakdown look like in reality?
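For concreteness, here is a minimal sketch of that measurement sequence in Python. It only illustrates the interface described above; the stub functions (power_on, power_off, adc_read_counts) are hypothetical stand-ins for whatever the flight software actually provides.

```python
import struct
import time

# Hypothetical stand-ins for the flight software's hardware-access layer.
def power_on(device: str) -> None: ...     # two contacts for power: on...
def power_off(device: str) -> None: ...    # ...and off
def adc_read_counts(device: str) -> int:   # two contacts to the ADC
    return 0                               # stub: raw 16-bit ADC counts

def measure_ir_temperature() -> bytes:
    """Power the sensor, take the ~3-second reading, return two bytes."""
    power_on("ir_sensor")
    time.sleep(3.0)                        # a couple of milliamperes for 3 s
    counts = adc_read_counts("ir_sensor")
    power_off("ir_sensor")
    # The raw 16-bit ADC value is the entire telemetry payload.
    return struct.pack(">H", counts & 0xFFFF)

frame = measure_ir_temperature()           # two bytes passed upstream
```

The point of the sketch is how little the instrument itself does; as the answers below explain, almost none of the cost lives in this part.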

kim holder
SF.
  • 14
    Your pocket thermometer may not be as precise, reliable, able to survive vibrations due to a rocket launch, ... as the one on a spacecraft. You may also compare it to a professional one used in a chemistry lab. – Manu H Jun 03 '18 at 22:14
  • 3
    @ManuH: One 100x as expensive? That still leaves 4 orders of magnitude. – SF. Jun 04 '18 at 08:12
  • 1
    Keep in mind that those instruments are usually one-offs, not mass-produced, which is the biggest factor driving costs, since you cannot amortize the (much higher) development costs. – PlasmaHH Jun 04 '18 at 10:15
  • @PlasmaHH That should be added to existing answers or posted as an answer on its own. The tests described in the answers may be more expensive than those for Earth use, but not a lot of orders of magnitude more expensive. What prevents those costs from being amortized is the small production run. – Pere Jun 04 '18 at 13:19
  • 4
    Notice the principle of diminishing returns. It's easier to improve a tool from working 50% of the time to working 75% of the time, than it is to improve a tool from working 75% of the time to working near 100% of the time. Striving for perfection is not cost effective. Consumer goods are only improved up to the point of being as cost effective as possible. Since there's no shop nearby on Mars, the impossibility of getting a replacement part dramatically shifts the priorities in terms of cost efficiency versus part reliability. – Flater Jun 05 '18 at 08:39
  • @Flater: This is why I wonder why, especially in the case of very small and light instruments, we don't go for massive redundancy. Verify that the device works in Martian conditions 10% of the time, then send an array of 30 of them. – SF. Jun 05 '18 at 08:48
  • 1
    @SF.: (1) If an object is liable to break when it is shaken roughly, all of these objects are liable to break at the same time when the shaking happens. The only thing that redundancy solves is unnoticed production flaws. If production flaws were common or even just expected/accepted, there'd be a whole lot more concerns about the success rate of missions. (2) "Verify that the device works in Martian conditions 10% of the time" - that testing is massively expensive, because it's impossible to go to Mars to test it. You can only have theoretical test scenarios until you actually go to Mars. – Flater Jun 05 '18 at 08:52
  • @SF.: For completeness' sake, redundancy can also fix issues that happen to an individual object, e.g. if you have 30 guns, they're not going to all jam at the same time (this happens on a per-gun-basis - redundancy makes sense to ensure having an unjammed gun). However, most space-related issues are environmental in nature, which would apply to all redundant objects at the same time (e.g. the guns not working underwater). – Flater Jun 05 '18 at 09:11
  • 1
    "stuff that normally costs well under $1 when bought from wholesaler in bulk" - If you have a multi-million dollar rocket capable of launching stuff into orbit (or beyond), it probably doesn't make economic sense to just stick a bunch of el-cheapo $1 sensors on it and hope for the best. The cost of a failed mission vastly exceeds the cost of getting decent sensors that are actually rated for and tested against the kind of conditions they'll need to survive. – aroth Jun 05 '18 at 14:23
  • 1
    astronomically more expensive. -rimshot- – Mr.Mindor Jun 05 '18 at 14:27
  • @aroth: Yeah, but in my opinion it makes sense to go with the $40 'industrial/hardened' version instead and test whether it can survive the conditions some of the time (these are often built with outstanding reserve: rated for -40C, still working at -160C), then triplicate it. I'm fully okay with the extreme rigor for components whose failure means the end of the mission. But I believe it's better to include a tiny instrument that has, say, a 30% chance of failure, spending $20k, than to skip the equivalent with <1% chance of failure because you don't have the $5mln it costs. – SF. Jun 06 '18 at 10:22
  • Also, I found a bunch of sensors are rated for a specific range because they have non-linear characteristics which deviate from linear only a little within the rated range. So instead of using a simple formula, result = readout*multiplier + offset, you must go with a lookup table prepared through manual calibration to get the correct result (with accuracy much better than the simple formula, too!) - see the sketch after these comments. E.g. a thermometer chip that worked fine at -80C but was off by ~20C at those temperatures, and still worked when the solder melted, off by another (but always the same) 40C - so it was only rated '+4C~+60C'. – SF. Jun 06 '18 at 10:29
  • Nobody's mentioned that semiconductor technologies need to be designed to account for, or shielded from, the large amount of ionising radiation out in space, compared to on Earth. – OrangeDog Jun 06 '18 at 12:26
  • @OrangeDog: That's if you use digital integrated circuits. For analog, as most sensors are, the radiation will introduce a specific bias/error which needs to be accounted for, but otherwise most will just work. And the rad-hardened digital circuitry will be there, in the budget and in the probe, whether you add the instrument or not. – SF. Jun 06 '18 at 12:39
  • @SF. all you did was repeat what I said - it needs accounting for – OrangeDog Jun 06 '18 at 12:53
  • @OrangeDog: You wrote "designed to account for". Analog circuitry doesn't need to be designed/redesigned to account for; it will work fine as-is - all the 'accounting for' will be done back on Earth while processing the collected data. – SF. Jun 06 '18 at 12:58
  • @SF. not if it's processed in the probe – OrangeDog Jun 06 '18 at 13:02
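Regarding @SF.'s calibration comment above, here is a minimal sketch of the lookup-table approach, with an entirely made-up calibration table for a hypothetical sensor:

```python
# Hypothetical calibration table: raw ADC counts -> temperature in °C,
# measured point by point in a thermal chamber. Values are made up.
CAL_TABLE = [(1200, -80.0), (2100, -40.0), (3300, 0.0), (4600, 25.0), (5800, 60.0)]

def counts_to_celsius(counts: int) -> float:
    """Linearly interpolate between the two nearest calibration points."""
    pts = sorted(CAL_TABLE)
    if counts <= pts[0][0]:
        return pts[0][1]          # clamp below the calibrated range
    for (c0, t0), (c1, t1) in zip(pts, pts[1:]):
        if counts <= c1:
            return t0 + (t1 - t0) * (counts - c0) / (c1 - c0)
    return pts[-1][1]             # clamp above the calibrated range

print(counts_to_celsius(3950))    # 12.5 °C with this made-up table
```

Note that outside the calibrated range the table clamps rather than extrapolates - one reason manufacturers only rate a part over the span where it was actually characterized.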

6 Answers

48

Preface: I am far, far from an expert in space electronics; I don't think I can weigh in on how much these sensors actually cost, which is the title question. All I can offer is an uneducated, inane ramble on where that money might be going.

Let's take your analogy of a bargain-basement thermal sensor; actually, let's specify a common off-the-shelf Melexis part.

Firstly, you'll note that the datasheet says the sensor is characterized between -40 and +125 C. The trip to Mars will involve massive thermal cycles over many months... will the sensor work outside that temperature range? If so, how far outside? What is the probability that the lens will shatter below -40 C? How accurate will it be?

Answering these questions will probably take a few months in a thermal cycling chamber and a suitably-qualified engineer or scientist, neither of which are cheap (especially for a government contractor). This test alone might cost $10-50k. One could neglect it, but you will almost certainly have a non-functional sensor upon arrival at the red planet.

Say you come up with the bright idea of keeping the temperature of the sensor actively controlled, like most spacecraft systems are.

Now you don't just have 4 wires going to your sensor: you have thermistors and heaters, and you need a temperature control system that is sufficiently hardened that a software lockup isn't going to take your poor little bolometer (and all the other sensors on that appendage) up to 600 K.
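A minimal sketch of such a control loop, assuming hypothetical read_thermistor and set_heater helpers; a real implementation would be hardened flight code with the cutoff enforced in hardware, not Python:

```python
# Hypothetical bang-bang heater control with a hard over-temperature cutoff.
# The cutoff must fire even if the control logic above it misbehaves.
SETPOINT_K = 263.0    # keep the sensor near -10 C (made-up value)
DEADBAND_K = 2.0
CUTOFF_K   = 320.0    # never let the optics approach damaging temperatures

def read_thermistor() -> float: return 263.0   # stub: temperature in kelvin
def set_heater(on: bool) -> None: ...          # stub: heater power switch

def control_step() -> None:
    t = read_thermistor()
    if t >= CUTOFF_K:          # independent safety limit, checked first
        set_heater(False)
        return
    if t < SETPOINT_K - DEADBAND_K:
        set_heater(True)
    elif t > SETPOINT_K + DEADBAND_K:
        set_heater(False)
```

On real spacecraft the cutoff duty is often handled by a separate thermostat or watchdog, precisely so that a software lockup can't hold the heater on.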

For this you will need embedded software engineers, thermal FEA people to design your radiators, the involvement of industry to build you those fancy custom heaters (which now brings you into the perils of contracting overhead), etc. All of these experienced people will easily cost you >$300k a year each; remember that an employee generally costs somewhere around twice their salary.

Now consider running these tests on:

  • Outgassing; those other scientists with other sensors might get a little annoyed if they find some of your IR detector deposited on their spectrometers and reflectors and whatnot.
  • Radiation tolerance; how will the sensor degrade under the constant bombardment of charged particles and cosmic rays in the harsh intervening void?
  • and dozens and dozens of other tedious and expensive parameters.

That's where the orders of magnitude come from. Essentially, the cost of the sensor will come from the humans required to stare at the sensor intently for a while.

On the other hand, there is certainly a burgeoning market for cheaper, simpler spacecraft with lessened requirements for all of the above. For cheap, short-distance missions like CubeSats, where a replacement mission wouldn't be excessively objectionable, people do indeed use COTS sensors; one project even used a complete standard smartphone to run their satellite.

However, for longer-distance trips, where simpler, less-characterized solutions are highly likely to fail, and perhaps where the weight of the funding bureaucracy's requirements for success is greater, the designers are going to play it safe and spend a little extra money.

Rory Alsop
0xDBFB7
29

I'll chime in with the other two well-stated answers. In addition to all the testing, there is the issue of "What do you do when the instrument fails a test?"

Most COTS (Commercial Off-The-Shelf) instruments you might get at Home Depot or even Omega Engineering are designed to work in an Earth environment, with some margin. But not too much margin; that makes the instrument more expensive than the competitors', and that loses business. Note that the Melexis instrument @Giskard42 mentioned has a range of -40 to +125 C. You can get lower temperatures here on Earth's surface. Mars gets a lot colder than that at night!

The Melexis engineers, who would certainly be consulted early in the process, would immediately say that to handle Mars temperatures without adding heaters some of the components would have to be replaced with more resilient—and more expensive—parts. But the cost of those parts pales in comparison to the cost of the redesign necessary to incorporate the parts. Rarely does the more resilient part behave exactly like the original, or fit where the original did, so even if redesign winds up being unnecessary, the operating characteristics have to be reanalyzed and retested. Adding heaters would also be a redesign.

Thermal qualification is only one part of space qualification, a rather lengthy process NASA requires for hardware intended for use in all but the smallest of NASA space flight missions. But often it's not the hardest part for COTS hardware.

@Giskard42 already mentioned radiation tolerance. For interplanetary missions that is often the hardest part for COTS hardware. Modern microcircuitry (such as ADCs), with exquisitely small feature sizes, is sensitive to radiation effects from sources like primary cosmic rays and solar radiation, especially solar protons. A single hit can cause single-event upsets, bit-flips, and even the dreaded latchups. Flight-qualified hardware needs to demonstrate (via test) a certain level of tolerance to radiation, sometimes requiring redundant sub-assemblies or components, which you won't find in an off-the-shelf instrument. Unmodified COTS parts or components often fail the radiation tests and that usually means redesign, and that's expensive.
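As a toy illustration of what "redundant sub-assemblies or components" can mean at the smallest scale, here is a bitwise majority vote, the idea behind triple modular redundancy; this is a generic technique, not a description of any particular flight design:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote across three redundant copies of a value.

    A single-event upset that flips bits in one copy is outvoted by
    the other two; simultaneous hits on two copies still get through.
    """
    return (a & b) | (b & c) | (a & c)

# Example: copy 'b' suffered a bit-flip; the vote still recovers 0x2A.
print(hex(tmr_vote(0x2A, 0x6A, 0x2A)))   # 0x2a
```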

All these processes can quickly turn a $5 instrument into a $50k instrument, or even a $500k instrument if nobody else wants a space-qualified version of this widget.

But buying the space-qualified instrument isn't the end of the money story. You also have to pay the spacecraft engineers who have to do instrument accommodation. Is your instrument the only one requiring +9 VDC instead of the spacecraft-standard 28 VDC? Then you pay for an engineer to design a 9 V power subassembly into the spacecraft's power system, and to design and run that part of the cable harness. You'll also be paying for a thermal engineer to verify that the thermal design is adequate, even before it goes onto the shake table (as @PearsonArtPhoto mentioned) and into the thermal-vac chamber. Will your instrument generate any signals that interfere with other spacecraft systems? An engineer trained in EMI will examine this. There is an instrument team that you pay for, and a spacecraft team the project pays for, shepherding this process all the way through. For an inexpensive piece of hardware this is the most expensive part.

In my experience with Voyager, Cassini, Genesis, and Rosetta, and a lot of mission concept studies and proposals, I've seen a few instruments for interplanetary missions come in at single-digit millions of dollars, but not many. Most are tens of millions of dollars, and really complex ones can add another zero to that. I'd love to know what the Kepler instrument cost, but a PI usually holds cost breakdown figures for competed missions very close to the chest.

One final note. In the 1990s, under Dan Goldin as NASA Administrator, NASA tried the approach of flying missions on the cheap, to get more missions flown. But the series of embarrassing failures that resulted (such as Mars Polar Lander and the DS-2 instrumented impactors it carried) put an end to that approach, and Goldin resigned soon after. NASA is rather intolerant of failures, especially on highly visible (to the public and to Congress), big-buck missions, and is willing to spend a lot of money to prevent them.

Tom Spilker
  • 2
    What is a PI? ("but a PI usually holds cost breakdown figures for competed missions very close to the chest.") – ANeves Jun 04 '18 at 18:41
  • 3
    @ANeves Oops, sorry, some jargon crept in! A "PI" is a Principal Investigator. For one of NASA's competed mission programs, like Discovery or New Frontiers, they're the scientist who bears responsibility for the success of the mission project they lead, reporting directly to NASA headquarters. On a "directed" mission, where NASA assumes that role directly, a PI is a scientist responsible for one of the instruments NASA selects for flight on the mission. In this case, the PI reports to the Project Scientist, though they often speak directly with NASA HQ people anyway. – Tom Spilker Jun 04 '18 at 19:18
  • 4
    The upside of all this serious engineering is that the instruments and spacecraft generally far exceed their planned lifetimes. The Opportunity rover was expected to last 3 months; it's still delivering after 14 years as of June 2018. – Joe McMahon Jun 05 '18 at 19:19
  • 1
    @JoeMcMahon Good point! When designed for a 95% probability of surviving until the planned end of mission, the lifetime to 50% probability of survival is far longer - see the quick calculation after these comments. – Tom Spilker Jun 05 '18 at 19:39
  • 3
    @ANeves+ for completeness, the model of having a Principal Investigator as the tech/science lead and manager/budgeter for a funded research project is now used across pretty much all of science (in US at least) not just space. – dave_thompson_085 Jun 05 '18 at 22:49
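To put rough numbers on @TomSpilker's survival-probability comment: a back-of-the-envelope calculation assuming a constant-hazard (exponential) failure model, which real hardware only loosely follows:

```python
import math

# If survival follows R(t) = exp(-t / tau) and the design requires
# R(t_eom) = 0.95 at the planned end of mission, then:
t_eom = 1.0                             # planned mission length (normalized)
tau = -t_eom / math.log(0.95)           # implied mean time to failure, ~19.5
t_half = tau * math.log(2)              # time until survival drops to 50%

print(round(tau, 1), round(t_half, 1))  # 19.5 13.5
```

Under this toy model, a design with 95% survival odds at its nominal end of mission has even odds of still working after roughly 13.5 times its design lifetime - in the same spirit as Opportunity's 3 planned months stretching past 14 years.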
10

Just to give you an idea, here is how the cost might break down.

The COTS version will need to be torn apart and rebuilt with added material to survive vibration testing. It will need some kind of software written to send its data to the spacecraft. Testing will need to be done to see if the instrument will work on Mars; the thermal issue will really be of some importance. Some heat might need to be applied, or thermal blankets, or something like that. Also, let's not forget the plastic will need to go, as it is a potential source of outgassing.

Bottom line: such an instrument could probably be made to work fairly well. But you would need at least 1 man-year of engineering to get the instrument onto the spacecraft. At a standard rate of maybe $300K per man-year, that's where most of your money will be.

And what would you really gain? We actually have thermal maps of Mars (http://tes.asu.edu/monitoringmars/index.html), accurate to within a few degrees, and the temperature isn't likely to vary that much from point to point. But in theory it could be done.

PearsonArtPhoto
9

A "one to a million" price factor is not too crazy if you think about it : your pocket thermometer was probably manufactured in over a million copies. The space thermometer is made only once (or five times, tops). Hence the cost ratio.

Add to that the crazy quality control and the exponential paperwork it implies, and you'll get the picture.
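The arithmetic behind that, as a minimal sketch with made-up development and unit costs:

```python
# Made-up numbers, purely to show how amortization drives the ratio.
development_cost = 2_000_000   # design, test, qualification paperwork ($)
unit_cost        = 2           # marginal cost to build one more unit ($)

mass_market_units = 1_000_000
space_units       = 1

per_unit_mass_market = unit_cost + development_cost / mass_market_units
per_unit_space       = unit_cost + development_cost / space_units

print(per_unit_mass_market)    # $4 per pocket thermometer
print(per_unit_space)          # $2,000,002 for the one-off space version
```

Even with identical engineering effort, spreading the fixed costs over one unit instead of a million accounts for most of the orders of magnitude by itself.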

jeancallisti
7

One more factor that nobody has touched on so far: here on Earth we aren't too concerned with power use on most instruments. Power is cheap, and it's rarely worthwhile to do much to lower an instrument's power consumption.

On a spacecraft, though, power comes either from solar cells (and you have to pay to lift them, plus the batteries for when they're shielded from the sun) or from nuclear batteries (and the Pu-238 for them is in critically short supply, not to mention the weight of lifting it).

You also have to be concerned with what becomes of that power. Here it's usually shed into the atmosphere when you're done with it. Most spacecraft aren't operating in an atmosphere, so getting rid of waste heat is a far bigger issue.
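For a feel for the numbers, here is a minimal radiator-sizing sketch using the Stefan-Boltzmann law; the parameters are made up, and a real design would also have to account for absorbed sunlight and view factors:

```python
# Rough radiator sizing: P = eps * sigma * A * T^4, solved for area A.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
eps   = 0.85          # emissivity of a typical radiator coating (assumed)
T     = 290.0         # radiator temperature, K (assumed)
P     = 20.0          # waste heat to reject, W (assumed)

area = P / (eps * SIGMA * T**4)
print(round(area, 3), "m^2")   # ~0.059 m^2 just to shed 20 W
```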

Loren Pechtel
  • 1
    Very good point! One engineer goes through Hades to acquire power for people to use for their instruments or subsystems, and then once it's used, another engineer goes through Hades to get rid of it! – Tom Spilker Jun 04 '18 at 19:24
3

I just wanted to point out that Mars Pathfinder was powered by an 8085 CPU, which was originally released in 1976 and was used because it was a mass-produced product.

There are many advantages in using mass produced technology. You gain the statistical evidence of reliability from the many customers who use it.

Engineers can more easily "space-proof" an existing technology by adding/removing/updating components than engineer a complete alternative as a one-off.

Reactgular
  • The minimum feature size of the Intel 8085 is 3 microns [https://en.wikipedia.org/wiki/Intel_8085], very large compared to current processes that are running sub-micron. Way sub-micron! Last year Intel announced implementing 10-nanometer feature sizes [https://spectrum.ieee.org/nanoclast/semiconductors/processors/intel-now-packs-100-million-transistors-in-each-square-millimeter]. Smaller feature size is how Melexis can pack the detector and the signal processor in the TO-39 can. Unfortunately smaller feature sizes yield significantly more radiation sensitivity. – Tom Spilker Jun 06 '18 at 19:38
  • @TomSpilker true, but you can just package it inside a protective box, can't you? – Reactgular Jun 06 '18 at 19:42
  • The key is testing in the appropriate environment: thermal, radiation, whatever. If a COTS piece can be made to work without a huge engineering "space-proofing" effort, then there is no reason not to do it. But that is usually not the case. – Tom Spilker Jun 06 '18 at 19:42
  • @TomSpilker It seems very unlikely to me that it was a COTS product, and not one of the rad hard ones made by Sandia under licence. – richardb Jun 06 '18 at 19:47
  • @TomSpilker right, but whether they build a custom component or buy an existing component, the costs of testing would likely be the same. You can also test a collection of components and see which one fares the best. – Reactgular Jun 06 '18 at 19:47
  • @cgTag It depends on the device being shielded, the radiation environment and duration of exposure. An IR sensor can't be shielded all around—it has to see out! Solar protons are relatively easy to shield against, but primary cosmic rays produce bremsstrahlung [https://en.wikipedia.org/wiki/Bremsstrahlung] in the shielding, exacerbating the radiation environment. Particles in Jupiter's radiation belts produce a lot of bremsstrahlung too. Designing shielding can be a difficult engineering task in itself (but probably not for Mars missions). Shielding can also create thermal problems. – Tom Spilker Jun 06 '18 at 19:55
  • @richardb Which "it" are you referring to? – Tom Spilker Jun 06 '18 at 19:56
  • 1
    @TomSpilker The 8085 processor was a redesign by Sandia of the Intel part. (http://www.sandia.gov/media/rhp.htm) – richardb Jun 06 '18 at 20:22
  • @richardb Aha! A redesign by Sandia likely means there was DoD interest in it, so more rad-hard. Thanks much for this input, it is very helpful. – Tom Spilker Jun 06 '18 at 22:29
  • @cgTag: Adding on to what TomSpilker said, a significant source of radiation-induced errors in some chips is the radioactive decay of atoms within the chip itself, a situation which isn't going to be improved by putting a radiation shield around the chip (in fact, a shield might well make things worse in that regard, by deflecting high-energy particles generated within the chip back at the chip, when they would otherwise have escaped without harming anything). – Vikki Jun 09 '18 at 21:19
  • I gave a late downvote, so I should explain. The lander used a radiation-hardened RAD6000, while the rover used a radiation-hardened 80C85. Neither was COTS in the sense that one can order one off the shelf from Dell. Both were COTS in the sense that hundreds, perhaps even thousands have been made. But even thousands does not quite provide the mass production price reductions that make computers so cheap nowadays. For that, the numbers need to be in the hundreds of millions. – David Hammen Jan 27 '21 at 12:23