39

In software, the general process for developing anything is code, test, fix, repeat. This is easy and cheap, because running a program typically costs a negligible amount of money.

In rocket science, it's different. Each launch can cost tens or hundreds of millions of dollars, and scale models often have different flow and performance properties, diminishing their usefulness.

Given these constraints, what is the process by which rocket scientists do iterative development?

TheEnvironmentalist
  • 3,055
  • 1
  • 19
  • 28
  • 13
    The first step to start a disastrous software development project is to believe that development is easy and cheap. In rocket science ground tests are used before launch. – Uwe Apr 05 '18 at 07:47
  • 21
    Well, there is this. – E.P. Apr 05 '18 at 09:14
  • 11
    +1 apparently iterative selection of the optimal SE site works too! – uhoh Apr 05 '18 at 09:56
  • 1
    If at first you don't succeed, try, try, try, try, try, try (etc) again... https://www.youtube.com/watch?v=13qeX98tAS8 – Richard Apr 05 '18 at 12:47
  • 2
    @Richard I thought you were going to link the one E.P linked. Instead, I'll also note that they also use a "grown up version"* of https://kerbalspaceprogram.com/ - which provides plenty of opportunities for iterative development too. *by which I mean real engineering simulation software, not games – Baldrickk Apr 05 '18 at 13:11
  • 1
    Unless a software run takes months and costs millions in electricity... – PlasmaHH Apr 05 '18 at 13:15
  • 6
    "Build one to blow up. You're going to anyways..." :-) – Bob Jarvis - Слава Україні Apr 05 '18 at 13:50
  • 1
    If a software run on a supercomputer takes months, it also costs much money for hardware usage. – Uwe Apr 05 '18 at 13:51
  • 3
    Software for rockets is very expensive when done properly. According to https://www.fastcompany.com/28121/they-write-right-stuff the lead of the team flew out before each Space Shuttle launch to sign in blood that the software would not harm the crew. This confidence could only be achieved by working very carefully - this made each line of code very expensive! – Thorbjørn Ravn Andersen Apr 05 '18 at 14:46
  • 10
    We didn't land on the moon until Apollo 11. Just saying. – corsiKa Apr 05 '18 at 18:33
  • 1
    @corsiKa: Look at the number of times the US blew up rockets before the first successful satellite was launched. The Soviets blew up quite a number, too, including the R-16 and a couple of Saturn V-equivalent N1s. And SpaceX has blown up at least one... – jamesqf Apr 05 '18 at 18:53
  • Now that SpaceX can land rockets, they have the chance to take them apart and learn from used engines. – Harabeck Apr 05 '18 at 21:53
  • @Harabeck: But how many did SpaceX crash before they landed one successfully? – jamesqf Apr 06 '18 at 00:55
  • 5
    Elon Musk famously said "As long as each one blows up for a different reason, we're making progress." – A. I. Breveleri Apr 06 '18 at 02:46
  • 4
    As a software developer, I dislike your image of software development. Rocket launch is equivalent to live release of software. And we don't just grab whatever a programmer hacked together and dump it into production. That would be crazy. Instead, we have testing environments that subject the software to increasingly complex (automated) tests, and we repeat the deployment process into more and more "production-like" environments before the code is deployed for real. When you look at it like this, there is not much difference between software and rocket development. – Euphoric Apr 06 '18 at 08:25
  • ... with their finger on the abort button. The Right Stuff: Failed Launches – Mazura Apr 07 '18 at 16:01
    @Euphoric Well, IMO your view of general SW development is too optimistic. Not every piece of software has passed a degree of testing comparable with the HW+SW combination used in space exploration (or even automotive, for that matter). You don't sign an EULA for your car's firmware when you purchase it. Cars are not sold with a "this hardware is sold AS IS" all-encompassing disclaimer on the dashboard! I hope for spaceships it's the same. :-) – LorenzoDonati4Ukraine-OnStrike Apr 07 '18 at 19:44

5 Answers

40

Software development has to be iterative, because it's difficult to impossible to mathematically prove that a given piece of software will work as it's intended.
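
To make that concrete with a toy sketch (all names here are illustrative, nothing from a real flight system): even a trivial function is normally validated by running it against cases, in the code-test-fix loop the question describes, rather than by formal proof.

```python
# Iterative "code, test, fix" in miniature: we rarely prove this
# function correct; we run it against cases and fix what fails.

def clamp(value, lo, hi):
    """Constrain value to the interval [lo, hi]."""
    return max(lo, min(value, hi))

# Exercise the edge cases a formal proof would cover exhaustively.
cases = [
    ((5, 0, 10), 5),    # inside the interval
    ((-3, 0, 10), 0),   # below the lower bound
    ((42, 0, 10), 10),  # above the upper bound
    ((0, 0, 10), 0),    # exactly on a bound
]
for args, expected in cases:
    assert clamp(*args) == expected
print("all cases pass")
```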

For physical objects, that's not the case. With modern CAD software you can design parts and be pretty sure the design will work as intended. So a lot of testing can be done before the first part is manufactured.

That said, lots of physical tests are still done in rocket development, especially on the more complex parts.

  • engines (or engine parts) are tested on engine test stands
  • structural parts are subjected to structural tests
  • electronics are subjected to environmental testing (temperature and voltage), electromagnetic compatibility and vibration tests
  • entire stages can be tested on large test stands

Iterative development has been done on rockets, especially in the early years.

  • The V-2 underwent hundreds of test launches, with many ending in fireballs.
  • The N-1 rocket was tested iteratively. Its first stage was so large that building a test stand for it would have delayed the program to the point where the space race would surely be lost, so the Soviets decided to run a series of test launches instead, flying boilerplate articles for the upper stages on the first few flights and adding flight hardware as the tests progressed. Fourteen tests were planned; only four were carried out, with the last launch almost making it through a full burn of the first stage.
  • The F-1 engine for the Saturn V first stage had issues with combustion instability. Because the cause of the instability was unknown, lots of test firings were done with different configurations, trying to eliminate the instability.
Hobbes
  • 127,529
  • 4
  • 396
  • 565
  • 7
    "many ending in fireballs" - ah, what we in IT call a media fault. – Whelkaholism Apr 05 '18 at 11:26
  • Of course, engine test stands were used for the V-2 too; see https://en.wikipedia.org/wiki/Test_Stand_VII. – Uwe Apr 05 '18 at 13:51
  • 1
    "First test of Mittelwerk-built rocket detonated three seconds after ignition without liftoff: "We just blew a million marks in order to guess what could have been reported accurately by an instrument probably worth the price of a small motorcycle." (Hartmut Kütchen, Engineer in Charge of Test Stand VII)[5]" Yeah I can call that iterative development – jean Apr 05 '18 at 13:55
  • 1
    If they can simulate a nuclear explosion, they probably can simulate a rocket too. – Thorbjørn Ravn Andersen Apr 05 '18 at 14:47
  • 5
    @ThorbjørnRavnAndersen A rocket may actually be more difficult to simulate, depending on how detailed the simulation is, including the actual inner workings would actually be very challenging. – wedstrom Apr 05 '18 at 16:30
  • 16
    @ThorbjørnRavnAndersen In theory there's no difference between theory and practice. In practice, there is. – corsiKa Apr 05 '18 at 18:34
  • Note that (per Wikipedia: https://en.wikipedia.org/wiki/N1_(rocket)#Launch_history ) none of the four N-1s actually flew successfully. – jamesqf Apr 05 '18 at 19:02
  • 3
    "Because it's difficult to impossible to mathematically prove that a given piece of software will work as it's intended" -- I disagree; it's far easier to prove correctness of software than of hardware operating in the real world. It's simply easier still to iterate. – Russell Borogove Apr 06 '18 at 00:35
  • @Russell Borogove: I disagree. It is difficult to impossible to mathematically prove correctness of software, except in some special cases, because it is impossible to come up with a mathematical definition of "correct". – jamesqf Apr 06 '18 at 00:58
  • 2
    Semantics of "proof" and "correct" aside, surely whether it is easy or difficult to prove correctness of software depends entirely on the application domain and complexity; verifying that a simple mathematical function behaves correctly is not difficult, but proving that some sort of image recognition algorithm or a geo-distributed data storage solution is correct might be impossible. Software takes lots of forms. – Electric Head Apr 06 '18 at 11:24
  • If we have Kerbal Space Program I guess someone somewhere must have a fancier simulator... – xDaizu Apr 06 '18 at 11:33
  • "it's difficult to impossible to mathematically prove that a given piece of software will work as it's intended." Maybe hard, but definitely not impossible. See: https://github.com/seL4/l4v, https://github.com/leanprover/lean – Restioson Apr 06 '18 at 15:11
  • 1
    @ElectricHead The same holds for mechanical components. It is easy to verify that for example the mass of an object is within given bounds, but proving (!) that a component does not break within an environment as complex as a rocket launch is equally hard. – koalo Apr 06 '18 at 17:49
    I'm not so convinced that the behaviour of physical objects is deterministic. Turbulent fluid dynamics (very relevant to rocket engines) are still quite difficult to simulate. At the same time, I disagree with the premise of this answer that "software development has to be iterative" I much prefer @BertHaddad's answer referencing offline testing. – craq Apr 06 '18 at 20:32
  • @koalo Yes, absolutely. I didn't intend to imply by omission that the same doesn't hold for systems in the physical domain - worth pointing out explicitly. – Electric Head Apr 07 '18 at 08:30
  • @craq Hmmm, [not] deterministic possibly isn't the word I would use but yeah - a statement that software is simpler than physical or vice versa is just not really meaningful when they are both part of systems. How iterative a design method is or not (all design methods are basically iterative for anything non-trivial...) is nothing to do with anything intrinsic about software vs physical, and more about system complexity. – Electric Head Apr 07 '18 at 08:40
16

Firstly, "rocket science" is a bit of a misleading term. Rockets are a type of propulsion system performing a function within larger systems such as launch vehicles, spacecraft, missile systems, etc. Engineering these complex systems is a holistic, multidisciplinary process. There's an entire discipline dedicated to this type of engineering originating from the 1930s: Systems Engineering.

Systems Engineering and modern software development practices are similar, despite their somewhat independent evolution. Design process models familiar to the IT and software engineering world such as Waterfall and Vee have played their roles, and approaches like Concurrent Engineering have more than a bit in common with Agile software development practices and are intensively iterative. See ESA's description of Concurrent Engineering as an example.

The specific nature of iteration over a system's life cycle is going to depend on the project, who's doing it, who's paying for it and what the life cycle stage is, but the "design, test, fix" paradigm isn't really much different (though, as implied, the scale might be).

Mathematical models, simulation and design patterns are important in keeping costs down, particularly in early design phases. Components and subsystems can be developed and verified against specification, with integration being tested through ground tests as covered in other answers. In some regards it could be argued the physical nature of components makes it easier to quantify and verify against specification than the abstract nature of software [systems] but -- as pointed out -- there are additional difficulties and costs associated with engineering in the physical domain.

Software engineering is a comparatively young discipline and there is a significant degree of convergence occurring as it matures (and software systems get more complex), but engineering is engineering.

Suggestions for further reading:

  • The Systems Engineering Body of Knowledge (SEBoK) and the Software Engineering Body of Knowledge (SWEBoK)
  • Thomas J. Kelly, Moon Lander: How We Developed the Apollo Lunar Module
  • Peter W. G. Morris, The Management of Projects

Electric Head
  • 261
  • 1
  • 3
  • 3
    I'd recommend Kelly's Moon Lander instead of the SEBoK. That has Tom Kelly describing how he ran the Grumman program to develop the Apollo LEM at the dawn of modern SE, and is a great read. For more depth, I would go to Morris' The Management of Projects, which covers PM more than SE, but starts from before there was a distinction. – fectin Apr 06 '18 at 03:30
  • 2
    Thank you; suggestions gratefully received! I took the liberty of adding them. I mentioned SEBoK and SWEBoK because they are resources that naturally illustrate the similarity/convergence, not because they are particularly engrossing reads... :) – Electric Head Apr 06 '18 at 09:25
12

Most of the components of a rocket can be tested individually on the ground; here's an example of such a test for the RS-25 engine for the SLS. So the majority of the iterative development is done using tests like this.

Obviously there are some things you only find out when you test the complete stack through a launch, but extensive ground testing keeps those to a reasonable minimum.

motosubatsu
  • 754
  • 5
  • 12
  • The engines for a second stage may be tested on the ground, but it is difficult to test a powerful rocket engine in a vacuum test chamber. There was a problem of breaking fuel lines in the second stage of the Saturn V, found on Apollo 6. The problem did not show up in ground tests with the engine working in the atmosphere. – Uwe Apr 09 '18 at 07:32
7

The short answer is that the iteration happens mostly via analysis (the industry terms of art are DAC and VAC, for Design Analysis Cycle and Verification Analysis Cycle), supported by small-scale development testing where necessary (think unit tests), culminating in a series of qualification tests (i.e., integration tests) near the end of a development project.
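
As a software analogy for that split (purely illustrative component models, not real engineering data), development tests check a part in isolation while qualification tests check the assembled system:

```python
# Illustrative analogy only: small-scale development tests ("unit tests")
# verify components in isolation; qualification tests ("integration
# tests") verify the assembled system near the end of development.

def turbopump_pressure(rpm):
    """Toy component model: pump output pressure rises with speed."""
    return 0.002 * rpm

def engine_thrust(rpm):
    """Toy system model: thrust from chamber pressure fed by the pump."""
    return 50.0 * turbopump_pressure(rpm)

# Development test: the component alone, against its spec.
assert abs(turbopump_pressure(10_000) - 20.0) < 1e-6

# Qualification test: the integrated system, against the top-level requirement.
assert engine_thrust(10_000) > 900.0
print("component and integrated tests pass")
```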

Tristan
  • 17,263
  • 1
  • 63
  • 83
5

The avionics/software on a rocket (one of the bits most prone to failing) can, in fact, be tested iteratively like software. SpaceX has all the wiring and computers of their Falcon 9 laid out in their factory; they then give these components phony inputs and see how the code responds. These types of simulations are very useful for testing new innovations (like reusable rockets).
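
A minimal sketch of that "phony inputs" idea, with a hypothetical flight rule and made-up names (this is not SpaceX's actual code): the decision logic runs unmodified while its sensor inputs are simulated.

```python
# Hardware-in-the-loop in miniature: exercise the flight logic with
# simulated telemetry instead of real sensors. The rule and numbers
# below are invented purely for illustration.

def should_abort(altitude_m, vertical_speed_mps):
    """Toy flight rule: abort if descending fast at low altitude."""
    return altitude_m < 500 and vertical_speed_mps < -50

# Feed the logic phony sensor readings and check its decisions.
simulated_telemetry = [
    (10_000, -10, False),  # high altitude, gentle descent: fine
    (400, -80, True),      # low and falling fast: abort
    (400, -10, False),     # low but slow: fine
]
for altitude, speed, expected in simulated_telemetry:
    assert should_abort(altitude, speed) == expected
print("flight logic behaves as expected on simulated inputs")
```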

Many, if not most, problems with rockets and spacecraft have been related to software. A Mars probe was lost because one team used imperial units where another expected metric, and an Ariane rocket blew up because a velocity-related value grew so large that it overflowed a 16-bit integer, crashing the guidance system (the same kind of integer-wraparound bug that gave us Nuclear Gandhi in Civilization).
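
The overflow failure mode can be demonstrated in a few lines. Python integers don't wrap, so this sketch emulates a 16-bit signed register explicitly; the real Ariane code was Ada, where the out-of-range conversion raised an exception rather than silently wrapping, but the arithmetic limit is the same.

```python
# Emulate a signed 16-bit register (range -32768..32767). The values
# here are illustrative; the point is only the wraparound past 32767.

def to_int16(x):
    """Truncate x to a signed 16-bit value, as a fixed-width register would."""
    x = int(x) & 0xFFFF                 # keep the low 16 bits
    return x - 0x10000 if x >= 0x8000 else x

velocity = 32768.0                      # one past the int16 maximum of 32767
print(to_int16(velocity))               # prints -32768: a large positive value reads as negative
```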

I know the question was mostly about the physical stuff, which is usually tested on test stands, as Hobbes said in his answer. Still, just remember that modern rockets are just as much software as hardware.

Bert Haddad
  • 367
  • 1
  • 8
  • Interestingly, it's worth pointing out that these software problems can in turn be significantly attributable to communication issues between teams and inadequate processes. – Electric Head Apr 06 '18 at 12:05
  • 1
    Some relevant keywords: software-in-loop and hardware-in-loop testing. Of course there are also the extensive Continuous Integration suites which are now a critical part of software development in all fields. – craq Apr 06 '18 at 20:34