
If one looks at the languages that were considered as a basis for Ada, as listed in the "Report to the High Order Language Working Group" (see e.g. http://bernd-oppolzer.de/DoD-Language-Evaluation-1977.pdf), Lisp is conspicuously absent, while even COBOL gets a fair evaluation.

Can anybody account for this?

I have come across references to Lisp later being used for US-government-related work; see e.g. "Red Team versus the Agents" at https://gist.github.com/fogus/4716440 (spoiler: the result was embarrassing).

Mark Morgan Lloyd
  • Nobody in industry or government, considering programming avionics or weapons or sensors or other instruments, would have considered LISP suitable for purpose at all. Not at all. Perhaps there were some applications for it in non-production systems, but not running a fighter jet or aircraft carrier or air-traffic control system. Nope. Nothing to be learned there at all. And I'm quite serious. Literally nothing. – davidbak Feb 05 '23 at 21:33
  • Irrespective of my feelings about the languages concerned: would anybody have considered COBOL in those roles? But even COBOL was considered at least at the Tinman level, while Lisp wasn't even dignified with a list of omissions or of obviously problematic features. – Mark Morgan Lloyd Feb 05 '23 at 21:41
  • COBOL is more powerful than you think. It was probably evaluated at such a high level for its features, namely what the new language needed (since Ada was supposed to be a do-everything language). PL/1 had to be on the list, too. – RonJohn Feb 05 '23 at 21:51
  • @MarkMorganLloyd Because COBOL does fit several basic needs embedded applications have, foremost among them compact code size. Not exactly something Lisp excels at (remember to include the runtime). – Raffzahn Feb 05 '23 at 21:51
  • There was no way then - and nearly no practical way now - to use Lisp for hard real-time embedded systems. Or soft real-time embedded systems. GC. There has been work over time on GC that satisfies at least soft real-time systems, but such GC algorithms, though they offer latency guarantees that can help you meet real-time requirements, are not highly performant. Early versions required read barriers that raised costs for all reads of memory, or required fixed-size list cells to avoid fragmentation, etc. In the Ada-development timeframe: none were available. – davidbak Feb 05 '23 at 22:51 (see the read-barrier sketch after this comment thread)
  • Seconding what @davidbak wrote: perhaps the most widely known instance of the "GC in real-time systems" problem (hard or soft) is the stuttering prevalent in games written in Unity, a game engine using C#. – jaskij Feb 06 '23 at 12:49
  • @jaskij: However, the CoreCLR garbage collector is not very real-time friendly, nor does it claim or try to be. Real-time GCs with provable maximum pause times do exist, they are just not deployed in the mainstream .NET and Java VMs. – Jörg W Mittag Feb 06 '23 at 19:33
  • @JörgWMittag: Another design approach is to partition the system into real-time and non-real-time portions, and have a fallback strategy in cases where e.g. the non-real-time portion is expected to put data into a buffer while the real-time portion pulls data out, but the buffer ends up running dry. – supercat Feb 06 '23 at 21:28
  • @supercat That's an interesting approach, but were there any systems in the mid-70s that realistically offered hardware-enforced partitioning? VM/370 was - to be generous - unproven, and I think it significantly predated e.g. 68k virtualisation. – Mark Morgan Lloyd Feb 07 '23 at 08:08
  • @MarkMorganLloyd: What do you mean by hardware enforced? Use interrupts to operate the part of the system that has hard real-time requirements, and design it in such a way that it can deal safely if the main-line code gets waylaid. I've done work on an automatic guided vehicle system, designed in the 1980s, that used that principle. The command buffer should always have at least two commands in it; if it doesn't, the vehicle will slow to a stop, and then resume operation once the buffer is full again. No hardware memory or task partitioning, since the system ran on an ordinary Z80. – supercat Feb 07 '23 at 14:40 (see the buffer-and-fallback sketch after this comment thread)
  • @MarkMorganLloyd: Of course, that system had the advantage that when it was unable to continue behaving in useful fashion (transporting materials around a facility) there was also a tolerably useless behavior it could fall back upon (ramping speed down to zero, and then resuming useful operation when able), while many other real-world systems would be required to continuously perform "complicated" operations without any fallback available. While GC would not be suitable for the latter kind of systems, I would see nothing wrong with using it for the former kind [the AGV systems didn't... – supercat Feb 07 '23 at 16:15
  • ...use GC-managed memory, and buffer droughts would normally happen as a result of communications hiccups, but even if the high-level control system were waylaid for a second because of garbage collection, the interrupt-driven parts would continue to operate the system safely, and the only consequence would be that the vehicle would arrive at its destination a few seconds late.] – supercat Feb 07 '23 at 16:19
  • Possibly Knights of NIH (Not Invented Here). COBOL and Ada are products of the US DoD. LISP came from MIT. Jokes aside - LISP was possibly classified as an AI language at that time (it was when I was in Uni) - not for mainstream stuff. – cup Feb 08 '23 at 05:46
  • The answer to the new subject line is that no one should want a language with “lots of insipid stupid parentheses”. – RonJohn Feb 08 '23 at 18:31
  • hmm, "insipid stupid" or "irritating silly" ... lots of possibilities, all correctly descriptive! (And I like LISP!) – davidbak Feb 08 '23 at 19:05
  • @davidbak I know I'm being a pain with this question, but for the record I /don't/ like Lisp. It's just that I'm surprised that reasons for its being ignored were not given somewhere, bearing in mind that it had significant traction at least in academia. – Mark Morgan Lloyd Feb 09 '23 at 09:09
  • @RonJohn I'm afraid that I've rolled the subject edit back, since "candidate basis" makes no sense. – Mark Morgan Lloyd Feb 09 '23 at 09:11
  • @MarkMorganLloyd - I don't think you're being a "pain" with this question. No need to think that at all just because you got a lot of pushback from it. It's just ... back in the day ... well, LISP was just an outlier. Just like ... well, yesterday I was having lunch at a local cafe and there was a guy there working on his tablet/keyboard setup and he looked perfectly fine except he was wearing Crocs - of two different bright colors. Yellow on one foot, red on the other. Nothing wrong with that. He was an outlier. Kind of like LISP. – davidbak Feb 09 '23 at 16:13
  • Perhaps more to the point, when Multics got started at MIT - and there's a lot of Q&A on this site about Multics - LISP was right there too. But it wasn't considered for programming any part of the system. (Though Multics hosted a very good LISP implementation.) It just didn't fit. You might have thought that at least some of the commands would be written in it, though not the actual OS. But no. Near as I can tell from their documentation and Project MAC memos and TRs .. never even considered. And they felt no need to explain why ... – davidbak Feb 09 '23 at 16:17
  • What's it going to take to answer this question to the OP's satisfaction? To me, it's self-evident that a dynamic language with no syntax to speak of (and I note parenthetically that I have used SNOBOL) doesn't fulfil the specified requirements for a general-purpose language for embedded systems: which answer has already been given at length. – dave Feb 09 '23 at 23:59
  • "note parenthetically" in a discussion of LISP - ha! I get it! – davidbak Feb 10 '23 at 00:11
  • @another-dave "what will it take..." we'll know it when we see it :-) I'll probably credit Raffzahn's answer since it appears to be viewed favourably, but what I was hoping was that somebody could find some reference to Lisp at an early stage of the evaluation process plus the reason it was eliminated. I note that it is /mentioned/ in some of the early documents because its ability to handle a variable number (i.e. a list) of parameters was considered useful, so at least that demonstrates that the people involved were aware of it: even if they didn't formally add it to the list of candidates. – Mark Morgan Lloyd Feb 10 '23 at 08:30
  • There was a joke that went around usenet in those days that someone hacked the central source code for the Strategic Defense Initiative (SDI), that it was in LISP, and to prove it they published the last page of the code. It was a page of nothing but )))))).... – Alan Feb 24 '23 at 08:00
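
The read-barrier cost mentioned a few comments above is easiest to see in code. Below is a minimal sketch in C, loosely in the style of Baker's incremental copying collector; the names (Obj, in_fromspace, evacuate, read_barrier) are invented for illustration and are not taken from any particular Lisp runtime.

    /* Baker-style read barrier: every heap pointer load must first check
     * whether the object still lives in the old semispace (fromspace) and,
     * if so, copy it to the new semispace before use. This per-read check
     * is the across-the-board overhead referred to above. */
    typedef struct Obj Obj;

    extern int  in_fromspace(const Obj *p);   /* not yet evacuated?            */
    extern Obj *evacuate(Obj *p);             /* copy to tospace, leave a      */
                                              /* forwarding pointer, return    */
                                              /* the new address               */

    static inline Obj *read_barrier(Obj *p)
    {
        return (p != 0 && in_fromspace(p)) ? evacuate(p) : p;
    }

    /* Example: following the cdr of a cons cell goes through the barrier. */
    struct Obj { Obj *car; Obj *cdr; };

    Obj *safe_cdr(Obj *cell)
    {
        return read_barrier(cell->cdr);
    }

Whether such a scheme can meet a deadline depends on bounding the cost of evacuate(), which, per davidbak's comment, was not something any available Lisp offered in the Ada-development timeframe.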
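
supercat's AGV example is a concrete instance of the partition-with-fallback approach, so here is a minimal sketch of that structure in C, assuming a periodic timer interrupt and a small ring buffer; apart from the "at least two commands" rule taken from the comment above, all names and sizes are invented for illustration and are not from the actual 1980s system.

    #include <stdint.h>

    /* The interrupt-driven (hard real-time) part keeps the vehicle safe by
     * itself; the non-real-time planner only refills the command buffer and
     * may be delayed (GC pause, comms hiccup, ...) without harm. */

    #define BUF_SIZE     8    /* ring buffer of pending motion commands */
    #define MIN_COMMANDS 2    /* below this, ramp the vehicle to a stop */

    typedef struct { int16_t target_speed; } Command;

    static volatile Command buffer[BUF_SIZE];
    static volatile uint8_t head, tail;   /* free-running; BUF_SIZE divides 256 */
    static int16_t current_speed;

    static uint8_t pending(void) { return (uint8_t)(head - tail) % BUF_SIZE; }

    /* Periodic timer interrupt: the hard real-time path. */
    void motion_tick_isr(void)
    {
        if (pending() >= MIN_COMMANDS) {
            current_speed = buffer[tail % BUF_SIZE].target_speed;
            tail++;                          /* normal operation */
        } else if (current_speed > 0) {
            current_speed--;                 /* fallback: slow to a stop,
                                                resume once the buffer refills */
        }
        /* ... write current_speed to the motor controller ... */
    }

    /* Non-real-time side: called whenever the planner gets around to it. */
    void planner_push(Command c)
    {
        if (pending() < BUF_SIZE - 1) {
            buffer[head % BUF_SIZE].target_speed = c.target_speed;
            head++;
        }
    }

The design choice is that the only thing the slow side can do wrong is be late, and lateness degrades into the safe fallback rather than a missed deadline - which is why, as supercat notes, a GC pause on the planning side would merely make the vehicle a few seconds late.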

3 Answers


Lisp is conspicuously absent while even COBOL gets a fair evaluation.
Can anybody account for this?

I would say the very first paragraph of the introduction on page 8 sets a foundation that pretty much excludes Lisp on the spot:

First Paragraph of the Introduction

A quick search will reveal that the word 'Embedded' shows up in every chapter, 31 times in total - a strong hint that the embedded theme was the main target for Ada.

COBOL is, unlike Lisp, suitable for embedded applications, not least due to its ability to generate extremely compact code. Of course it will fail utterly when looking at other requirements (see below).

Lisp may be great to handle complex data relations in highly variable data sets, but I doubt it is first choice for embedded applications. Especially not back in the 1970s when even high-end control systems had at best a few dozen KiB of code space.

When looking at the 'general requirements' noted on page 12, it can easily be seen that Lisp fails in essentially all of the categories named.

List of Requirements: Simplicity, Reliability, Readability, Maintainability, Efficiency, Implementability, Machine Independence, Portability, Definition.

Looking at those criteria helps to understand why several other languages are also not included. FORTH, for example, a language often praised for its great performance in embedded designs, fares only barely better than Lisp, with a single check mark (for efficiency).

Looking at the list of languages evaluated suggests that a good number (including COBOL) may not have been included because they fit well, but rather because they were already in use in DoD-related applications.

I have come across references to Lisp later being used for US-government-related work

I'm not entirely convinced that such an anecdote makes a good argument. While it gives no indication of where it sits on the spectrum of intended applications, I'm pretty sure it's not about an embedded one :)

Raffzahn
  • You can point also to the Steelman requirements which guided the final language selection (in the contest), especially requirement 1A - the very first requirement: "Generality. The language shall provide generality only to the extent necessary to satisfy the needs of embedded computer applications. Such applications involve real time control, self diagnostics, input-output to nonstandard peripheral devices, parallel processing, numeric computation, and file processing." LISP just doesn't fit there. – davidbak Feb 05 '23 at 23:00
  • P.S. Note that that "Language Evaluation" report was co-authored by Peter Wegner. He certainly knew about LISP and would have included it in the report if there was anything to consider! – davidbak Feb 05 '23 at 23:05
  • I'd suggest that all of this is patently obvious in hindsight to anybody who knows anything at all about computer systems - embedded or otherwise. However it was not obvious in the mid-'70s when the evaluation was being done, and in view of the political clout that Lisp backers at MIT and Stanford had I'd have expected a paragraph stating that it was not suitable /because/ it relied on garbage collection etc. – Mark Morgan Lloyd Feb 07 '23 at 08:04
  • @raffzahn But in the 1970s "embedded" included IBM mainframe-derivatives in e.g. the Apollo spacecraft, DEC systems used for process control, and Whirlwind (and others) in national defense. Not to mention the growth of airline booking systems, air traffic control and so on. The field wasn't limited to e.g. computers in fighter aircraft, and technology was still a long way short of embedding a computer in e.g. an antiaircraft missile or an engine control system. – Mark Morgan Lloyd Feb 07 '23 at 08:14
  • @MarkMorganLloyd Whirlwind was already obsolete and out of service at the time. Also, citing highest-end systems isn't really making a point for a language that should be usable on all systems, the majority of them way below anything mentioned. Even an extreme high-end system like the Apollo had only 2 KiWords of RAM (and 36 KiWords of ROM), so not exactly a LISP environment. Computers in missiles and engines were already a thing when the above WG was active. And airline booking is neither embedded nor military use. Last but not least, citing GC as the reason would not only be cheap but also misleading. – Raffzahn Feb 07 '23 at 11:09
  • @MarkMorganLloyd - sorry, disagree: it was totally obvious in the mid-70s to anyone working in embedded systems or anyone familiar with the language landscape at all. We weren't clueless back then. In fact, it was even more obvious back then in an era of extremely limited resources. These days you could kid yourself that it was possible and then attempt an evaluation; back then: no way. Ada itself, as it finally turned out, was already quite a stretch, requiring, as it did, dynamically sized records with full constraint checking, records that could contain tasks, etc. etc. etc – davidbak Feb 07 '23 at 17:12
  • COBOL generated compact code on systems like the VAX and IBM mainframes because the CISC processors were designed with many opcodes that matched perfectly with COBOL and FORTRAN language statements. – RonJohn Feb 08 '23 at 06:50
  • @RonJohn That's a bold statement. Besides, a VAX and a /370 have about as much in common as a /370 and a Z80. The fact that a CPU has less capable instructions only means that it needs more of them to do the same. This depends only on the CPU, not the language used. A COBOL compiler will still produce compact code on any CPU. – Raffzahn Feb 08 '23 at 07:01
  • @Raffzahn: Instead of saying a machine would need more instructions to accomplish a task, I think it would be better to say that more tasks would require using library routines. Replacing a call to a move-memory function with LDIR may make code slightly more compact, but not much. Maybe a byte per call site (though if code never needed to copy more than 256 bytes, the need to load both halves of BC when using LDIR would offset that), plus a few bytes for the routine itself. Hardware instructions may improve performance if implemented well, but as exemplified by LDIR, ... – supercat Feb 08 '23 at 15:19
  • ...it's possible for hardware instructions to be implemented in ways that would make them slower than an unrolled loop to accomplish the same task. – supercat Feb 08 '23 at 15:20
  • @supercat Nope. The point made is simply that an 8-bit CPU will need 6 instructions to add a 16-bit number vs. two for a /370. But it will need them no matter what language, so language is not a valid argument. Neither will packing them into some runtime change that. Functions vs. inline only trades speed vs. space, but will do so on each and every platform. – Raffzahn Feb 08 '23 at 15:31
  • @Raffzahn: COBOL supports base-10 arithmetic at the language level. If a processor has an instruction to add two N-digit decimal numbers, a COBOL compiler given ADD LINEPRICE TO ORDERTOTAL would likely generate code to do that directly, while a compiler on a more modern platform would likely invoke a library function to do that. If the fields are eight digits each, a COBOL compiler for the 8088 might possibly generate a loop or a 32-instruction sequence that exploits the processor's AAA instruction, but a compiler for the ARM could do the task with a call to a library function. – supercat Feb 08 '23 at 16:06 (see the decimal-add sketch after this comment thread)
  • @supercat It doesn't change the principle, no matter what code you look at. A given task will require X instructions on machine A and Y on machine B. Packaging them in a function will not change it in any way. The issue you are working around is machine related, not language related. A machine with native support for a certain data type will always excel at that data type, independent of the language used. – Raffzahn Feb 08 '23 at 17:36
  • @Raffzahn: By what means could a machine with native support for a decimal data type excel at processing such data, if the languages with which it is programmed only support binary data types? – supercat Feb 08 '23 at 17:40
  • @supercat It doesn't help in any way if you twist arguments out of scope. – Raffzahn Feb 08 '23 at 18:00
  • @Raffzahn: To clarify my qualm with your original statement: you said that the lack of specialized instructions for a task (e.g. adding two unpacked decimal numbers) would merely make it necessary for a program to use "more" instructions, suggesting that the machine code to accomplish such a task on the ARM would be much larger than on a machine with specialized instructions. Adding two 8-digit decimal numbers might take one instruction on some machines but 56 on the ARM, yet a practical COBOL implementation for the ARM would probably use a helper library routine to reduce code bloat. – supercat Feb 08 '23 at 18:00
  • @supercat Please stop twisting my words by interpreting them out of context. The point is that the number of instructions and their performance depends on the CPU, not the language; therefore it cannot be credited for or against a language. Putting them into some runtime function or not will not change this. Doing so is a way of optimization, trading speed for size - an operation independent of language. Likewise, the usage of a certain data type is application specific, not language or CPU related. Thus it can as well not be used to make a point. – Raffzahn Feb 08 '23 at 18:10
  • My comment only equates VAX and S/360-370 insofar as they are both CISC. (x86 is also CISC but that doesn’t mean it has the same opcodes as a VAX.) – RonJohn Feb 08 '23 at 18:26
  • @RonJohn Which is in itself a problem, as they are not only totally different classes (mainframe vs. mini) but also completely different architectures. Calling them in retrospect CISC is not helpful either. – Raffzahn Feb 08 '23 at 18:30
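
To make the inline-vs-library point above concrete, here is a minimal sketch of the kind of runtime helper a COBOL compiler for a binary-only CPU might call for an ADD of two eight-digit fields; the representation (one decimal digit per byte, least significant digit first) and the name decimal_add are invented for illustration, not taken from any real compiler runtime.

    #include <stdint.h>

    #define DIGITS 8   /* e.g. two PIC 9(8) fields; purely illustrative */

    /* total += addend, digit by digit; returns nonzero if the result
     * overflowed the field. A machine with decimal hardware can do this in
     * one or two instructions; a binary-only CPU either inlines a loop like
     * this at every call site or, more likely, calls one shared helper,
     * trading a little speed for code size. */
    uint8_t decimal_add(uint8_t total[DIGITS], const uint8_t addend[DIGITS])
    {
        uint8_t carry = 0;
        for (int i = 0; i < DIGITS; i++) {          /* least significant first */
            uint8_t d = (uint8_t)(total[i] + addend[i] + carry);
            carry = (uint8_t)(d >= 10);
            total[i] = carry ? (uint8_t)(d - 10) : d;
        }
        return carry;
    }

Either way, the per-digit work is dictated by the CPU's data-type support rather than by the source language, and the choice between inlining and a shared helper only trades speed for size - which is the point both sides of this thread keep circling.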

(Here's something nobody has mentioned:)

The DoD's goal wasn't to get the best possible language for programming their embedded systems.

It was to get the best possible common language for programming their embedded systems, and for two reasons:

  1. So that they didn't have to let a contract to some compiler house to write yet another compiler just because they spun up a new project.

    • e.g., some piece of avionics gets an upgrade and the 10-year-old code base needs to be ported to it. The upgrade includes a newer machine architecture (new ones were created every other week or so). So: old language - probably one of the dozen dialects of Jovial! - for new machine.
  2. So that the really limited critical resource - programmers! - could work on different projects without learning new languages, especially as these projects lasted years even before you considered life cycle maintenance (which, for these applications, was a long life cycle) - and programmers and entire contracting companies would come and go.

They wanted a language that was just like all their other languages, except good enough that no contractor was going to spend time, effort, and lobbyist influence trying to get a waiver away from it from their DoD program manager. And it had to be enough like all their other languages - all standard procedural (except for the various assembly languages, of course) - that the army of programmers doing military projects would still be able to do them.

(FYI: None of those guys doing the programming for these systems had gone to MIT and taken 6.001.)

davidbak
  • BTW for those of you who are not sure that procedural programming languages were just the right thing for those military projects - consider that all DoD projects used the waterfall method of project management, and though they recognized it had some minor issues, that was the way it was done. I myself wrote B specs and C specs and D specs for a major Ada compiler project that was cancelled, after several years of work by several dozen people, before the D specs were done. And of course, before a single line of code was written because: waterfall! – davidbak Feb 07 '23 at 23:26

Lisp has its niches. NASA uses Spike to schedule observations by space observatories (ISAS in Japan has also used it). Spike is written in Lisp. One of its defining requirements was that it had to be efficient enough to deal with the intricacies of Hubble scheduling faster than real time.

T-LogoQube was a (tiny) satellite that successfully used Logo for its (very simple) embedded avionics programming. Logo is Lisp with some "syntactic sugar".

However, Mark Johnston, designer of Spike, once told me that if he were writing it all over from scratch he would not use Lisp. The Sonoma State group behind T-LogoQube seems to be pivoting toward embedded Python for the future.

John Doty
  • Yes, Lisp has its niches, but even Spike doesn't run under the resource constraints of a modern embedded system, much less one from the '80s. – davidbak Feb 07 '23 at 17:09
  • P.S. - where do you get the idea that a Spike requirement was to be "scheduling faster than real time"? Interesting, I'd like to know more - their own page (which you link) seems to show that in production the "fastest" it is run is to produce plans nightly, with an 8-day horizon, given an already scheduled long-range plan. – davidbak Feb 07 '23 at 17:20
  • @davidbak I got it from a conversation with Mark. The whole reason that HST wound up being scheduled by an astronomer's "skunk works" project was that the contractor software couldn't finish the nightly run in 24 hours... – John Doty Feb 07 '23 at 17:24
  • Ah, so "faster than real time" doesn't mean as the telescope is slewing around from one target to another they're choosing the next optimal move - it means the thing has to run faster than the earth revolves .... seems like a reasonable goal! – davidbak Feb 07 '23 at 17:26
  • @JohnDoty That's pure gold, I love it. We should create a dictionary with such 'almost' definitions. – Raffzahn Feb 07 '23 at 18:15
  • 'sidereal-time response' – dave Feb 07 '23 at 18:26
  • @Raffzahn - The Computer Contradictionary, a little outdated now but still funny. Alas, no entry for 'real time'. – dave Feb 07 '23 at 18:30
  • @Raffzahn I think "real time" simply refers to the actual physical time of the real-world phenomenon. – John Doty Feb 07 '23 at 18:31
  • @JohnDoty Yes, I did understand that. And in some way understandable. It's just extremely funny when used in that context. Real time usually means reacting to sensor input right away to produce control output. Spike is simply not a control system, but a production planning system - an area where the term 'real time' is usually not seen :) – Raffzahn Feb 07 '23 at 19:15