
There seems to be some confusion about what a 'Dumb' terminal is, as visible in answers and comments to this question about portable dumb terminals. So:

  • What is a Dumb Terminal?

Points that would help to distinguish might be:

  • What features, or the lack thereof, make a terminal dumb or not?
  • What is a non-dumb/intelligent terminal?
  • Where does the term originate?
  • When was this term coined?
  • Examples of dumb/non-dumb terminals

Please note: This question is about terminals. Terminals are, by definition, devices that work only when connected to and operated by a host. Anything able to work on its own is not a terminal, but something different (usually a computer) acting as a terminal. Further, it's about the (historical) use of dumb or non-dumb in this context, not its application to other areas (like programs or computers) or a more general definition of what's dumb.

Raffzahn
  • Another wonderful canonical question that will eventually get mostly lost in the clutter but which I think is valuable (even if I disagree on the answer!) for the next generation to learn. – manassehkatz-Moving 2 Codidact Jan 11 '22 at 02:01
  • @manassehkatz-Moving2Codidact Well, I know, it will attract a lot of me-too, anecdotal and even more off-topic answers. Not everyone is into finding viable definitions and/or adding historic context. But let's see who takes on the challenge :)) – Raffzahn Jan 11 '22 at 03:30
  • There was never any such thing as a "dumb" terminal until somebody made a "smart" terminal. So, the question should be, what makes a "smart" terminal smart? Unfortunately, the answer will depend on the time and the place and the application that you are asking about. – Solomon Slow Jan 11 '22 at 15:43
  • @manassehkatz-Moving2Codidact is it worth having [tag:canonical] for posts like this? – Criggie Jan 11 '22 at 20:52
  • @Criggie Personally, I think so. Terminals (dumb, smart and in between) are a core part of computing history. (Well not as core as memory...) But the problem is that a Canonical tag really doesn't make things discoverable for the people who really need to know. It is a fundamental StackExchange design issue. One we're working on at Codidact (shameless plug) but there is no retrocomputing over there. – manassehkatz-Moving 2 Codidact Jan 11 '22 at 21:07
  • @manassehkatz-Moving2Codidact :)) The old issue of moving to new/different software. No one really wants to until there's some hefty pressure :)) (Besides, the new one really has to address shortfalls without opening new issues.) The same reason why some still run WinXP. – Raffzahn Jan 11 '22 at 21:26
  • @Raffzahn I get some flak at work for still running TWM as a window manager. But is that on topic here? – Criggie Jan 11 '22 at 21:49
  • @Criggie What's wrong with TWM? AFAIK it's still actively maintained. Way more than just the usual security blanket :)) And no, it's off topic because it's still actively maintained. – Raffzahn Jan 11 '22 at 21:55
  • What is a Dumb Terminal? That's easy. Any terminal at which I sit. ;) – End Anti-Semitic Hate Jan 12 '22 at 23:17
  • @Criggie No, [tag:canonical] is a "meta" tag. Rather than describing the content of the question itself, it provides some meta information about the status or milieu of the question. Such tags should not exist on any Stack Exchange site, as they run contrary to the purpose and use of our tagging system. – Cody Gray - on strike Jan 13 '22 at 05:40
  • After all these years, I finally twigged why somebody started a company called Wyse Technology. – Paul_Pedant Jan 13 '22 at 11:20
  • The term "Dumb" is very much open to interpretation. For me, a dumb terminal sends its keystrokes directly to the machine that it's connected to, whereas a smarter one might have optional line editing (the ability to build a line in the terminal and only send it when the user presses enter). This makes it easier on programs wanting to read a line of data, because they get sent it one line at a time. – Lennon McLean Jan 13 '22 at 14:56

6 Answers


Like any soft term, the label 'dumb' terminal is not only open to interpretation, but has also been used in different ways over time. Even more so since making others look bad (dumb) or one's own products look better (non-dumb) was always part of marketing spin.


TL;DR:

Dumb terminals are best described by the abilities they do not offer:

  • use of complex encodings to reduce output data,
  • enabling generation of rich content,
  • offloading low level operations, like editing.

Thus

  • Dumb Terminals are essentially Glass TTY. They only display what is sent without any processing. Only minimal commands are interpreted; the host has little to no ability to modify displayed content.

  • Smart Terminals offer a wide variety of control over the way content is displayed (fonts, colour, size) and allow the host to manipulate content already displayed at the terminal with subsequent output. This enables dynamic content and complex screen designs like (text-)windows. Top models provide local edit features, offloading many low-level functions from the host to the terminal. This greatly reduces CPU load while at the same time speeding up user feedback.

Of course all of this is a continuum with the (Glass-) TTY at the Dumb end and everything past that becoming more and more Smart, all the way to stations like the IBM 3270 or Hazeltine 1500.


In the beginning ...

... terminal and printing terminal were redundant terms; it wasn't until the late 1960s that CRT-based terminals became a thing at all. They were essentially 1:1 replacements for existing printing terminals, in the simplest case the classic Type 33 Teletype. Such terminals only supported a very basic set of control functions, barely modified from TTY usage - hence the name 'Glass TTY':

  • CR and/or LF to move the cursor to the next line
  • BS to erase the previous character
  • BEL to give some sound
  • FF to clear the screen/reposition the cursor to home

Some terminals did split up Home and Clear Screen into two functions - which already started the development of 'less-dumb' terminals. Additions like basic cursor functions (CTRL-H/J/K/L) followed soon. A real arms race started; each manufacturer with its own, usually incompatible(*1), extensions. A look at mid-1980s termcap with collections of hundreds of differing terminals sheds some light.
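
The whole glass-TTY repertoire fits in a handful of control bytes. Here is a minimal sketch (Python purely for illustration; the byte values are the standard ASCII control codes, while the cursor assignments of CTRL-H/J/K/L varied by manufacturer):

```python
import sys

# The 'glass TTY' repertoire: single control bytes, no escape sequences.
# Whether FF clears the screen or just ejects paper depends on the device.
BEL = b"\x07"  # ^G - ring the bell
BS  = b"\x08"  # ^H - back one position (doubled as 'cursor left' later)
LF  = b"\x0a"  # ^J - down one line (doubled as 'cursor down' later)
VT  = b"\x0b"  # ^K - 'cursor up' on some early terminals
FF  = b"\x0c"  # ^L - clear screen / home, or 'cursor right' on some models
CR  = b"\x0d"  # ^M - carriage return: back to column 0

out = sys.stdout.buffer
out.write(b"HELLO WORLX" + BS + b"D" + CR + LF)  # fix last char, next line
out.write(BEL)                                   # audible feedback
out.flush()
```

Everything beyond these bytes - cursor addressing, attributes, editing - is exactly what separated the later, 'smarter' terminals from a glass TTY.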

In the late 1960s and early 1970s 'smarter' terminals began to appear, eventually culminating in 1971 with block mode terminals like the IBM 3270 and independently operating stations like the Cogar 4 or Datapoint 2200. While the 3270 offered a great way to offload low-level interactivity (aka editing) without leaving the basic terminal paradigm, later developments stopped the latter from being terminals at all; they became stations with local programming and local data processing - essentially the point when the modern PC was born.

In some way this can be compared to the development of the web:

  • A dumb terminal is like displaying a text file in a browser.
  • A smart one uses HTML instead to format and beautify the output.
  • A block mode terminal (like 3270) is like having cgi-forms.
  • An RJE station (like a Datapoint 2200) is like Web 2.0 with all its local execution of JS, i.e. a computer.

Getting Smarter

The way terminals developed at DEC gives a compact example of the gradual 'smartification'.

(The same can be seen when looking at a timeline of many more models from various manufacturers. Narrowing this to a single manufacturer and simplifying the points made should show the overall development)

Their 1970 VT05 already improved over a basic dumb terminal with

  • cursor home
  • directional cursor movement
  • cursor positioning
  • erase to end of line
  • erase to end of screen
  • horizontal tab stops

These additions already allowed improving the output stream by not having to blank out every character at the end of a line or at the end of the output when redrawing a screen. This may sound primitive to today's eyes, but it resulted in much higher output speed - especially considering the slow line speeds of that time.

While the VT05 and its variants were still using ASCII control characters like CTRL-H/J/K/L to handle the extended functionality, it was the VT50/52 of 1974 that broke off from ASCII use and moved all control functions into ESCape sequences. This followed the same line as standardisation efforts of ECMA for smart terminals, but was only partly compatible.

[VT52 advertisement]

Function-wise, the VT50/52 added more codes to manipulate existing screen content, such as insert/delete line/char. Together with a hold mode that prevented scrolling, host-controlled updates could be made in many ways without redrawing the whole screen. The VT52 enjoyed widespread use, with several competing manufacturers adding compatible modes to their terminals.

By 1978 DEC introduced the VT100, this time fully compatible with ECMA-48 - or, as we now call them, ANSI sequences (*2). But the VT100 also added private sequences to improve content manipulation - like letting the host request the cursor position from the terminal. Quite handy when multiple applications share a screen.
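
The pattern behind ECMA-48 sequences - ESC, '[', parameters, final letter - is easy to sketch. The helper names below are invented for illustration; the byte sequences themselves are the standard ones:

```python
# ECMA-48 ('ANSI') sequences as the VT100 understood them.
ESC = "\x1b"
CSI = ESC + "["  # Control Sequence Introducer

def cup(row, col):
    """Cursor position (CUP): move to 1-based row/col."""
    return f"{CSI}{row};{col}H"

def sgr(*params):
    """Select graphic rendition (SGR): attributes like bold=1, underline=4."""
    return CSI + ";".join(str(p) for p in params) + "m"

# Device status report: asks the terminal to report its cursor
# position back to the host as ESC [ row ; col R.
DSR = CSI + "6n"

print(repr(cup(5, 10)))  # '\x1b[5;10H'
print(repr(sgr(1, 4)))   # '\x1b[1;4m'
```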

By 1983 the VT220 further extended abilities for a host to control content, including now being able to configure character sets on the fly. Options allowed the addition of graphic output. The VT220 had much impact(*3), and more than a million were sold.

In 1987 the VT320 marked essentially the last update DEC made to classic terminals (*4). It added even more character and graphics abilities and functions for local editing - much like IBM or Hazeltine a decade earlier. Perhaps most remarkably, a way to handle multiple sessions over a single line was added.

Each of these generations got 'smarter' than the one before, and as usual the advertising made a point of how much better each was than any competing 'dumb' model.


Enter 'Dumb' Again

As so often happens when a derogatory term is repeated often enough, people go ahead and redefine it in a positive way. That's what Lear-Siegler did in 1975 when marketing their new ADM-3 explicitly as a dumb terminal.

[ADM-3 advertisement]

With that move, the terms Dumb Terminal and Glass TTY finally became indistinguishable.

The ADM-3 went on to become quite successful, serving as a main replacement for printing terminals (like the Teletype Model 33) and an icon for terminals in general, not least because its low price made it pop up in many places alongside minicomputers and even more micros - albeit usually with the lowercase add-on :))

It wasn't until a year later that Lear-Siegler introduced the ADM-3A, with improvements like lowercase display by default, many display attributes, basic cursor movement and cursor positioning. These additions put the ADM-3A on par with the DEC VT05, while it stayed firmly on the simple side compared to a DEC VT52 with all its edit commands.

A good contrast of the same time might be the 1977 Hazeltine 1500 'smart terminal'. It featured not only quite comprehensive edit functions for dynamic content, but also a block mode much like the IBM 3270's, while still keeping compatibility with 'standard' terminals all the way back to Teletype-like devices. Applications could send out format definitions (think of them like an HTML form) and let all intermediate editing be done locally, with zero interaction needed from the host. All entered data is sent back at the end as one large transmission.

The same way the ADM-3(A) captured a large part of the dumb(er) end, the Hazeltine 1500 became extremely successful in the business world. Despite its high price, using a 1500 produced notable savings by allowing more terminals to be served without bigger computers - the same way IBM mainframes kept an advantage.


Bottom Line

  • Dumb Terminals are terminals without much control over content format, attributes or handling. Usually barely more than Glass-TTY.

  • Smart Terminals allow the host wide control over content display and direct manipulation of existing content.

  • Devices that can download programs and execute them are no longer terminals, but computers.


*1 - Incompatible in every way - not even the use of ESCape was considered settled, as the 1977(!) Hazeltine 1500 used '~' as the escape character for control sequences.

*2 - Much loved from DOS times they are :))

*3 - It's said the IBM-AT keyboard is shaped after the VT220 keyboard

*4 - The 1990 VT420 is barely more than a cost improved variant of the VT320.

Raffzahn
  • I wholeheartedly disagree with your definition of smart terminal. Block mode terminals (like but certainly not limited to the 3270) are smart terminals. Character-addressable terminals are still classified "dumb". (I programmed DOS/VSE on a 3270, and OpenVMS on a range of VTxxx terminals, and stand by my comment.) – RonJohn Jan 11 '22 at 02:05
  • @RonJohn Your comment puzzles me a bit. Didn't I use 3270-type block mode terminals as the ultimate definition of smart? Then again, even a 3270 is character-addressable. I worked for decades with them, regarding them as way above all character-based tinkering. Still, existing content can be character-addressed and modified, deleted or inserted with subsequent output. Sure, most form-handling software does not offer templates to do so, but the terminals can handle such requests quite well. I loved writing low-level handlers doing so. – Raffzahn Jan 11 '22 at 03:09
  • The VT420 added quite a few things: left/right margins, rectangular operations, DECRQSS (request status), checksum rectangular area,... – ninjalj Jan 11 '22 at 12:01
  • I think it's important to note that even dumb terminals must support control events such as CTRL-C to interrupt flow. So that (for example) printing of a large text file to the screen had to be interruptable. This is why remote terminals had to have out-of-band processing. – grovkin Jan 11 '22 at 22:36
  • @grovkin Why on earth? CTRL-C (like XON/XOFF) has never been processed on the terminal side. It's a key-press like any other, simply sent toward the host when done. Any interpretation has always been done by some (more or less) low-level handler at the host OS. – Raffzahn Jan 11 '22 at 23:05
  • @grovkin All that was necessary was for input and output to be full-duplex, so the terminal could send a character while processing output. After that, it was up to the OS. Actually, even in half-duplex you could use the 'break' capability of the terminal (go open-circuit) to get the attention of the computer-side connection, which is why some multi-access systems used break as an interrupt signal. – dave Jan 11 '22 at 23:09
  • @another-dave I think the information in your comment would be a useful part of the answer. Any time a principle is stated, noting important exceptions to that principle can be informative. – grovkin Jan 11 '22 at 23:13
  • sending escape codes in email was the height of fun. – Scott Seidman Jan 11 '22 at 23:17
  • @grovkin - Raffzahn can update his answer with info from comments, if he's so inclined. I'm reluctant to change someone else's words. – dave Jan 11 '22 at 23:17
  • @another-dave I hope he does. – grovkin Jan 11 '22 at 23:18
  • @grovkin Glad to do so, except I'm not really sure what principle (and exception) you're talking about here. If referring to HDX vs. FDX, then it's a line capability, not a terminal feature. Even less one about being dumb, as the 'dumbest' of all terminals is a keyboard connected to the TX line (DTE PoV) and, independent thereof, some output facility (like a CRT refresh logic) connected to the RX side. – Raffzahn Jan 11 '22 at 23:37
  • Control-C really has nothing to do with any of this - strictly a remotely supported (whether OS or application) feature. Control-S/Control-Q (XON/XOFF) is another story. That can be truly remote or can be in-band handshaking (i.e., handled by ports, modems, etc. rather than the remote CPU), though to a user the effect is often the same. – manassehkatz-Moving 2 Codidact Jan 11 '22 at 23:41
  • @manassehkatz-Moving2Codidact if it's emulated over a tcp connection (i.e. telnet), it requires an out-of-band to be set. otherwise, "cat large_file" would potentially fill up the buffer and it wouldn't stop printing for a while after CTRL-C was pressed. – grovkin Jan 12 '22 at 01:08
  • @grovkin Not sure exactly what you're saying, other than "delay". In-band vs. out-of-band here refers to handshaking specifically. In-band handshaking is XON/XOFF or similar - i.e., characters in the data stream that tell the other end to start/stop independent of the operating system. Out-of-band is DTR/CTS or other wires (i.e., on anything other than the primary transmit/receive wires) telling modems/ports/etc. to start/stop without affecting the data in any way. Control-C is, either way, simply a character being sent from the terminal and processed by the host - could be interpreted similar – manassehkatz-Moving 2 Codidact Jan 12 '22 at 01:38
  • to the old Break key, or it could be PageDown in WordStar or something else in another program. – manassehkatz-Moving 2 Codidact Jan 12 '22 at 01:39
  • @Raffzahn: Control-S and Control-Q would be transmitted as ordinary characters by some terminals, but some terminals--especially those which would be unable to scroll a sequence of short text lines as fast as they were received--would handle them locally. If someone typed control-S shortly after a terminal had sent its own XOFF to throttle a flow of short text lines, it would be annoying if the terminal sent an XON as soon as it had scrolled that text onto the screen. – supercat Jan 12 '22 at 17:25
  • @supercat Seems you're describing some kind of smart terminal, aren't you? So tell me, if it handles XON/XOFF locally, how does it prevent its memory from being overrun by host output? – Raffzahn Jan 12 '22 at 19:14
  • @Raffzahn: It sends an XOFF whenever its buffer has less than 16 bytes left, and an xon whenever its buffer has less than 16 bytes in it. – supercat Jan 12 '22 at 19:37
  • @supercat So what you're saying is it's still sent out and handled by the host - except invoked not only manually by a user but also by the terminal? Sounds smart :)) – Raffzahn Jan 12 '22 at 19:44
  • @Raffzahn: My point is that typing control-S and control-Q at the console do not necessarily cause control-S and control-Q to be sent, nor is their action limited to the transmission of such characters. The terminals that behaved this way were likely microprocessor-controlled, but I think that whether a terminal is "smart" or "dumb" depends upon more than whether a microprocessor was used to replace what would otherwise be complex control logic. – supercat Jan 12 '22 at 20:13
  • @Raffzahn: I don't know whether I remember enough details to justify an answer, but I remember using a block-mode terminal in the early 1980s which was called a "dumb" terminal [at least by comparison with other terminals I never saw], but supported substantial local editing and would transmit everything from the start of screen to an end-of-transmission marker when a "SEND" button was pushed. The screen editing functions were rather fancy, but were not configurable by the remote system; I suspect "smart" terminals probably had a more form-like interface. – supercat Jan 12 '22 at 20:16
  • @supercat You mean the terminal would suppress a user-typed CTRL-Q/S? Never ever even heard of this. It would cripple user interaction, wouldn't it? Do you have any reliable reference? And no, block mode is a completely different beast (well, mode). – Raffzahn Jan 12 '22 at 20:17
  • @Raffzahn: If the terminal had sent its own control-S, it would stifle a typed control-Q until its buffer had drained sufficiently to justify sending its own control-Q, and typing a control-S would immediately suspend rendering of received data [probably also sending a control-S even if the buffer was nearly empty, but I don't know]. Sorry I don't have any reference, nor do I even remember the make of terminal, which was basically a VT100 knock-off, but supported smooth scrolling. – supercat Jan 12 '22 at 20:23
  • @Raffzahn: The terminal I remember from the 1980s was, from what I remember of how it worked, probably connected with a half-duplex shared line running at about 19200 baud. Occasionally a light on the terminal marked "POLL" would blink, and if one hit SEND it would home the cursor and lock the keyboard until the next time the POLL light blinked, at which point the cursor would quickly scan down the display while the poll light would stay on, and then the screen would clear and the system would receive a new screen full of text from the remote, probably also at 19200. – supercat Jan 12 '22 at 20:28
  • @Raffzahn: I think the real definition of "dumb" terminal back in the day was any terminal that was inferior to some more expensive model the manufacturer would rather sell instead, while a "smart" terminal was any terminal that the manufacturer wanted to portray as superior to other units. I don't know if the OCLC terminal used a microprocessor or not; using a microprocessor would likely have been cheaper than using hardwired logic, but I don't recall it doing anything that wouldn't be possible with hardwired logic. – supercat Jan 12 '22 at 20:31
  • @Raffzahn You've included an advertisement for a VT52, whose price had been slashed to $1,350 in 1970. In the same year I was working as a Junior Lecturer, for a salary of $3,000--so the "slashed" price was still about 5 months salary! – Simon Crase Jan 13 '22 at 01:11
  • @SimonCrase :)) No one said terminals were cheap. In fact, this is part of the reason why at universities dumb(er) terminals were extremely common, while smart ones were mostly sold for business use. In education and science it's usually about being able to show and use computers at all, while in business it's about making the needed units more productive - which justifies higher investment. – Raffzahn Jan 13 '22 at 01:27
  • @Raffzahn I know. We used teletypes (ASR-35, I think) until after 1975. – Simon Crase Jan 13 '22 at 02:31
  • I agree for what it's worth. However my recollection is that the original KSR-33 only had CR, LF, BEL and - rarely used - answerback using a programmed drum. As the carriage moved right it tensioned a spring; I don't think there was a tab facility (i.e. a single code which could tension the spring by multiple increments) or a backspace (i.e. an escapement to release the spring by a single increment). That's probably an adequate definition of "glass TTY", particularly since "TTY" literally means "Teletype" which was a product name, and "dumb terminal" can reasonably be taken as a synonym. – Mark Morgan Lloyd Jan 13 '22 at 10:43
  • It is perhaps of interest that at the hyperintelligent end of terminals there were for a time dedicated hardware X terminals (to be distinguished from the xterm program). These were persistently associated with a specific central machine, and provided a display (with keyboard and mouse) on which they ran an X11 server for X11 clients running on the associated central machine. One can draw the line wherever one wants, but there is a big gap between endpoints such as those X terminals and any of the other terminals discussed here. – John Bollinger Jan 13 '22 at 15:08
  • I didn't notice the term "smart" in the ad for the VT52. While terminals like the VT52 and VT100 were more sophisticated than their predecessors, does that imply that they were considered "smart"? – supercat Jan 13 '22 at 19:11

I suggest a somewhat different definition of dumb terminal. While clearly some terminals were smarter than others, I don't think things such as programmable selection of fonts and colors, direct cursor addressing, or even limited block mode operation really makes a terminal "smart". But there are a few key things that do make a terminal smart:

  • Local storage beyond the bare minimum needed for configuration. A terminal with a few bytes to store settings instead of dip switches isn't "smart". But a terminal with a local floppy drive, hard drive or magnetic tape for storage is "smart". Punched paper tape or other storage that is used simply as direct load & store to the remote location with no local processing (beyond parity checking or similar) doesn't count.
  • Ability to run local programs. It doesn't matter whether the programs are loaded locally by typing in on the keyboard or loading via punched paper tape, or loaded remotely (like Javascript downloaded in a browser), but if a terminal can run real programs locally, that is "smart". Could be BASIC, assembler, direct binary load (but programs, not loading an alternate character set), APL, whatever.

This clearly puts machines like the Datapoint 2200 into the smart category. It also keeps machines like the range of DEC VT terminals, Wyse terminals, etc. in the dumb category. Heath/Zenith H-19/Z-19 dumb, Heath/Zenith H-89/Z-89 smart.

As a simple example of why I think features such as fonts/attributes should not be something that makes a terminal "smart", consider boldface and u̲n̲d̲e̲r̲l̲i̲n̲e̲: (and ironically had to cheat here to get underline because Markdown as used by SE doesn't support user-designated underline...)

  • Teletype - Boldface: print text, CR without LF, print spaces to position, print text; Underline: print, CR without LF, print spaces to position, print _
  • First dumb terminals - no bold or underline!
  • Later dumb terminals - send attribute (control or escape code) for bold or underline, display text, send attribute to turn off bold or underline
  • HTML - Bold - <b>text</b>; Underline <u>text</u>
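
Three of the four styles above can be sketched side by side (the 'first dumb terminals' simply had neither); the function names here are invented for illustration, not any real API:

```python
# Sketch of the emphasis styles described above.

def teletype_underline(text):
    # Overstrike: print the text, CR without LF, then print underscores
    # over the same positions (assumes the text starts in column 0).
    return text + "\r" + "_" * len(text)

def teletype_bold(text):
    # 'Bold' by striking the same characters a second time.
    return text + "\r" + text

def escape_attr(text, code):
    # Later dumb terminals: attribute on (SGR), text, attributes off.
    return f"\x1b[{code}m{text}\x1b[0m"

def html_tag(text, tag):
    return f"<{tag}>{text}</{tag}>"

print(repr(teletype_underline("dumb")))  # 'dumb\r____'
print(repr(escape_attr("dumb", 4)))      # '\x1b[4mdumb\x1b[0m'
print(html_tag("dumb", "u"))             # <u>dumb</u>
```

In every case the host just emits a character stream; the 'intelligence' is no more than interpreting that stream in a well-defined way.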

In all of the above cases, the host computer is sending a stream of characters that are interpreted by the terminal in some manner. No real "smart" capabilities needed. It is only later with:

  • HTML with CSS - style sheet sent at the beginning or as a separately loaded file, followed by user's choice of tags surrounding the text.

and of course Javascript really making things "smart" that the terminal needs to do anything more than an incrementally more advanced "process the incoming character stream in a well-defined manner".

This keeps a number of color and/or graphics terminals in the "dumb" category as well. For example, the Tektronix 4010 and later emulations such as the Wyse 99GT included high resolution graphics but no permanent storage and no local processing beyond processing the simple drawing commands in the input stream.

Where I do see some fuzziness is with block/batch mode terminals. IBM 3270 is the classic, but there were plenty of other terminals with similar (at least conceptually) capabilities. Even just protected space/unprotected fields is a certain level of "smart", and if there is any validation included (e.g., send a code to indicate a field must be numeric) then that does seem to cross over to "smart". On the other hand, is the ability to mark fields and enforce some validation enough of a "program" to merit "smart"?

Once microcomputers became more common, particularly with the advent of affordable PC networks, the distinction became, in my opinion, a little clearer again. In the mid-1980s and beyond, anything with no local storage and no ability to run "real" programs was "dumb". A PC that used a ROM to boot over the network with no local floppy or hard drive? Smart, because it could run local programs, even if they had to be loaded over the network. A terminal with block mode but no ability to run user programs and no storage? Dumb.

While conceptually similar to dumb terminals in terms of power/capability relative to a remote system, any device capable of running a modern web browser is, by my definition, a smart terminal. If nothing else, if a device can run downloaded Javascript, it qualifies as smart.

  • "Dumb" in this sense is (outside of computing) colloquially a pejorative, and is the converse of "intelligent". This, I think, requires some general computational ability on the part of the terminal, i.e., you can load and run programs -- as you say. – dave Jan 11 '22 at 02:07
  • But @Raffzahn's definition is one I have heard elsewhere. It really comes down to whether smart/dumb is relative to other terminals or relative to "traditional" computers. – manassehkatz-Moving 2 Codidact Jan 11 '22 at 02:13
  • Erm... the question was not what would be considered smart or dumb today, but in the context of terminals and as used in the age of terminals. A PC running some terminal program does not become a terminal. It only acts as one, but stays the PC it is without. A JS-capable browser is, like a Datapoint 2200, beyond being a terminal. – Raffzahn Jan 11 '22 at 03:22
  • @Raffzahn I am basing my answer on the way I referred to such devices back in the 1980s. I considered a typical cursor-addressable/etc. terminal to be a "dumb" terminal when compared to PCs (and Apple ][ etc.) of the era. And I agree "A PC running some terminal program does not become a terminal". I posit that a JS-capable browser "dedicated device" (i.e., unable to "load Apps") is a modern-day smart terminal (i.e., client-only device) - it certainly isn't "dumb", the question is whether it morphs from "very smart terminal" to "full computer". – manassehkatz-Moving 2 Codidact Jan 11 '22 at 03:38
  • @manassehkatz-Moving2Codidact Not sure if I would follow down that rabbit hole. Where is the difference between that 'dedicated' JS device and a laptop with no storage drive but a wireless adaptor mounting some boot drive somewhere in the net - like on an AWS container? I'd say it's still a laptop computer. Similarly, the language/system used for programming/execution makes no difference. Loading some bytecode or x86 neither. JS can be written in C. Last but not least, the difference between an A2 and a terminal isn't being dumb or not but being a terminal or a PC. Like apples and oranges. – Raffzahn Jan 11 '22 at 04:01
  • That is exactly the definition of dumb/smart that I also learned in the 80s/90s. I worked for a company that built terminal emulation cards (BS-2000) and also personal time recording systems for industrial applications (Siemens ES-100/ES-200/ES-300) and everyone made the distinction between dumb and smart terminals following this definition. Local processing and storage capacity=smart, only connected operation=dumb. – Patrick Schlüter Jan 11 '22 at 14:24
  • What I suppose is happening here is a generational shift of definition due to the progress in technology. The distinction between glass TTY/auto-formatting terminals might have made sense in the '60s early '70s, but in the '80s not anymore as even the most primitive microcontrollers (8048, 6511, etc) had enough oompf to make Apple 1 display a laughing stock. The old definition of dumb/smart did not make any sense anymore and was repurposed to something that made sense in the then current technology. – Patrick Schlüter Jan 11 '22 at 14:36
  • Re "DEC VT terminals, ..." I remember when a VT52 was a "smart" terminal. It was smart because it did considerably more than just emulate a Teletype. It allowed the remote computer to move the cursor to random places on the screen and erase rectangles. That made it capable of working with Emacs—the very definition of "smart" in a certain time and place that I remember. – Solomon Slow Jan 11 '22 at 15:47
  • @SolomonSlow - a VT52 was never smart (and I say this as a VT52 fan; the VT100 was a compromise as far as I was concerned). I'd maybe consider the VT62 (block-mode terminal a la 3277, with DDCMP protocol support) to be 'smart', or at least to have a couple of O-levels. – dave Jan 11 '22 at 23:14
  • @another-dave, "Smart" is a relative term. – Solomon Slow Jan 11 '22 at 23:45
  • This is definitely the correct answer. Weird that the other one got more upvotes. – BlueRaja - Danny Pflughoeft Jan 12 '22 at 03:39
  • @BlueRaja-DannyPflughoeft RaffZahn's answer, as usual, is comprehensive and reasonable. I happen to disagree with some parts of it. But he did the "create a question, post an immediate answer" thing, which is 100% legitimate in this system, but which also almost by definition will generate some initial upvotes. HNQ tends to favor first answer, provided it is reasonable (and that answer is reasonable). Plus RaffZahn has an incredibly high rep here for very good reasons. So I disagree, but I ain't complaining. – manassehkatz-Moving 2 Codidact Jan 12 '22 at 03:47
  • Plus especially with HNQ, you get less knowledgeable (for the specific topic, not overall) visiting. The average young developer doesn't understand the glory of a 24x80 green screen, and certainly doesn't know the history. – manassehkatz-Moving 2 Codidact Jan 12 '22 at 03:49
  • 1
    Raffzahn's answer is closer to my experience from the early 1980's. I note that Digital Equipment Corporation's advertising used the word "smart" to distinguish their block-mode terminals (VT131, VT132) from their more economical models (VT102). https://terminals-wiki.org/wiki/images/a/a0/DEC_advertisement_Computerworld_09Nov1981.jpg – RedGrittyBrick Jan 12 '22 at 13:59
10

As a practical matter, a "dumb" terminal is a terminal that has little more than the capabilities specified in the Unix terminfo database for TERM=dumb. Those are:

dumb|80-column dumb tty,
    am,
    cols#80,
    bel=^G, cr=\r, cud1=\n, ind=\n,

or, in English: it displays 80 columns of fixed-width ASCII text; it can move the cursor position back to the beginning of the current line, and to the beginning of the next line, and it can ring a bell. And that's it. You don't even get backspace. (I am not sure what the binary "am" capability means -- "automatic right margin" is the only explanation the manual gives.) This is essentially what a classical teleprinter like the Teletype Model 33 could do.
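Assuming an ncurses terminfo database is installed (true on practically any Unix-like system), you can inspect that entry programmatically rather than reading the source. For example, via Python's stdlib `curses` binding:

```python
# Query the terminfo entry for TERM=dumb via the ncurses database.
# Unix-only; assumes the standard "dumb" entry is installed.
import curses

curses.setupterm("dumb")

print(curses.tigetnum("cols"))    # 80    -- the cols#80 capability
print(curses.tigetflag("am"))     # 1     -- automatic right margin is set
print(curses.tigetstr("cr"))      # b'\r' -- carriage return
print(curses.tigetstr("cud1"))    # b'\n' -- cursor down one line
print(curses.tigetstr("cub1"))    # None  -- no backspace capability at all
```

Asking for any capability the entry doesn't define (backspace, clear screen, cursor addressing) yields an absent value (`None`, `0`, or `-1` depending on the capability type), which is exactly how full-screen programs detect that they cannot run on a dumb terminal.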

I would personally set the dividing line between "dumb" and "not dumb" a little bit higher, with the "glasstty" capability set still qualifying as "dumb":

glasstty|classic glass tty interpreting ASCII control characters,
    am, OTbs,
    cols#80,
    bel=^G, cr=\r, clear=^L, cud1=\n, cub1=^H, kcud1=\n, kcub1=^H,
    nel=\r\n, ht=^I,

Here we have backspace (and thus the ability to edit the current line) and screen clear, but we don't have an addressable cursor, fonts, colors, graphics, or non-ASCII characters. Anything with an addressable cursor is definitely "not dumb". A tty with font and charset control but no cursor addressing could still count as "dumb"; there were teleprinters that had those, after all.

"Smart" is another matter. Personally, the features that other answers say are required for a terminal to be "smart" — local persistent storage, programmability, etc. — sound to me like features that make the device not a terminal anymore, but rather a full-fledged computer. Even X terminals didn't have those. Maybe that means that, to me, there's no such thing as a "smart terminal."

zwol
  • 449
  • 2
  • 8
  • 5
    AM or Automatic Margin simply means that when the output position gets moved past the line length (e.g. 80) it is automatically moved to the first position of the next line - like an implied CR/LF. This is important, as TTYs usually did not insert an automatic CR/LF. Without AM, an application always has to end a line with a CR to advance; with AM, this can be saved whenever the maximum (cols) has been output. Yes, even such basic issues were anything but a given. – Raffzahn Jan 12 '22 at 03:28
  • @Raffzahn Oh, I see, like a purely mechanical typewriter. I'm not old enough for any of those TTYs (although I am old enough to have learned to type on a mechanical typewriter). I was imagining something like "you have to pad all lines on the right to 80 columns with spaces" which didn't make much sense, even if punched cards were involved. – zwol Jan 12 '22 at 03:33
  • End-of-line behavior isn't a given even today. The CFA634 20×4 display, for example, has configurable behavior for both end-of-line and bottom-line newline. Writing past the end of a line can either wrap to the next line (an implicit CR/LF) or have subsequent characters be ignored. A newline on the last line can either move the cursor to the top line (keeping existing text unchanged) or scroll all lines one line up on the display (blanking the last line IIRC); both keep the cursor in the same column. – Alex Hajnal Jan 12 '22 at 13:49
7

"Dumb" terminals had a simple enough basic feature set (add character, scroll up, backspace, etc.) that their logic could be completely implemented without a CPU or microprocessor, some using just TTL SSI and MSI logic devices, plus memory (RAM or shift registers). There were even cookbooks published with logic schematics so you could build one from simple IC parts (and no CPU).

Smart terminals often had a rich enough interface protocol that a processor and software or microcode was required to decode the command sequences, and perhaps control a display with more features (fonts, status bars, colors, etc.)
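To make that concrete: a dumb terminal only has to react to single control characters (CR, LF, BEL), which a hardwired decoder handles easily, while a smart terminal must run a state machine over multi-byte command sequences. A sketch of what decoding just one such command involves, using the ANSI/VT100-style cursor-address sequence ESC [ row ; col H as the example (a real terminal parses this incrementally from the serial byte stream; this toy function takes one complete sequence):

```python
def parse_cursor_address(seq: bytes):
    """Decode an ANSI-style cursor-position command, ESC [ row ; col H.
    Returns (row, col), or None if the bytes aren't such a sequence."""
    if not (seq.startswith(b"\x1b[") and seq.endswith(b"H")):
        return None
    params = seq[2:-1].split(b";")      # e.g. [b"5", b"10"]
    if len(params) != 2:
        return None
    try:
        return int(params[0]), int(params[1])  # int() accepts ASCII bytes
    except ValueError:
        return None

print(parse_cursor_address(b"\x1b[5;10H"))  # (5, 10)
print(parse_cursor_address(b"\n"))          # None -- single control chars
```

Multiply this by dozens of commands (erase, insert, attributes, fonts) and the appeal of a processor plus firmware over discrete logic becomes obvious.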

Dumb terminals existed and were manufactured years before single-chip microprocessors were available. Later, once more advanced ICs became available, a vendor could segment its market with a microprocessor-based terminal that included little more than the features possible in earlier non-microprocessor terminals. Such a model was called "dumb" for marketing reasons.

hotpaw2
  • 8,183
  • 1
  • 19
  • 46
  • 2
    One little problem: The Datapoint 2200, arguably one of the first really smart terminals, as it had full computer capability (programmable, local storage, etc.), was the prototype for the Intel 8008 but actually was implemented using TTL logic chips. So if a "smart" terminal needs a single-chip CPU, does a terminal that doesn't have one automatically count as "dumb"? I don't think so. Capabilities determine dumb/smart (no matter where you draw the actual line between them), not the technology used to implement them. – manassehkatz-Moving 2 Codidact Jan 11 '22 at 22:45
  • 4
    The earliest Datapoint 2200 was implemented with a discrete logic 1-bit serial CPU that reportedly ran its microcode faster than an 8008. So still a processor and software driven smart terminal. Just not with the CPU on 1 chip. – hotpaw2 Jan 12 '22 at 03:21
  • 2
    I'm upvoting this because it answers the question (correctly IMO) without writing a massive essay. – JeremyP Jan 12 '22 at 08:52
2

Firstly let me say that in the UK, using the term "dumb" to mean "unintelligent" is considered offensive by those with speech disorders. It appears to be perfectly acceptable in the US.

My recollection (I'm not going to try and research it) is that the term "dumb terminal" has a rather confused history. I first encountered it in the context of mainframes (specifically ICL mainframes) where a significant amount of input/output processing was offloaded from the central computer, but the usual configuration was to do this processing in a "cluster controller" that supported perhaps 16 or 32 terminals. These terminals had very little logic beyond being able to display a particular character at a particular location.

The cluster controller did everything, from reflecting keystrokes onto the screen (if you got the wiring wrong, your keystrokes would appear on someone else's screen) to some quite complex functionality such as checking that particular input fields were required to be numeric. The intelligence was all in the controller, not in the terminal.

But sometimes you needed a single terminal in a location where there was no cluster, and then you could use an expensive terminal that contained all the logic built in. So you had the choice between using a dumb terminal as part of a cluster controlled by an intelligent controller, or using an intelligent terminal.

The functionality of the mainframe terminal was not unlike that of a simple HTML form. You could send a message out from a transaction processing application with control sequences to mark which parts of the screen were editable and which were fixed, to control the tab order between fields, and to do basic validation. When the user hit SEND, only the input fields were sent back (perhaps only those that had actually been edited). This all served to reduce the IO and processing load on the mainframe, allowing it to support hundreds of terminals despite having remarkably little CPU power.
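That form model can be sketched in a few lines. Everything below (the field names, the dict-based wire format) is an illustrative stand-in, not a real 3270 or ICL data stream; the point is that the host ships the form once and gets back only what the user changed:

```python
# Toy model of block-mode form I/O. The host sends the form definition
# once; the terminal/controller handles every keystroke locally; only
# modified, editable fields travel back on SEND.

form = {                                   # host -> terminal, once
    "name":    {"value": "",      "editable": True},
    "account": {"value": "12345", "editable": False},  # protected field
    "amount":  {"value": "",      "editable": True},
}

def on_send(form, screen_contents):
    """What goes back terminal -> host when the user hits SEND:
    only editable fields whose contents actually changed."""
    return {k: v for k, v in screen_contents.items()
            if form[k]["editable"] and v != form[k]["value"]}

print(on_send(form, {"name": "Smith", "amount": ""}))  # {'name': 'Smith'}
```

One short message per completed form, instead of one interaction per keystroke, is what let a modest mainframe serve hundreds of terminals.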

Later, in the 1980s, when UNIX came along, this all changed. By this time mainframe terminal functionality had moved into the terminal and out of the controller, so in the old sense of the word, all terminals were intelligent. At the same time UNIX and VAX minicomputers had enough CPU power to do all the low-level keystroke processing in the main processor; they had none of the offloaded processing that mainframes had, so in this sense their terminals were "dumb".

However, because applications on a UNIX or VAX machine were notified of every keystroke on the terminal, they were able to deliver a much more interactive and responsive user experience. This led to an inversion of the terminology: the mainframe terminals were referred to as "dumb terminals" not because they lacked internal processing logic, but because they lacked the interactivity to deliver the same kind of user interface.

Michael Kay
  • 569
  • 2
  • 6
  • It might be more correct to state that the minicomputers had enough CPU power to spare to do all the low-level keystroke processing in the main processor. They were not actually more powerful CPUs than mainframes of the time; it was more the willingness of the users and owners to spend their CPU power on that. (Individual machines generally had far fewer users and much less overall load, even sitting idle from time to time, so spending cycles on this was much less likely to significantly delay another user's application.) – cjs Jan 14 '22 at 00:51
  • Indeed. Perhaps the minicomputer operating systems were also better at doing the interrupt handling necessary to process individual keystrokes. – Michael Kay Jan 14 '22 at 09:24
  • @MichaelKay Sometimes yes - smaller system can afford to interrupt "everything/everyone". Sometimes no - mainframe more likely to have a true separate I/O processor. – manassehkatz-Moving 2 Codidact Jan 14 '22 at 13:29
1

The other answers provide lots of details while missing the point.

On business computer systems, terminals were used to fill in forms that were submitted to a computer program, which then returned the next form to be displayed and/or filled in. As well as forms, a program might present a multi-level menu and be reactivated when the user chose an option. (Think IBM mainframe.)

On technical computer systems, terminals were used to edit files (with vi, for example) and to display auto-updated data like readings from sensors on an industrial process. (Think PDP-11.)


We will only consider form filling on business computers from now on.

Clearly, form filling needs logic to move between fields and to edit the text in a field. This logic was implemented in one of a few locations:

  • Within the application, as was common on Unix. This was the most flexible option, but it limited how many users a single machine could support. It also resulted in very slow movement between fields if the data link was slow or the program had been paged out. (Often the application had to respond to each key press as it was pressed.)

  • In a separate I/O processor, as was common with VMS. This permitted the main computer to support many more users, but still had the issue of slow response over poor data links.

  • In a smart terminal: the application sent a description of the form to the terminal, and the terminal returned the values of the fields to the application when the user submitted the form. Often these were multi-page forms with restrictions on the format of data that could be entered into a field, for example a date field. (Hidden fields were often used to track application state, so a form could still be processed if the application had been restarted.) Very common with IBM 360 systems using IBM smart terminals.


A 'dumb' terminal was a terminal without the ability to locally edit forms. HTML form functions (before JavaScript came into use) logically behaved much like a mainframe smart terminal.

Ian Ringrose
  • 425
  • 3
  • 6
  • I wonder why it wasn't common for systems to have some permanently-loaded code that would handle certain kinds of application input processing on a per-keystroke basis, without having to swap in the entire application until a key was hit that could not be handled by the common input handler? – supercat Jan 13 '22 at 18:21
  • 1
    @supercat Programming can get a lot more complicated that way. Varied a lot by system, but if you had a system with 100 terminals and virtual memory and 98% of the time any given terminal was not in use, your system might be able to handle 100 users with only 10x the application memory requirement (plus OS memory requirement). The first time you send a keystroke after being swapped out you get a couple of seconds delay and after that full speed ahead. The problem is when all of a sudden 50 users are going at once. Typical example might be a large-scale airline reservation system - thousands – manassehkatz-Moving 2 Codidact Jan 13 '22 at 18:53
  • of terminals but at any given time only a few people are entering transactions. Banking, many other industries as well. But loading up that same machine with enough memory for everyone would have been cost-prohibitive until relatively recently. – manassehkatz-Moving 2 Codidact Jan 13 '22 at 18:54
  • As far as this answer, basically boils down to "Smart = batch/block/form processing capable". I somewhat disagree, but that is certainly a reasonable definition. – manassehkatz-Moving 2 Codidact Jan 13 '22 at 18:55
  • @manassehkatz-Moving2Codidact: If an application is sitting within a shared "read/edit input buffer unless/until an 'action' key is hit", with characters echoing as typed, as dots, or not at all, then a user may have to wait for a task to get swapped in after hitting an "action" key [e.g. enter, next-field, previous-field, etc.] but such switching wouldn't be necessary when typing individual characters. – supercat Jan 13 '22 at 21:38
  • 1
    @supercat That's true, provided the buffering process knows which characters matter. That can be done (and I am sure has been done) in large specialized systems. But a more general level it is not so simple. Unless you use a programmable front-end I/O processor customized to handle all of that. Kind of like at U of MD in the 1980s where they used Series/1 as a front-end to translate character mode ASCII "dumb" terminals to block-mode 3270 "smart" terminals. (using the character vs block definition of dumb/smart). But those Series/1 were not trivial or cheap. – manassehkatz-Moving 2 Codidact Jan 13 '22 at 21:44
  • @supercat look at how much memory a mainframe with 1000s of connected terminals had. It was all about removing processing that did not need to be inside a transaction lock from the primary processor. – Ian Ringrose Jan 13 '22 at 23:50
  • 1
    @supercat DEC would have tended to put a minicomputer in each office to handle the much cheaper terminals and form editing etc., sending the complete form over the network to the database cluster. IBM also had terminal controllers that converted a few nearby terminals into smart terminals. The difference with IBM is that the customer could not put programs on the processor in branch offices (and terminals), as they just ran the IBM form editing system. – Ian Ringrose Jan 13 '22 at 23:52