
After seeing this question, I was struck with an intense curiosity to know:

Were there ever processors with word sizes that aren't powers of two, specifically after the 8-bit byte became the industry standard? (I'm well aware of the 9-, 18-, and 36-bit computers that predate the 8-bit byte.)

I've seen some things that are close to this, for example the PIC24 series of 16-bit processors that uses a 24-bit instruction word, but I'm not aware of any actual 24-bit (etc) processors.

If not, why not? I can think of a few possible downsides, but I'm not knowledgeable enough about the history to actually know.

Hearth
  • I guess the MCU can be built with an arbitrary number of bits, but the CPU has to connect to an existing bus and existing memory. Memory manufacturers won't make a small batch of non-standard memory cells. –  Nov 17 '19 at 16:42
  • 18-bit was much used at one time –  Nov 17 '19 at 16:49
  • @Neil_UK I'm aware of those, and a good answer to this may well include them, but I'm looking specifically for anything that came after the de facto standardization of the 8-bit byte. –  Nov 17 '19 at 16:56
  • Certainly were and still are. For reasonably current ones, look at the Motorola (OK, not that current; try Freescale, er, NXP) DSP56K. I have some memory of Analog Devices (21xx or SHARC) having had a 24-bit architecture but can't find references at the moment (SHARCs are now 32-bit). – user_1818839 Nov 17 '19 at 17:15
  • https://en.wikipedia.org/wiki/Word_(computer_architecture) details word sizes – Erik Eidt Nov 17 '19 at 18:01
  • When did the 8-bit byte become industry standard? Was it with the invention of EBCDIC in 1963? – JeremyP Nov 18 '19 at 08:53
  • There were 2-bit CPU slices (the Intel 3000 series) that could build a CPU of "any" multiple-of-2-bit width. There were special 1024-bit military CPUs for radar systems. Also, IIRC, there were 12-bit CPUs out there too ... – Spektre Nov 18 '19 at 09:06
  • I did a design using bit-slice parts (where you could even define your own instruction set!) and it supported arbitrary word sizes (although it could become very complex). The book: http://bitsavers.trailing-edge.com/components/amd/Am2900/Mick_Bit-Slice_Microprocessor_Design_1980.pdf – Peter Smith Nov 18 '19 at 14:11
  • The HP-41C's CPU had 10-bit instructions (in ROM) and dealt with 8-bit data in 56-bit registers (in RAM). IIRC, RAM access was always 56 bits wide (it never fetched/wrote a single 8-bit byte, but always a 56-bit word/register). However, that does not mean that the RAM (or ROM) data bus was 56 (or 10) bits wide; to minimize pin count, the RAM/ROM chips actually used a serial protocol.

    The CPU was officially named the Nut CPU. Not because it was insane, but because the development code name was "coconut".

    – Klaws Nov 18 '19 at 15:58
  • 9 bit CPU? I'm not sure they exist. – RonJohn Nov 18 '19 at 16:34
  • @RonJohn I seem to recall having heard of one, but perhaps I didn't. I may have been thinking of 9-bit memory (the sort used in high-reliability stuff, with an extra parity bit). – Hearth Nov 21 '19 at 15:51
  • @Hearth "the sort used in high-reliability stuff, with an extra parity bit". That would be RAM, not a CPU. :) – RonJohn Nov 21 '19 at 15:53
  • @RonJohn Yes, I know. I'm saying I may have erroneously extrapolated my knowledge of 9-bit RAM into "well 9-bit cpus probably existed at some point" without bothering to actually check that. – Hearth Nov 21 '19 at 15:54
  • Any modern FPU would fit your bill. Most modern DSPs as well. – tofro Jun 20 '21 at 10:23
  • The Illiac used 40-bit words containing either one value or two instructions (left- and right-hand instructions). How quaint. I picked up an original programming manual for the Illiac at a library sale long ago for two bits (aka a US quarter). – HABO Jun 24 '21 at 20:17
  • There were more 4-bit CPUs than you'd think ... https://en.wikipedia.org/wiki/4-bit_computing – Alan B Oct 22 '21 at 15:17
  • @AlanB 4 is a power of 2. I was asking specifically about ones that are not a power of 2. – Hearth Oct 22 '21 at 15:18
  • "The memory manufacturers won't make a small batch of non standard memory cells" DRAM memory often came in 1 and 4 bit sizes.... – rackandboneman Oct 26 '21 at 16:44
  • It's not retro, but FWIW the GA144 is a modern microprocessor with an 18-bit word (https://www.greenarraychips.com) – tonys Jun 22 '23 at 15:09

15 Answers


Certainly.

The DEC PDP-8 family was 12-bit, and so was the Intersil 6100, a single-chip CMOS implementation of the PDP-8 ISA.

There have been many 24-bit DSP-type processors, from Motorola, Microchip, and Analog Devices, among others.

The Burroughs large systems (mainframes), starting with the B-5000 in 1961, used an ISA called "E-mode", which had 48-bit data words (8 × 6-bit characters).

There have been other unusual word sizes as well. The CDC 6600 used 60-bit words. Wikipedia has a fairly complete list.

Dave Tweed
  • Geez. In such a sweeping scope don't forget the 36-bit DEC PDP-10, which was a major force for a while. I worked with and developed code for the PDP-8, PDP-10, PDP-11, and PDP-12. Around this time there really weren't any standards regarding character encoding, despite ASCII (and the earlier 5-bit Baudot code used on earlier teletypes) having been around for some years. – jonk Nov 17 '19 at 18:17
  • @jonk: I have actually worked with all of the machines I mentioned at one time or another. I never worked with the PDP-10, although I was aware of its existence at the time. – Dave Tweed Nov 17 '19 at 18:52
  • I actually was running computational chemistry code (or trying to) on a Harris Series 500 system in the early-to-mid 1980s. This was a 24-bit system when the majority of our code ran on 32-, 36-, or 64-bit systems. I don't know whether you want to say that an 8-bit byte was standard then or not. We also used several word-addressable systems - Cray & FPS. Fun times! – doneal24 Nov 17 '19 at 20:52
  • I think the PDP-8 predates the industry settling on 8-bit bytes though, right? The Burroughs mainframes definitely did. – Hearth Nov 17 '19 at 22:54
  • @Hearth: I suppose it depends on how you define "the industry" at the time. The "elephant in the room" -- IBM -- had always been using 8-bit characters. – Dave Tweed Nov 18 '19 at 01:14
  • Always is a long time. The IBM 704, 709, and 7090 were based on 36-bit words. – Walter Mitty Nov 18 '19 at 02:52
  • @DaveTweed I once designed a translator (in an EEPROM) for EBCDIC to ASCII - fun times indeed. – Peter Smith Nov 18 '19 at 13:44
  • I once used an old (1980s?) Nicolet FTIR that had a 24-bit word size, likely because of a 24-bit DSP. Data files could be written to DOS-formatted floppies, but translation to appropriate PC integer formats was up to the user's ingenuity. – Jon Custer Nov 19 '19 at 00:39
  • The CDC 6600 was a multi-processor system. The main CPU was 60 bits, but it also had peripheral processors which were only 12 bits. – Mark Ransom Jun 22 '21 at 15:38
  • I used a Univac 1100 about 1983 that had a 36 bit word. It used the Univac FIELDATA character set to fit six characters of six bits each into a word. I believe eight bits and ASCII were fairly well fixed by that point but Univac had a historical connection with the 36 bits and kept at it long after it was clear it was a dead end. – user1683793 Jul 03 '23 at 18:25

specifically after the 8-bit byte became the industry standard?

There's no clear point in time when the 8-bit byte became a standard, since even today it's still just a de facto standard¹. The 1970s were probably the transition period, though, given the many newer architectures and standards built around 8-bit bytes; if you look at the word size list, you'll see that architectures from 1975 onward use word sizes that are powers of 2 (the list is not exhaustive, of course).

For legacy reasons, updates to processors with odd word sizes (12/18/24/48/whatever-bit) from earlier architectures were still being developed decades later. For example, the UNISYS 2200 series, with one's-complement math and a 36-bit word, was still supported until at least 2015.

If you only care about some of the buses or a (fixed) instruction length, then some CPUs with a 24-bit address bus (but not data bus) were also produced back when RAM was still expensive and transistor budgets were tight: for example, the 16-bit Intel 80286 (yes, the predecessor of modern x86, able to address 2^24 = 16 MB) and the 32-bit Motorola 68k. And current x86-64 CPUs still implement only a 48-bit (or, with 5-level paging, 57-bit) virtual address space, with physical addresses limited to at most 52 bits.

Nevertheless, most of those aren't as common as 24-bit architectures, which are still widely used and produced even in the 21st century, mainly in the DSP domain, since DSPs are designed for a single purpose: to churn through a lot of data in a known format quickly.

The modern examples are all DSPs for audio processing, because professional audio formats sample data at 24-bit resolution. 20-bit DSPs also exist, for example the Zoran ZR3800x family, along with 20-bit ADCs and DACs for them, such as the AD1871, which supports 16-/20-/24-bit word lengths. And believe it or not, Analog Devices also has a 28-/56-bit audio DSP, the ADAU1701.

Read Analog Devices' article "Relationship of Data Word Size to Dynamic Range and Signal Quality in Digital Audio Processing Applications", section 6, "Processing 110-120 dB, 20-/24-bit Professional-Quality Audio", if you're interested.
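For a rough check on those figures: each bit of sample width adds about 20*log10(2) ≈ 6.02 dB of ideal dynamic range. A minimal C sketch (my illustration; real converters lose several dB to analog noise, which is how 20-/24-bit hardware ends up in the 110-120 dB range):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Ideal dynamic range of an N-bit sample: 20*log10(2^N) dB */
        int widths[] = {16, 20, 24, 32};
        for (int i = 0; i < 4; i++)
            printf("%2d-bit: %6.1f dB\n", widths[i],
                   widths[i] * 20.0 * log10(2.0));
        return 0;
    }
    /* Prints roughly 96.3, 120.4, 144.5, and 192.7 dB. */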

Obviously there are higher-end 32-bit audio DSPs, too. But high-end audio enthusiasts want even more resolution, so TI was pushing 48-bit DSPs, although I'm not sure how successful that was. In any case, there's the 48-bit TI TAS3xxx (TAS3202, TAS3204) audio SoC series with:

  • 76-bit ACC (accumulator) register
  • 28-bit MC coefficient register
  • 32-bit DO1-DO8 registers
  • 2-bit LFS register
  • 48-bit data registers (most of the remaining registers)



¹ Even the ISO/IEC 2382-1:1993 standard doesn't specify that a byte contains 8 bits; only the octet is defined as a unit of 8 bits:

  • byte
    A string that consists of a number of bits, treated as a unit, and usually representing a character or a part of a character.
    Note 1 to entry: The number of bits in a byte is fixed for a given data processing system.
    Note 2 to entry: The number of bits in a byte is usually 8.
  • octet
    8-bit byte
    A byte that consists of eight bits.
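
The C and C++ standards take the same line: a byte is CHAR_BIT bits, with CHAR_BIT guaranteed to be at least 8 but not necessarily 8. A minimal check (on a typical PC this prints 8, but on some word-addressed DSP toolchains CHAR_BIT is 16, 24, or 32, and sizeof(int) can be 1):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);
        printf("sizeof(int) in bytes:     %zu\n", sizeof(int));
        return 0;
    }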
phuclv
  • +1 for pointing out why 24-bit is common for DSPs. I'd heard that 24-bit DSPs were a thing, but never looked at a specific one or made the connection with 24-bit audio. (And BTW, I guess word-addressable memory makes a non-power-of-2 word size a non-problem. You never have to deal with byte addresses or multiply or divide by 3. You can just make your cache lines a multiple of the word size.) – Peter Cordes Nov 18 '19 at 01:38
  • I don't know why this was downvoted – phuclv Nov 18 '19 at 03:01
  • No idea; it surprised me, too. Bringing up the 286 in the same sentence as UNISYS is a bit out of place, though; as you say, it's not a 24-bit CPU. No more than an x86-64 is a 52-bit CPU (physical width in the page-table format), or a 39-bit or 40-bit or whatever (the K8 physical address width, IIRC). – Peter Cordes Nov 18 '19 at 03:12
  • I gave the 286 example just because the OP mentioned the PIC24 examples – phuclv Nov 18 '19 at 03:25
  • Maybe put it in a different paragraph; it distracts from your point about ISAs that do fit the criterion. – Peter Cordes Nov 18 '19 at 03:51
  • "There's no clear point of time where the 8-bit byte became a standard, since it's still just a de facto standard nowadays." After it became the de facto standard, it was enshrined in a real international standard: ISO/IEC 2382-1:1993. – Ron Maupin Nov 18 '19 at 23:48
  • The IBM System/360 and /370 series also used 24-bit addressing; OS/360 commonly used the high byte of "address" fields in system control blocks to store flags. – Joe McMahon Nov 19 '19 at 03:05
  • @JoeMcMahon yes, but they existed long before the 8-bit byte became "standard" – phuclv Nov 19 '19 at 05:21
  • @RonMaupin who said that? Have you read the standard? It says that a byte is "a string that consists of a number of bits, treated as a unit, and usually representing a character or a part of a character. ... Note 2 to entry: The number of bits in a byte is usually 8." The POSIX standard requires a byte to contain 8 bits, but ISO/IEC 2382-1:1993, C, C++ and other standards don't. – phuclv Nov 20 '19 at 01:41
  • Looking at my green card, ASCII was indeed a standard at OS/360's release, even if OS/360's ASCII support was rudimentary compared to EBCDIC, and EBCDIC was definitely IBM's standard. Both were established 8-bit character set standards. Whether or not 8-bit bytes were "standard" everywhere at that point became somewhat moot, given IBM's market dominance. – Joe McMahon Nov 20 '19 at 18:19
  • Has any general-purpose microprocessor-based computer ever used a word size that was not a power-of-two multiple of eight? I know some DSPs use a 24-bit word size, and I know that discrete-logic-based general-purpose microcomputers have used other word sizes, but I think the 8080 pretty well established the dominance of octet-based computing. – supercat Jun 09 '22 at 15:06
  • @supercat the only modern general-purpose CPU I know of that has an odd number of bits in a register is the Itanium, but it's still "octet-based" – phuclv Jun 09 '22 at 15:11
  • @supercat — The DECmate computers used the Intersil/Harris 61xx "PDP 8 on a chip" processors, which had a 12 bit word. https://en.wikipedia.org/wiki/DECmate – Michael Graf Jun 11 '22 at 09:13
  • @MichaelGraf: A twelve-bit word size seems a bit of a strange choice for word processing, since 6 bits would be insufficient to hold mixed-case characters with punctuation, and twelve would seem wasteful for internal text storage (though a twelve-bit screen-character matrix might be good to allow underlining, reverse video, highlighting, etc.). Though maybe twelve-bit words could be efficient for word processing if each word could hold a pair of characters from a [A-Za-z0-9], or a single character from a 252-character set. – supercat Jun 11 '22 at 16:55
  • @supercat — The PDP 8 came first. Then came the PDP 8 on a chip. And then DEC decided to sell it with a word processor ROM. – Michael Graf Jun 11 '22 at 21:34
  • @MichaelGraf: I would expect memory wasn't particularly cheap when the machine was introduced. So I'd be curious how it stored text so as to avoid wasting RAM. – supercat Jun 11 '22 at 23:01
  • It would seem that odd word sizes for DSP applications go back to at least the 1970s. Adage, Inc. made the Ambilog 200, which had 14-bit A/D/A and 30-bit CPU. – Theodore Jun 27 '23 at 14:46

The Garrett AiResearch MP944 has a good claim to be the first microprocessor. It's 20-bit, designed from 1968 to 1970, and classified until 1998, so it is not well known.

The original hardware of the IBM System/38 was a 48-bit CISC, but the design allowed switching that to 64-bit PowerPC RISC without re-compiling.

The Toshiba TLCS-12 family was designed from 1971 to 1973 and is 12-bit. The Intersil 6100 has already been mentioned; it was a single-chip implementation of the older 12-bit DEC PDP-8.

There have been numerous 4-bit micro-controllers, but 4 is a power of two, and hence outside the OP's question.

A 1-bit computer is an interesting corner case, and existed in the form of the Motorola MC14500B.

John Dallman

Were there ever 12-, 24-, 48-, etc bit processors?

Yes! See https://en.wikipedia.org/wiki/Word_(computer_architecture) for an enumeration of historical word sizes.

Before the 8-bit byte became standard, computers were not byte addressable, only word addressable.

Originally, computation was mostly numerically oriented, so a (sometimes double-) word data size of 24 or 36 bits was common, depending on the numeric range and precision desired.  36 bits gives decent precision: roughly 10-11 decimal digits.

Character data was stuffed into words and handled via packing & unpacking.  Efficiency was prioritized for numerics, not text.
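
As a sketch of what that packing looked like (my illustration, using DEC-style SIXBIT, where a character is its ASCII code minus 32, six to a 36-bit word; here the 36-bit word is emulated in the low bits of a uint64_t):

    #include <stdio.h>
    #include <stdint.h>

    /* Pack six SIXBIT characters into a 36-bit word. */
    uint64_t pack6(const char *s) {      /* s: exactly 6 chars, ASCII 32..95 */
        uint64_t word = 0;
        for (int i = 0; i < 6; i++)
            word = (word << 6) | ((uint64_t)(s[i] - ' ') & 0x3F);
        return word;
    }

    /* Unpack back into a 7-byte buffer (6 chars + NUL). */
    void unpack6(uint64_t word, char *out) {
        for (int i = 5; i >= 0; i--) {
            out[i] = (char)((word & 0x3F) + ' ');
            word >>= 6;
        }
        out[6] = '\0';
    }

    int main(void) {
        char buf[7];
        uint64_t w = pack6("HELLO ");
        unpack6(w, buf);
        printf("%012llo -> \"%s\"\n", (unsigned long long)w, buf);
        return 0;
    }

On a real 36-bit machine this shifting and masking happened in registers, which is exactly why text handling was second-class next to numeric work.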

Over time, the importance of text processing grew. The 8-bit byte became standardized, and byte-addressable computers became the norm.  Further, the need for interoperability of data between differing computers also required standardization.  For these reasons, today it no longer makes sense to have a word size that is not a multiple of the 8-bit byte.


Your tag says microprocessors, so a quick comment on that.  By the time microprocessors were developed, the byte was already 8 bits.  As the utility of computers increased, applications became increasingly hungry for memory, so address spaces of ~32k-64k were already the expected minimum.

Integrated circuits represented a substantial increase in performance and a dramatic reduction in size, though they initially came at the cost of a somewhat fixed, and relatively small, total transistor budget.  These factors heavily influenced the design of microprocessors: they tended to have only 8-bit ALUs, even though by then they required 16-bit addressing and address-manipulation capabilities.

Over time, microprocessors far surpassed the capabilities of the old pre-integrated-circuit computers (we now have 64-bit computers, and RISC-V has reserved space for a 128-bit variant), but there was a time when word sizes went a bit backward before getting larger again.

Erik Eidt
  • This was originally on EE.SE and had other tags as well, really. I was thinking of MCUs/MPUs, but DSPs and other stuff also count. I'm specifically asking about things after the industry settled on the 8-bit byte anyway; I know about the PDPs and such that had words of different lengths. – Hearth Nov 17 '19 at 22:56
  • At uni I wrote code for a DEC-10 machine that had a 36-bit word. (Even the 6-character filenames were packed into a single word!) – Simon F Nov 18 '19 at 10:02

Microchip's PIC family has processors with lots of weird word sizes. For instance, on the PIC16F1454/5/9:

  • the program counter is 15 bits (and the stack is thus 15 bits wide as well)
  • instruction words are 14 bits
  • data addresses are 12 bits (5 bits for the bank and 7 bits within the bank; see the sketch below)
  • but data words are 8 bits
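
To make the data-address split concrete, here's a hypothetical helper (my sketch, not Microchip code) that forms the 12-bit effective address from the 5-bit bank-select register (BSR) and the 7-bit address encoded in the instruction:

    #include <stdio.h>
    #include <stdint.h>

    /* 12-bit effective data address = 5-bit bank : 7-bit offset. */
    uint16_t pic16_data_addr(uint8_t bsr, uint8_t insn_addr7) {
        return ((uint16_t)(bsr & 0x1F) << 7) | (insn_addr7 & 0x7F);
    }

    int main(void) {
        /* Bank 2, offset 0x35 -> effective address 0x135 */
        printf("0x%03X\n", pic16_data_addr(2, 0x35));
        return 0;
    }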
jcaron
  • There are also 12-bit PICs like the PIC10F220. The linked page is wrong: it's not 16 kBytes of SRAM, it's 16 bytes of SRAM! Also 256 12-bit words of program EEPROM, confusingly spec'd as 0.375 kBytes. – Warren Young Nov 20 '19 at 03:15

The KDF9 was 48-bit, though this probably predates the 8-bit-byte standard.

KDF9 did not have 'a' character code; codes were device-specific. Printer code, for example, was 6 bits. However, the PROMPT file system (and ELDON2, which adopted the same) used 8-bit 'characters', with the benefit that Algol basic symbols such as underlined procedure were stored as single 'characters'. In this sense KDF9 was using an 8-bit byte, though it was still only a word-addressable machine.

48 bits was a good choice for word size, I believe because of the resulting precision and range for floating-point numbers: KDF9 used a 1-bit sign, an 8-bit exponent, and a 39-bit fraction.

48 bits was also a good choice on KDF9 because it allowed a counter/increment/modifier triple, 16 bits per subvalue (loaded into a "Q-store" for use), to be stored in a single memory word.

dave

The ICL 1900 was a 24-bit computer.

Atlas was a 48-bit computer.

There is a list of 12-bit machines on Wikipedia which includes the Ferranti Argus.


The ICL 1900 series indeed used 24-bit words, treated as four 6-bit characters. Using 8-bit media like paper tape required escape characters (called alpha, beta and delta) to switch between cases and special characters.

ICL replaced the 1900s with the 2903 and ME29, a 32-bit architecture cut back to 24 bits for compatibility with the 1900s.

ICL also had office machines called System 10 and System 25, which it inherited from the Singer Sewing Machine Company. That wasn't even binary: it used 60-bit words as ten 6-bit characters holding decimal digits for calculation.

The Burroughs large systems had 64-bit memory words, with only 48 bits available to user code. Each word also had 8 parity bits; the other 8 bits were assigned to various protection mechanisms, of which I remember read-only, code/data, and system/user bits. Serious memory protection.

Those four systems ate my life for 20 years.

Paul_Pedant

Yes. A semi-modern example: the DSP56002, a 24-bit DSP from NXP.

filo

The eZ80 is a continuation of the Z80 family, sporting 24-bit register pairs.

Which is, quite frankly, fantastically odd: a single register is 8 bits, but a register pair is 24 bits. What's even better is that the upper 8 bits of a register pair are hidden, so you have to jump through hoops to get at them as a separate register.

mid
  • I wouldn't call it 'hoops'. The eZ80 can run in legacy Z80 mode or in 24-bit mode. It's no different than an x86-64 CPU treating the bottom half of RAX as EAX in 32-bit mode. – J... Nov 18 '19 at 15:49
  • It's a lot more hoops than just shifting the register, like you could in x86. – mid Nov 18 '19 at 21:37

To give another example, the Symbolics Lisp machines had first 36-bit (the 36xx series, 1980s) and then 40-bit (Ivory, late 1980s and early 1990s) word sizes. These were well after the de facto standardisation on 8-bit bytes, and indeed the part of a word that held data was 32 bits in both architectures: the additional bits were tag bits.

ignis volens

It is far too long since I programmed one, but I believe the English Electric LEO was 35-bit, chosen because that was the minimum width that could provide the same precision as the 10-digit calculators popular at the time.
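
(That width checks out: ten decimal digits of magnitude need ceil(log2(10^10)) = 34 bits, and one more bit for the sign makes 35.)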

JP H
  • Was this after the 8-bit byte became the standard, as I specified in the question? Because I do know there were a lot of weird word lengths in the relatively early days of computing. – Hearth Jun 20 '21 at 05:24
  • Ah, LEO: Lyons Electronic Office. That was Lyons' tea rooms (several thousand of them), who solved their accountancy problems by making their own computers. When I joined ICL in 1968, there were all kinds of manuals etc. with their previous name: English Electric Leo Marconi Power-Samas. I also recall my first Defence contract: they reprinted the Ladybird children's book "How it Works: The Computer" in plain brown covers to educate the military top brass. – Paul_Pedant Jun 20 '21 at 17:31

Analog Devices SHARC DSPs:

  • 48-bit instructions
  • 40-bit floating-point numbers
  • 32-bit integers
Grabul

The DECSystem-10 (and -20) had 36-bit words, 18-bit addresses. A "byte" was any contiguous set of bits within a word, ranging in size from 1 to 36.
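
To illustrate (my sketch, emulating a 36-bit word in a uint64_t): the byte instructions such as LDB pulled out a field of any width given its position and size, e.g. the common layout of five 7-bit ASCII characters per word:

    #include <stdio.h>
    #include <stdint.h>

    /* Load an arbitrary-width "byte": a field of 'size' bits whose
       rightmost bit sits at bit 'pos' of the 36-bit word. */
    uint64_t load_byte(uint64_t word, int pos, int size) {
        uint64_t mask = (size >= 64) ? ~0ULL : ((1ULL << size) - 1);
        return (word >> pos) & mask;
    }

    int main(void) {
        /* Five 7-bit ASCII chars in bits 35..1 (bit 0 unused), the
           usual PDP-10 text layout. */
        uint64_t word = 0;
        const char *s = "HELLO";
        for (int i = 0; i < 5; i++)
            word |= (uint64_t)s[i] << (29 - 7 * i);
        for (int i = 0; i < 5; i++)
            putchar((int)load_byte(word, 29 - 7 * i, 7));
        putchar('\n');
        return 0;
    }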

The Univac 1100 series had 36-bit words and 18-bit addresses. Within a word, one had access to fixed sub-word fields of 6, 12, or 18 bits, some with optional sign extension. It used one's-complement arithmetic, and had both a positive zero (no bits set) and a negative zero (all bits set). Depending on which comparison was used (arithmetic or logical), +0 was, or was not, equal to -0. Also, -0 + -0 = +0.
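
The two zeros are easy to demonstrate (a toy simulation in C, using an 18-bit field to stand in for a Univac sub-word; negation in one's complement is just bitwise complement):

    #include <stdio.h>

    #define MASK18 0777777u                 /* 18 bits, written in octal */

    unsigned neg18(unsigned x) { return ~x & MASK18; }

    int main(void) {
        unsigned pz = 0;                    /* +0: no bits set  */
        unsigned nz = neg18(pz);            /* -0: all bits set */
        printf("+0 = %06o, -0 = %06o\n", pz, nz);
        printf("bit patterns equal? %s\n", pz == nz ? "yes" : "no");
        return 0;
    }

An arithmetic compare on the real machine would call those equal; a logical (bitwise) compare would not, exactly the trap described above.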

Consider the complexity of computer circuit design (these systems used discrete transistor circuits). The above systems' arithmetic/logic units had to be 36 bits wide; smaller word sizes allowed simpler hardware. But if your computations needed really large or small numbers, as in science, you had a problem.

I recall when scientist clients were upset about the loss of floating-point precision when they were forced to migrate from a 36-bit system to a 32-bit system. Eventually, they had to rewrite their code to use 64-bit floats (and convert their data). Good thing it was government work, and they were "clients", not my reviewers.

The CDC STAR was a 64-bit system. Its instruction set liked vectors (a count, followed by up to 65,535 64-bit words) and was heavily pipelined: it could add, multiply, or test two 65K-word vectors to produce a 65K-word vector in one machine instruction.

waltinator

An Internet timeline on byte sizes:

1969 - RFC 5 had variable-length byte sizes

1969 - RFC 6 converts 6-, 7-, 8-, or 9-bit character codes into 8-bit ASCII for transmission

May 1969 - RFC 7 has the byte defined as 8 bits

Character Code Timeline:

ASCII and EBCDIC both date from 1963.

Processors

Given these start dates for the standardization of an 8-bit byte, Burroughs large systems such as the B6500/B6700, with their 51-bit words, are post-1963. The standardization of the byte came so early that the answers here span such a wide variety of CPUs that no single answer provides a complete list, and the external references have to be used.

PDP11