4

I was reading this Wikipedia entry and wondered whether any other computer or architecture was ever made with a different base system than base 16.

And if not, are there any ways to do it? (Maybe by making one from scratch, or by reprogramming a specific CPU?)

(might be related to this question)

  • 4
    There are plenty of computers that use a word size that's not a multiple of 4 bits. There are also ternary computers (not using 0 and 1, but three values). – dirkt Jul 01 '20 at 11:19
  • can this be made using a breadboard? (do tell me if this can't be answered in a comment and needs its own question) @dirkt – Nordine Lotfi Jul 01 '20 at 11:20
  • 1
    With a very very big breadboard, sure... you won't find ICs for ternary. Homemade transistorized computers are about that size... – dirkt Jul 01 '20 at 11:23
  • 1
    You need to design (a) a circuit that has 3 states, and (b) logic elements using that circuit, and (c) a computer that uses those logic elements. i.e., you're asking about designing a computer from the ground up. – dave Jul 01 '20 at 11:28
  • I see :) very informative! – Nordine Lotfi Jul 01 '20 at 11:30
  • 3
    Humans don't like plain binary numbers, so it is easier to read by grouping by something close to base 10. 8 or 16 have been traditional choices (being powers of two) – Thorbjørn Ravn Andersen Jul 01 '20 at 14:14
  • 5
    Is there a reason why this question is Retrocomputing, rather than general computer engineering theory? – DrSheldon Jul 01 '20 at 14:54
  • Yes, mostly because I assumed that, if there was an answer to this question, it would more likely already have been found - or should I say, in the past. @DrSheldon – Nordine Lotfi Jul 01 '20 at 21:07
  • 1
    @NordineLotfi The question has already been rejected once, as it is in no way RC.SE specific, but a generic CS question. It doesn't even ask for anything historical, but for 'ways to do something', which is directed at the future, isn't it? – Raffzahn Jul 01 '20 at 21:11
  • @another-dave: Another possibility would be to have a system which uses pairs of bits to represent either a ternary value or a non-data state (either end-of-record or data-not-ready-yet). Using two data wires to send ternary data serially without requiring a predictable bit rate could be more efficient than sending one clock and one data wire (but being able to send only one bit) or one clock and two data wires (sending two full bits, but needing 50% more wires). – supercat Jul 02 '20 at 04:17
  • when I said "ways to do this", while it wasn't in the past tense, it was said while knowing/thinking that "if there is a chance this was previously done in the past, then someone in the present/future would know"... @Raffzahn – Nordine Lotfi Jul 02 '20 at 05:15
  • Yes? Almost all of them are base 2, and 2 is not 16. – user253751 Jul 02 '20 at 10:19
  • While I concede that the original question and post are based on a misunderstanding (most computers being base 2 and never using base 16, which is just a matter of easier presentation), I wish this would be reopened, as it has quite a lot of interesting information that wouldn't appear otherwise on other SEs :D Though I'm satisfied with what was already said, of course. – Nordine Lotfi Jul 03 '20 at 01:36

7 Answers

21

Computers do not in general have "a base 16 architecture". They are binary, i.e., base 2.

The base derives from the number of states a storage element can have. Almost exclusively, we use an electronic switch that can be "on" or "off" -- 2 states, therefore base 2.

You can find at least one example of a base 3 computer in history. There were several machines with a base 10 architecture using a device called a dekatron for storage. But these days digital computers are always binary. The engineering is much simpler.

As far as I am aware there was never a base 16 computer.

Much computer software tends to display numerical values in base 16 because it's convenient for certain users. This is especially true if the word size is a multiple of 4 bits (4 bits holds 16 different values). That is a long way from saying the computer has a base-16 architecture.
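
As a quick illustration of that point, here is a minimal C sketch (the value is arbitrary): the stored bits never change, only the notation we choose to print them in.

    #include <stdio.h>

    /* Print the same 16-bit pattern in several notations, showing that
       the base is a property of the display, not of the machine. */
    int main(void)
    {
        unsigned int word = 0xCAFE;         /* arbitrary 16-bit value */

        printf("hex:     %04X\n", word);    /* CAFE   */
        printf("octal:   %06o\n", word);    /* 145376 */
        printf("decimal: %u\n", word);      /* 51966  */

        printf("binary:  ");
        for (int i = 15; i >= 0; i--)       /* standard C has no %b */
            putchar(((word >> i) & 1) ? '1' : '0');
        putchar('\n');
        return 0;
    }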

dave
  • 35,301
  • 3
  • 80
  • 160
  • 4
    To illustrate your point, the 12-bit DEC PDP-8 used octal (base 8) for most of its documentation. One octal digit is three bits, so a 12 bit word fits neatly into the range 0000₈–7777₈. Sure, it would fit just as well into 000₁₆–FFF₁₆, but the PDP-8 packed its instruction set into the top 3 bits, so one could disassemble an octal dump from the first digit of each word. That wouldn't work so well in hex. And yes, it meant that the PDP-8 only had 8 basic instructions. – scruss Jul 01 '20 at 13:13
  • 1
    @scruss, Not just the PDP-8. Most of the DEC computers--maybe all except for the PDP-11 series and the VAX series?--had word sizes divisible by three, and had docs and front-panel controls that were heavy with octal notation. – Solomon Slow Jul 01 '20 at 14:44
  • One place non-binary digital circuitry is extremely commonly used is MLC flash which stores 4 or more logic levels in a single flash cell – user1937198 Jul 01 '20 at 15:40
  • The Russian Setun was a base-3 computer – Eugene Styer Jul 01 '20 at 16:31
  • Even the PDP-11 preferred octal - because the instruction word was based on 3-bit fields: 8 registers, 8 addressing modes. – dave Jul 01 '20 at 18:26
  • 3
    Communication e.g. data over phone lines has long been non-binary, which is why baud rate and bits per second are different concepts. – dave Jul 01 '20 at 18:30
  • 1
    If computers are binary, can you read or write just a single bit in RAM? – snips-n-snails Jul 01 '20 at 19:40
  • @scruss I've always wondered why software has special support for octal numbers. Besides Unix file mode bits, I didn't see what other use they had. I suppose they emerged because of machines like that. – JoL Jul 01 '20 at 20:10
  • @snips-n-snails By reading a byte, changing the bit, and writing the byte. They're physically implemented by binary means, but that doesn't mean that it necessitates bit-addressable memory. If memory were bit-addressable, it would mean that at any given address size, we'd be able to use 8 times less RAM. For example, 32-bit computers, instead of being able to use up to 4GiB of RAM, would have been limited to 512MiB, to no great advantage. – JoL Jul 01 '20 at 20:24
  • Why octal? Because we only have 8 fingers. – dave Jul 01 '20 at 21:09
  • More seriously, a base that's a power of 2 is convenient, since it means each digit in the base is an integral number of bits - so it's simple to convert numbers to bit-positions in your head. Octal was an 'obvious' choice since we've already got the numerals for it. It's not really a word-length issue; as far as I recall, the 60-bit CDC machines still conventionally used octal notation. At some point hexadecimal became fashionable and we all had to learn to count past 9. – dave Jul 01 '20 at 21:14
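
(To make the octal trick from these comments concrete, here is a minimal C sketch. The example word is the PDP-8 HLT encoding, but treat the code as illustrative: the opcode is simply the first octal digit, i.e. the top 3 bits of the 12-bit word.)

    #include <stdio.h>

    int main(void)
    {
        unsigned int word = 07402;              /* 12-bit word, octal literal */
        unsigned int opcode = (word >> 9) & 07; /* top 3 bits = leading octal digit */

        printf("word   = %04o\n", word);        /* 7402 */
        printf("opcode = %o\n", opcode);        /* 7 -> one of the 8 basic opcodes */
        return 0;
    }
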
16

Early in the history of computing, it wasn't uncommon to have machines that were physically designed to operate in base-10. (Even if signalling was internally on/off.) Designers of the time understood some of the issues associated with representing numbers in base-2 (particularly fractional numbers), software was very primitive, and having a base-10 internal representation had a number of advantages, early on.

The ENIAC did this with decade counters that used a sequence of ten switching devices arranged in a ring to hold a single digit. A sequence of inbound pulses would cause the ring to successively select a different stage to be active, and the specific stage represented the number being stored. Signaling from one counter to another was done by sending an appropriate number of pulses down the wire - a 'three' was represented as three pulses out of a possible ten in the overall cycle. (These ring counters have their origins in pre-computing data gathering devices, which is probably why they were considered well enough known for the ENIAC - which was ultimately a conservative design.)

https://en.wikipedia.org/wiki/Ring_counter
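
A toy software model of such a decade counter (a hedged sketch, not ENIAC-accurate) shows the idea: each pulse advances the single active stage around the ring, and the index of the active stage is the stored digit.

    #include <stdio.h>

    /* Toy decade ring counter: ten stages, exactly one active at a time.
       A pulse advances the active stage; its index is the stored digit. */
    struct decade { int active; };              /* stage 0..9 */

    static void pulse(struct decade *d)
    {
        d->active = (d->active + 1) % 10;       /* ring wraps after stage 9 */
    }

    int main(void)
    {
        struct decade d = { 0 };
        for (int i = 0; i < 3; i++)             /* three pulses store a 'three' */
            pulse(&d);
        printf("stored digit: %d\n", d.active); /* 3 */
        return 0;
    }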

UNIVAC and IBM's early 'business' (as opposed to 'science') computers were also all decimal based machines, albeit with more efficient implementation strategies. The IBM 650, among others, used Bi-quinary coded decimal. https://en.wikipedia.org/wiki/Bi-quinary_coded_decimal
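
For illustration, a rough C sketch of the bi-quinary split described on that page: each decimal digit is held as a 'bi' component (0 or 5) plus a 'quinary' component (0..4), with exactly one indicator active in each group.

    #include <stdio.h>

    int main(void)
    {
        for (int digit = 0; digit <= 9; digit++) {
            int bi  = digit / 5;    /* selects the 0-lamp or the 5-lamp   */
            int qui = digit % 5;    /* selects one of the five 0..4 lamps */
            printf("%d = bi:%d + qui:%d\n", digit, bi * 5, qui);
        }
        return 0;
    }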

Machines quickly pivoted over to fully binary designs, with the decimal arithmetic portion moving 'up' the stack, although even then BCD support of various forms was common in instruction sets (including x86 - https://en.wikipedia.org/wiki/Intel_BCD_opcode ).
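
As a rough model of what that BCD support does (a simplified sketch; the real x86 DAA instruction also tracks the auxiliary-carry and carry flags), adding two packed-BCD bytes is a binary add followed by correcting any nibble that ran past 9:

    #include <stdio.h>

    /* Simplified packed-BCD addition of two 2-digit values (00..99),
       result modulo 100. Roughly the correction x86's DAA applies
       after a binary ADD, minus the flag handling. */
    static unsigned bcd_add(unsigned a, unsigned b)
    {
        unsigned sum = a + b;
        if ((sum & 0x0F) > 9 || (sum & 0x0F) < (a & 0x0F))
            sum += 0x06;            /* correct the low decimal digit  */
        if ((sum & 0xF0) > 0x90)
            sum += 0x60;            /* correct the high decimal digit */
        return sum & 0xFF;
    }

    int main(void)
    {
        /* 0x27 + 0x35 encode decimal 27 + 35; expect 0x62 (decimal 62) */
        printf("%02X\n", bcd_add(0x27, 0x35));
        return 0;
    }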

HP calculators also had extensive hardware level support for BCD arithmetic, including a base-10 mode in the ALU and registers divided into fields based on base-10 digits.

https://www.hpmuseum.org/techcpu.htm

More recently, flash memory parts are moving away from two levels per memory cell (1 bit per cell) to multiple bits per cell (with the implication that each cell is no longer binary, but rather base-4, -8, etc.)

https://en.wikipedia.org/wiki/Multi-level_cell
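
A toy model of the idea (hedged: real flash uses more levels, Gray coding, and heavy error correction): a 2-bit MLC cell distinguishes four voltage levels, so a read yields one of four values - one base-4 digit per physical cell.

    #include <stdio.h>

    int main(void)
    {
        double thresholds[3] = { 1.0, 2.0, 3.0 };   /* made-up level boundaries */
        double cell_voltage  = 2.4;                 /* made-up read-back value  */

        int level = 0;                              /* 0..3: one base-4 digit   */
        for (int i = 0; i < 3; i++)
            if (cell_voltage > thresholds[i])
                level = i + 1;

        printf("level %d -> bits %d%d\n", level, (level >> 1) & 1, level & 1);
        return 0;
    }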

mschaef
  • 4,836
  • 1
  • 16
  • 23
11

TL;DR:

Computers operate not in Base 16 (Hex) but Base 2 (Binary). Hex is only used as a convenient way for us humans to handle binary.


In Detail:

Any ways to have an architecture with a different base than base 16?

I don't know of any operating in base 16. Today essentially all computers operate in base 2. In the olde times (tm) there were many base 10 and a few base 3 machines.

The only partial system operating in base 16 is the floating point format used by IBM's /360 mainframe family; the /360 and its follow-up systems are kind of an oddity in that regard (*1). But then again, this is only how the values are handled; in storage they are again binary. The only other base still supported by many CPUs is decimal (base 10), but likewise only for operations, as the storage is again binary (4 bits per decimal digit).
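
For the curious, here is a minimal C sketch of decoding that /360 short floating point format (sign bit, 7-bit excess-64 exponent that is a power of 16, 24-bit fraction of six hex digits; the layout follows the published S/360 format, but treat the code as illustrative):

    #include <stdio.h>
    #include <math.h>
    #include <stdint.h>

    /* Decode a 32-bit IBM S/360 short hexadecimal float:
       bit 31: sign; bits 30..24: exponent, excess-64, power of 16;
       bits 23..0: fraction, value 0.f1f2f3f4f5f6 in base 16. */
    static double s360_float(uint32_t w)
    {
        int    sign = (w >> 31) & 1;
        int    exp  = (int)((w >> 24) & 0x7F) - 64;  /* power of 16     */
        double frac = (w & 0xFFFFFF) / 16777216.0;   /* fraction / 16^6 */
        double v    = frac * pow(16.0, exp);
        return sign ? -v : v;
    }

    int main(void)
    {
        /* 0x41100000: exponent 16^1, fraction 1/16 -> value 1.0 */
        printf("%f\n", s360_float(0x41100000u));
        return 0;
    }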

I was reading this Wikipedia entry [about numeral systems] and wondered whether any other computer or architecture was ever made with a different base system than base 16.

Besides the fact that there never was one, the wiki entry also doesn't claim such a machine ever existed. It only states that base 16 (aka hexadecimal/sedecimal) is used as a "compact notation for binary data", much the same way as it states for octal.

Thinking of it, are you maybe mixing up the presentation of binary data to a computer user with the number system the machine uses?

Base 8 or base 16 are, much the same way as base 10, used to make binary readable to average users. It's simply handy for us to operate with a 4-digit hex number instead of 16 binary digits - isn't it?

Which number system is used for human representation depends not on the CPU, but on convention. Octal comes traditionally from classic machines with word sizes that are a multiple of 3, while hex originated with IBM's /360 series, as it was the first machine to widely use multiples of 4 as word sizes (BCD/nibble, byte, halfword, word, doubleword).

And if not, are there any ways to do it? (Maybe by making one from scratch, or by reprogramming a specific CPU?)

As said, (today's) computers do not operate in base 16, but in base 2. Their data is only displayed to us humans as base 16. But yes, a computer can be made to operate on units in any base you want. Except that it will make the machine more complicated and more expensive.

Binary is not only the simplest system, as there is no other with fewer elements; it also fits quite well with the way electronics work. Systems with a power-of-two base (4/8/16/32/...) would fit as well, with only minimal overhead, but bring no advantage over base 2. Any other system will end in much more complex hardware and lower performance.

But why do it at all? The data can be converted anyway. So the most sensible approach is to use the most basic format, and convert it later to anything the user wants.
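
That conversion is trivial in software, which is part of the point. A minimal sketch of rendering a binary-stored value in whatever base the user asks for:

    #include <stdio.h>

    /* Render an internally-binary value in any base 2..36. The stored
       representation never changes; only the display does. */
    static void print_in_base(unsigned n, unsigned base)
    {
        const char digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        char buf[40];
        int  i = 0;
        do {
            buf[i++] = digits[n % base];
            n /= base;
        } while (n > 0);
        while (i > 0)
            putchar(buf[--i]);
        putchar('\n');
    }

    int main(void)
    {
        print_in_base(51966, 16);   /* CAFE */
        print_in_base(51966, 3);    /* the same bits, rendered in ternary */
        return 0;
    }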

(might be related to this question)

It's not really a good idea to repeat questions with essentially the same content that have been rejected before.


*1 - Amdahl's decision (he headed that development) was based on the fact that the /360 ISA was already built around a datapath to handle 4-bit values for BCD - which was very important back then and still is today for accounting - so doing hex-based float not only came at a lower price tag than binary, but also benefited from every improvement made to speed up BCD.

Raffzahn
  • 222,541
  • 22
  • 631
  • 918
  • 1
    QLC SSDs actually are base-16 (4 bits per cell), while TLC SSDs are octal. That's of course storage, not computation. You do see the base-16 and base-8 math inside their error-correction codes though. – MSalters Jul 02 '20 at 15:25
  • Didn't know that! @MSalters I'm glad I posted on Retrocomputing SE :D – Nordine Lotfi Jul 02 '20 at 23:34
  • @MSalters Right. Then again, there are two-level and single-level cells as well, but none of that is visible or even detectable during operation. – Raffzahn Jul 03 '20 at 05:05
  • The question here seems to be asking about a processor's word size (16-bit) while using the wrong terminology (base 16). I agree with this answer in that computers always use base 2, but a question about the history of computers with different word sizes would probably get more useful answers IMHO – Paul Humphreys Jul 07 '20 at 19:46
  • @PaulHumphreys To me the question is clearly about base 16, not 16 bit. Not at least supported by the OP adding a link to a wiki entry about numeral systems. (Also, not sure why you comment on an answer if criticising the question) – Raffzahn Jul 07 '20 at 19:53
  • @Raffzahn I'm not trying to criticise the (IMHO reasonable) question per se; it's just that to me the question being asked seems to indicate a confusion between numeral bases and processor word sizes, despite the link included in the question. – Paul Humphreys Jul 07 '20 at 23:45
5

You have some good answers already, but here is a specific example. The first computer that I used, back in 1974, was not in any way a base 16 system. The smallest addressable unit of memory was a 24-bit word. Characters were stored using a 6-bit code (obviously not ASCII), four to a word. These characters were not individually addressable, but there were some bit operations to help, e.g. rotate by 6 bits. There was no similar special handling for 4 or 8 bits; these had no special significance. Octal (base 8) played a similar role to hexadecimal today. There was no particular architectural significance; it was just a convenient, compact way to represent the bit values.
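
To make that concrete, here is a hypothetical C sketch (the character codes are made up, not the real ICL code): four 6-bit characters pack into one 24-bit word, and a rotate by 6 brings the next character into position.

    #include <stdio.h>
    #include <stdint.h>

    #define WORD_MASK 0xFFFFFFu                 /* a 24-bit word */

    /* Pack four 6-bit character codes into one 24-bit word. */
    static uint32_t pack4(unsigned c0, unsigned c1, unsigned c2, unsigned c3)
    {
        return ((c0 & 077) << 18) | ((c1 & 077) << 12) |
               ((c2 & 077) <<  6) |  (c3 & 077);
    }

    /* Rotate the 24-bit word left by 6 bits - the kind of helper
       operation mentioned above. */
    static uint32_t rot6(uint32_t w)
    {
        return ((w << 6) | (w >> 18)) & WORD_MASK;
    }

    int main(void)
    {
        uint32_t w = pack4(001, 002, 003, 004); /* made-up 6-bit codes */
        for (int i = 0; i < 4; i++) {
            printf("char %d: %02o\n", i, (unsigned)((w >> 18) & 077));
            w = rot6(w);
        }
        return 0;
    }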

Some more details here: ICL 4120.

The next computer that I used was a Commodore PET and I was surprised by the byte structure and the use of hexadecimal. Byte was a new word to me.

badjohn
  • 2,014
  • 1
  • 9
  • 22
  • Impressive, never heard of this before! – Nordine Lotfi Jul 01 '20 at 12:30
  • What you encounter first often seems normal. Except for a few grey haired oldies, this means that 8 bit byte oriented machines seem normal. I expect that one day it will change again and 8 bit bytes will be an oddity in the history books, as well as the machine that I learned on. – badjohn Jul 01 '20 at 12:35
  • @DarkDust Yes, this is what I meant by "There was no particular architectural significance, it was just a convenient compact way to represent the bit values". So, I am not claiming that this was a base 8 machine. Indeed, it is not as exotic as a base 3 or base 10 system but I have never used one of those. – badjohn Jul 01 '20 at 12:37
1

The question has already been answered by people saying that modern computers are based on groups of binary digits - bits - that assume two values, 0/1; they only appear hexadecimal if you choose to use hexadecimal notation. If you choose to group bits in packets of three, you might use octal. It is nicest if your word size is a multiple of the number of bits in your numeric base representation, but this isn't required: e.g., PDP-11s with 16-bit words usually used three-bit octal notation. Nevertheless, this is all binary related.

But there’s more:

For example, much arithmetic hardware uses binary power-of-2 weights, but has digit values like -1/0/+1 that cannot be represented in a single bit. More advanced multipliers and dividers use larger powers of 2, like 4 or 8, with larger digit sets. But, again, underneath it all things are at least power-of-two based, even if the digit sets are not.
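
A small sketch of one such redundant digit set (radix-2 Booth-style recoding into digits -1/0/+1; the recoding rule is the standard one, the code itself is only illustrative):

    #include <stdio.h>

    /* Recode an 8-bit two's-complement value into radix-2 signed digits
       d[i] in {-1, 0, +1}, Booth-style: d[i] = a[i-1] - a[i], a[-1] = 0.
       Summing d[i] * 2^i recovers the original value. */
    int main(void)
    {
        signed char a = -42;                    /* example value */
        int bits = (unsigned char)a;            /* its two's-complement pattern */
        int prev = 0, sum = 0;

        for (int i = 0; i < 8; i++) {
            int bit = (bits >> i) & 1;
            int d   = prev - bit;               /* digit in {-1, 0, +1} */
            prev = bit;
            sum += d * (1 << i);
        }
        printf("recoded sum = %d (original %d)\n", sum, a);
        return 0;
    }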

Perhaps more relevant is that it used to be quite common to have decimal computers: computers whose basic operations worked on digits whose weights are powers of 10. Nowadays such decimal arithmetic is usually expressed in terms of groups of four binary bits; but in the very old days you might use gears that had 10 teeth or positions, or vacuum tubes that had five or ten operating states, and which therefore are truly not binary based, neither powers of two nor of eight nor of 16.

Krazy Glew
  • 583
  • 3
  • 10
1

And if not, any ways to do it? (maybe by making one from scratch or reprogramming a specific Cpu?)

Yes - the easiest approach is generally an FPGA. These are like a programmable digital circuit, one level down* from a CPU, and can be "rewired" just by downloading a circuit layout description onto them.

*arguably

Obviously an FPGA has lots of overhead compared to a custom-made chip (known as an ASIC). You can only fit a fraction of the number of gates on one, they are slower, etc., but they can easily reproduce 8- and 16-bit CPUs of the '80s and '90s. As long as you have the CPU layouts... which we often don't, but check out Visual 6502 and the Verilog code that was then made for FPGA.

There are various open source CPU architectures today designed specifically for FPGA (or at least with FPGA as an option), like LXP32 and RISC-V. So in principle you could either adapt one or create your own.

You could implement a ternary (base 3) CPU this way if you wanted. However, the transistors and gates making up the FPGA are still Boolean logic (base 2). Theoretically it would be possible to design 3-or-more-valued logic hardware, but unlike an FPGA, that's probably not something you could do in your basement.
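
If you go down that road, the arithmetic is easy to prototype in software first. A minimal sketch of balanced ternary, the Setun-style digit set with -1/0/+1 per trit (conversion only, not a CPU):

    #include <stdio.h>

    /* Convert an integer to balanced ternary: trit i weighs 3^i and
       holds -1, 0 or +1. Illustrative only. */
    int main(void)
    {
        int n = 5;                              /* example value */
        int trits[21], count = 0;

        while (n != 0) {
            int r = ((n % 3) + 3) % 3;          /* remainder in 0..2 */
            if (r == 2) { trits[count++] = -1; n = (n + 1) / 3; }
            else        { trits[count++] = r;  n = (n - r) / 3; }
        }
        printf("5 in balanced ternary, most significant trit first:");
        for (int i = count - 1; i >= 0; i--)
            printf(" %+d", trits[i]);
        printf("\n");                           /* +1 -1 -1 = 9 - 3 - 1 = 5 */
        return 0;
    }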

Artelius
  • 1,030
  • 6
  • 8
1

There have been ternary computers.

IOTA uses ternary, as commented on here, and there is alleged hardware support.

Frank
  • 111
  • 2
  • Looking at the alleged hardware support in IOTA, it is not really ternary based on powers of 3, but binary based on powers of 2, with three values at each power of two. Digit N has weight 2^N, with values -1/0/+1. In J. Robertson's class on computer arithmetic this was called signed binary, one of many redundant representations. – Krazy Glew Jul 02 '20 at 10:11