29

Many early computers used ones' complement to represent some kind of signed integer. Examples include the PDP-1, the CDC 6600, and many other popular computers.

The C standard is obviously written with ones' complement machines in mind; for example, it specifies that a signed integer may hold values −32767 to +32767.

But I find modern-day examples of computers that use ones' complement rather hard to come across. I think it's safe to assume that anything you've got that runs a computer program and has signed integers of some kind uses two's complement. So what is the reason for the decline in popularity of ones' complement architectures?

user3840170
Omar and Lorraine
  • 3
    The C Standard definition also permits sign/magnitude representation; I don't know which (if any) popular systems use(d) that. – Toby Speight Jul 25 '18 at 17:17
  • 3
    @Toby the IBM 700/7000 series used sign/magnitude. And IEEE 754 uses sign/magnitude. – Stephen Kitt Jul 25 '18 at 18:08
  • 5
    The fact that you can have a positive or negative zero would lead to no end of confusion. I remember doing this as a student - I had to write out the bit pattern before I figured it out. – cup Jul 25 '18 at 20:28
  • 11
    Because all too often one's complement turns into one's evil twin? – Tommy Jul 25 '18 at 20:53
  • @StephenKitt 754 deals exclusively with floating point. One's complement and two's complement don't really make sense for non-integer types. – Ray Jul 25 '18 at 22:02
  • 4
    @Ray all I wrote was “IEEE 754 uses sign/magnitude”. I wasn’t suggesting anything about whether ones’ or two’s complements would be appropriate. – Stephen Kitt Jul 25 '18 at 22:39
  • 18
    "The C standard is obviously written with one's complement machine in mind; for example, it specifies that a signed integer may hold values -32767 to +32767." In some respects, yes it was written with 1s complement in mind. However, not specifically. In particular, the C standard was written to be as implementation agnostic as possible. The above range allows for 2s complement, 1s complement and sign/magnitude. – dgnuff Jul 25 '18 at 23:24
  • 1
    @Ray - one's complement floating point is perfectly sensible, and was used in the CDC6600 and (I believe) 7600, which were for many, many years considered the best computers available for numeric computations. – Jules Jul 25 '18 at 23:53
  • 3
    Internally it's rare, but don't forget IP, TCP and UDP checksums are all calculated with one's complement. – Digital Trauma Jul 26 '18 at 00:47
  • 1
    The early days of computing were heavily driven by hardware limitations. Anything that made the hardware simpler was good. This pattern kept repeating: when ICs were developed, more advanced hardware offerings took a big step backward in complexity elsewhere, going from 18- or 36-bit CPUs to 8-bit processors (though now on a single chip). – Erik Eidt Jul 26 '18 at 00:54
  • The C standard is written with the intent to not unnecessarily exclude possible representations. So it will make its definitions as close as possible to the least common denominator of all platforms. This has absolutely no bearing on the importance of the platform. – Patrick Schlüter Feb 27 '20 at 08:27
  • @PatrickSchlüter: It was also written with the expectation that people working with various platforms and application fields would recognize and respect precedents appropriate to those fields when practical, and would be more familiar than the Committee with such precedents. The only time anyone should care about whether the Standard mandates something that would be practical on all known platforms should be if someone has to write a C compiler for a platform where it would be impractical. – supercat Feb 27 '20 at 17:12
  • 1
    @Jules: Ones' complement floating-point makes sense if one views the sign bit as extending infinitely far in both directions. Just as subtracting 1 from ...00000 would yield ...11111, subtracting an infinitesimal amount from ...000.000... would yield ...111.111...; interestingly, bitwise operators would behave sensibly with such a format, with the caveat that truncation may be necessary when ANDing two values with opposite sign or ORing two values with the same sign. – supercat Feb 29 '20 at 18:49
  • "it specifies that a signed integer may hold values -32767 to +32767" The standard says that INT_MIN must be at least that low, but it may be lower, and typically is. Seems to me the C standard was written with two's complement in mind, but was designed to permit other representations too. In particular, the way signed->unsigned conversions are defined, seems to favour two's complement. Iff two's complement is used, the bit-pattern is unchanged. https://stackoverflow.com/a/832772/ – Max Barraclough May 22 '20 at 09:59
  • It is worth mentioning that one's complement has a very nice property for certain digital signal processing algorithms. It inherently truncates toward zero. Two's complement truncates toward negative infinity. This can cause certain simple filters to diverge without limit. The fix is, essentially, emulating one's complement and forcing truncation toward zero if the result was negative. This is NOT good for real-time image processing, as it can significantly increase the per-pixel instruction counts, which are what eat your timeline alive, without salt. – John R. Strohm Jul 29 '22 at 12:10

6 Answers

46

I imagine representations other than two’s complement (ones’ complement, sign/magnitude...) declined in popularity because two’s complement is simpler to implement; in particular:

  • addition, subtraction, and multiplication of two signed input values of length n can use the same implementation as the unsigned variants, modulo 2^n (see the C sketch after this list);
  • zero only has one representation, with all-zero bits, which means testing for equality with zero is straightforward (which has knock-on effects on implementations of equality testing etc. since the latter can be implemented using subtraction, which also works for ordering values).
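
A minimal C sketch of the first point (my own illustration, not part of the original answer): a single 16-bit add serves both interpretations, and only the reading of the result bits differs.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two's complement: signed and unsigned addition produce the
           same bit pattern; only the interpretation differs. */
        uint16_t a = 0xFFFC;              /* 65532 unsigned, -4 as int16_t */
        uint16_t b = 0x0006;              /*     6 unsigned,  6 as int16_t */
        uint16_t sum = (uint16_t)(a + b); /* wraps modulo 2^16 -> 0x0002 */

        /* Converting to int16_t is implementation-defined before C23,
           but yields the expected value on any two's complement machine. */
        printf("bits 0x%04X  unsigned %u  signed %d\n",
               (unsigned)sum, (unsigned)sum, (int)(int16_t)sum); /* 2 and 2 */
        return 0;
    }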

Ones’ complement systems still exist, e.g. the UNIVAC 1100/2200 series, but as you mention they are rare (and probably persist only for historical reasons).

Stephen Kitt
  • 2
    This raises the question, then: if two's complement is easier, why didn't the early machines use it? – Maury Markowitz Jul 27 '18 at 01:26
  • 1
    @MauryMarkowitz: The earliest machines did. The real question should be why machines stopped using two's-complement for a while before getting back to it. – supercat Jul 27 '18 at 06:29
  • 1
    The caveat for the multiply in this answer is that it refers to the product modulo 2^n, or in other words the low-order n bits of a product, which are the same for signed or unsigned multiply. In most cases, such as the IBM 360 series, x86 series, 68000 series, ..., there are multiply instructions that produce a product of 2n bits, and in these cases there is a difference between signed and unsigned multiply. – rcgldr Oct 13 '18 at 23:42
  • @MauryMarkowitz - this was more of a case of which machine for the early computers. The CDC 3000 and 6000 series (1960's) used one's complement while the IBM 360 series (1960's) used two's complement. – rcgldr Oct 13 '18 at 23:45
  • 2
    @kasperd - I deleted my prior comment and added a new one about multiply modulo 2^n. My impression is that most processors that produce n bit products have instructions to either produce the lower n bits or the upper n bits of a product, and the upper n bits will be different for signed versus unsigned. – rcgldr Oct 13 '18 at 23:48
  • @supercat For a while CPU designers kept adding human-friendly features to their CPUs to enrich the assembly language. These features later died with the rise of compilers. Nowadays CPU features are added almost exclusively to improve performance. – slebetman Jan 28 '19 at 03:55
  • @rcgldr: If one wants to use the same register input/output logic for multiplication as for other operations, and if multiplication is fast, the cleanest approach is probably to have separate instructions for multiply and yield lower product, yield upper half of unsigned product, yield upper half of signed product, and maybe yield upper half of signed-by-unsigned product [useful for multi-precision or fixed-point math]. If one wants to support larger products directly, I would think the circuitry needed to allow storing multiple results at once into ordinary registers could be better used... – supercat Jan 28 '19 at 17:26
  • ...to add a family of "exchange value with external pipelined computation unit" instructions. Using such an approach, one could quite inexpensively build an execution unit that would allow code to perform an Nx1024-bit multiplication at a rate of one word every 16 cycles, which could overlap other operations such as loading words of the factor and an addend from memory, adding the product to the addend, and writing back the result. – supercat Jan 28 '19 at 17:33
  • The AGC was also one's complement. One should watch Michael Steil's presentation on YouTube about the programming model of the AGC to get a feel for how annoying one's complement is. – Patrick Schlüter Feb 27 '20 at 08:30
20

Two's complement is generally simpler to implement in hardware than ones' complement, except for one thing: a "live" readout of register values that uses one set of lights for positive numbers and one set for negative numbers (blanking whichever set isn't appropriate). That can be accommodated very cheaply and easily with ones'-complement, using one small transistor per bulb and a pair of large transistors to act as master enables for the positive and negative lights. If the states of every register exist as continuously-accessible signals, adding a live readout that shows things in signed format is nice and easy. If a system were to use two's-complement, the values displayed for negative numbers would be off by one, and a significant amount of extra circuitry would be needed for each register's readout to correct it.

In the days when computers had register readouts, using ones'-complement could make the registers easier to read. Two's-complement integer math is better in just about every other way, however.
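
As a rough illustration in C (a hypothetical sketch of mine; a real readout would be transistors, not code): inverting the bits of a negative ones'-complement register yields its magnitude exactly, while two's-complement comes out off by one.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ones' complement negation is a pure bit inversion, so a lamp
           driver can display the magnitude of a negative register by
           simply inverting every bit; two's complement is off by one. */
        uint16_t x = 5;
        uint16_t ones_neg = (uint16_t)~x;       /* 0xFFFA: -5, ones' complement */
        uint16_t twos_neg = (uint16_t)(~x + 1); /* 0xFFFB: -5, two's complement */

        /* "Reading the lamps" by inverting the stored bits again: */
        printf("ones' readout: %u\n", (unsigned)(uint16_t)~ones_neg); /* 5 */
        printf("two's readout: %u\n", (unsigned)(uint16_t)~twos_neg); /* 4, off by one */
        return 0;
    }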

Incidentally, ones'-complement would make sense with floating-point math if one treats 0.1111111111111... (with an unending string of ones) as equivalent to 1.00000000. Viewed in that light, the sign bit controls the state not just of all otherwise-unspecified bits to the left of the number, but also the state of bits to the right. ~0 would thus equal ...11111111.0 + 0.11111111..., i.e. -1+1, i.e. zero. Two's-complement representations are asymmetric, but with integer math the asymmetry is consistently one unit. In floating-point math, the two's-complement asymmetry would vary with scale, which is a bit more awkward. Symmetry is more useful with floating point than with integer math, and thus ones'-complement would be advantageous there compared with two's-complement. Sign/magnitude also works (and is what many systems actually use), but ones'-complement could work essentially as well.

supercat
  • The CDC6600 (and presumably the 7600, which was designed to be at least mostly backwards compatible, however I haven't found any documentation on its data formats in a brief search) used one's complement floating point (and integers too, but it was designed primarily as a floating point machine for handling scientific computation, so the integer support was mostly an afterthought, I suspect). By the time he designed the Cray-1, Cray had switched to two's complement & sign-magnitude, however. – Jules Jul 25 '18 at 23:43
  • One's complement is not that complicated to implement. A 2's complement adder has both carry-in and carry-out; loop them together and voilà, a 1's complement adder. Probably this ease of implementation and ease of negation were the two factors seducing machine architects into selecting 1's complement. – lvd Jul 26 '18 at 16:09
  • 2
    @lvd: A two's-complement adder can process bits or small clumps of bits sequentially, retiring each bit or clump before progressing to the next. Many early machines had ALUs that were smaller than the word size, and so being able to handle bits sequentially was useful. Having to make a second cleanup pass through the bits after discovering the value of a carry flag may be possible, but I wouldn't call it "easy". – supercat Jul 26 '18 at 16:26
  • The majority of 'big' machines of the '60s were not using serial architectures. The only examples I can remember are a miniaturized PDP-8 variant and probably the earliest tube computers, where every logic gate took cubic decimeters. – lvd Jul 26 '18 at 16:33
  • @lvd: There may be efficient ways of designing an ALU which is the exact size of the intended operands, but operating on things that are larger or smaller will create additional complications. If one wants to multiply a pair of two's-complement 8-bit values, for example, one can promote to a larger type, do the multiply, and then discard the upper bits of the result. That won't work in ones'-complement, however. – supercat Jul 26 '18 at 16:43
  • @lvd until someone tries to add -0 and gets a metastable result? – user253751 Jul 26 '18 at 22:20
  • Carry-around shouldn't propagate more than two times over the loop. – lvd Jul 27 '18 at 05:07
17

But I find modern-day examples of computers that use ones' complement rather hard to come across.

I can only think of the Unisys Clearpath here - and even they are Itaniums at the hardware level by now.

The C standard is obviously written with ones' complement machines in mind; for example, it specifies that a signed integer may hold values −32767 to +32767.

Yup, that way either system of signed-integer handling (within 16 bits) will conform.

So what is the reason for the decline in popularity of ones' complement architectures?

It was never really popular in the first place. Even among early computers, the large majority used two's complement due to its much simpler handling.

While it is true that a ones' complement implementation can save circuitry, due to the fact that a subtraction is just an addition with the subtrahend negated (*1, *2), it also adds complexity to (micro)program design and adds pitfalls (*3).

Ones' complement offers advantages in multi-word multiplication and division (*4), as well as for certain mathematical tasks (approaching zero from either side). But it adds complexity to hardware and/or software, as every operation crossing zero needs an adjustment.

To a certain degree the signed zero is in use today almost everywhere, as the IEEE 754 floating-point standard is based around signed values including a signed zero. So with a positive spin it can be said that modern CPUs use both: the core CPU's (integer) handling is two's complement, while associated FPUs enjoy one advantage of ones' complement, the signed zero.


*1 - That's why the Pascaline used nines' complement.

*2 - But only if it's not negative zero and so on.

*3 - Like the need to always know whether a comparison is meant to be numeric (integer), so that a correction for +/-0 can be applied.

*4 - As no additional step for sign calculations is needed.
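
A minimal C sketch (a hypothetical 16-bit example of mine) of the end-around carry and of the negative-zero pitfall that *2 and *3 refer to:

    #include <stdint.h>
    #include <stdio.h>

    /* 16-bit ones' complement addition: an ordinary add, then the
       end-around carry folded back into the low bits. */
    static uint16_t oc_add(uint16_t a, uint16_t b)
    {
        uint32_t s = (uint32_t)a + b;
        return (uint16_t)((s & 0xFFFFu) + (s >> 16));
    }

    int main(void)
    {
        uint16_t three  = 0x0003;
        uint16_t minus3 = (uint16_t)~three; /* 0xFFFC: -3 in ones' complement */
        uint16_t z = oc_add(three, minus3); /* 0xFFFF: negative zero! */
        printf("3 + (-3) = 0x%04X\n", (unsigned)z);
        /* A zero test must accept both 0x0000 and 0xFFFF - exactly the
           kind of +/-0 correction described above. */
        return 0;
    }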

Raffzahn
  • 16
    IEEE floating point is sign-magnitude not ones complement. – Peter Green Jul 25 '18 at 17:31
  • @PeterGreen and it uses the concept of negative zero. Doesn't it? – Raffzahn Jul 25 '18 at 18:12
  • 4
    Yes both sign magnitude and ones complement can represent negative zero. – Peter Green Jul 25 '18 at 18:14
  • @PeterGreen see. That's the point. – Raffzahn Jul 25 '18 at 19:03
  • 4
    How does ones'-complement math offer advantages for multi-word multiplication and division? I see nothing but downside for it from an ease-of-computation standpoint. – supercat Jul 25 '18 at 22:52
  • 1
    @supercat for a two's complement multiplication/division the sign has to be calculated separately, while with ones' complement the sign is always right after the final addition. – Raffzahn Jul 25 '18 at 23:25
  • 6
    @Raffzahn Are you sure your answer isn't potentially confusing sign-magnitude and one's-complement together just because both can represent -0? Using a single-octet example, the former would be 0x81 for -1 while the latter would be 0xFE, and IEEE-347 floating point is like the former, isn't it? – mtraceur Jul 25 '18 at 23:37
  • @mtraceur I'm not saying that ones' complement is used, but pointing out the usage of +/-0.0 in IEEE 754. Ones' complement naturally delivers a -0, while IEEE 754 had to implement a different encoding. – Raffzahn Jul 25 '18 at 23:54
  • 1
    @Raffzahn Thank you, that makes more sense. (Sidenote: I have no idea why I typed "IEEE 347" instead of "IEEE 754" - for this I am filled with shame and have no excuse.) – mtraceur Jul 25 '18 at 23:57
  • 2
    @Raffzahn I think the cause for confusion is "it's in use today": in the context of the greater focus of discussing one's complement, the pronoun "it" can be misinterpreted as referring to one's complement, instead of to signed zero. – mtraceur Jul 25 '18 at 23:59
  • @mtraceur I wanted to offer a positive spin. After all, there are arguments for either system depending on the job to be done. – Raffzahn Jul 26 '18 at 00:05
  • 3
    @Raffzahn: Two's-complement signed and unsigned multiplication both work identically when the source operands are padded to the same size as the result. I'm not sure what advantage I see for ones'-complement there. As for division, I'm also not quite sure what you're getting at. Using four-bit one's-complement math, -6 would be 1001, 2 would be 0010, and -3 would be 1100. How does 1001/0010 yield 1100, and how would 1001/1100 yield 0010? – supercat Jul 26 '18 at 00:34
  • @Raffzahn Would you really say one's complement was not a popular way to handle integer arithmetic in the first place? Can you point to systems pre 1965 which used two's complement for integer arithmetic? – Omar and Lorraine Jul 26 '18 at 09:12
  • @Wilson EDSAC (1949?), Elliott 152 (1950), IBM 701 (1953), PC-1 (1956), Elliott 8xx (1957), PDP-5 (1962), SDS 9xx (1963?). Oh and of course, the IBM/360 in 1964 :)) When talking about pre-1960 computers it's useful to keep in mind that these were mostly rather unique and quite varying designs. A concept of 'popular' is hardly applicable at all. Then again there might be a structure when considering that many two's complement machines were developed outside the US, while US designs widely followed the EDVAC/IAS example. – Raffzahn Jul 26 '18 at 10:41
  • @Wilson Thinking of data formats used for calculation in early machines, there's another, almost more substantial question: What's the reasoning to replace decimal by binary? It's at least valid from a US point of view, where decimal did dominate over binary in early machines - unlike Europe. – Raffzahn Jul 26 '18 at 11:04
  • "for a two's complement multiplication/division the sign has to be calculated separately" -- not quite true. There are simple algorithms that multiply or divide signed numbers directly. For example, the Baugh-Wooley multiplier: http://www.ece.uvic.ca/~fayez/courses/ceng465/lab_465/project2/multiplier.pdf – lvd Jul 26 '18 at 16:20
  • @Wilson: What's the oldest ones'-complement machine you can find? The earliest machines I've looked at were all two's-complement. – supercat Jul 26 '18 at 16:32
  • @Raffzahn: The only advantage of decimal is for human-readable input and output. Binary stores data more compactly, and can be processed faster and with less circuitry than decimal. – supercat Jul 26 '18 at 16:34
  • @lvd you're aware that Baugh-Wooley was first published in 1973? – Raffzahn Jul 26 '18 at 16:51
  • I see nothing in that article about the Pascaline that suggests a nines'-complement carry chain. X-Y would be computed by adding Y to the nines' complement of X and showing the nines' complement of the result, but an analogous approach could be done on two's-complement machines just as well using a ones'-complement (invert all bits) instruction, without need for a ones'-complement carry chain. – supercat Apr 01 '21 at 08:11
6

Not quite an answer but an observation.

While ones' complement machines are now almost extinct, ones' complement computations are still here, and at large scale.

Every IPv4 packet header has a checksum that is calculated exactly as the ones' complement sum of the 16-bit words of the header contents.

Same with TCP and (optionally) UDP packets, where the whole packet is checksummed in the same way.
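
For the curious, a minimal C sketch of that checksum in the style of RFC 1071 (the function name and the even-length assumption are mine):

    #include <stddef.h>
    #include <stdint.h>

    /* Ones' complement sum of 16-bit words, as used by the IPv4, TCP
       and UDP checksums (RFC 1071). Assumes len is even for brevity. */
    uint16_t inet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i += 2)
            sum += (uint32_t)((data[i] << 8) | data[i + 1]);

        /* Fold the carries back in: the end-around carry that turns a
           plain sum into a ones' complement sum. */
        while (sum >> 16)
            sum = (sum & 0xFFFFu) + (sum >> 16);

        return (uint16_t)~sum; /* the header stores the complement of the sum */
    }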

lvd
  • 9
    Computation of the IPv4-style checksum is often best done by computing the sum with a longer type and then reducing it mod 65535 using shifts and masking operations. An advantage of the 16-bit ones'-complement checksum is that swapping the byte order of the source data will swap the byte order of the sum, so the process as a whole is byte-order agnostic. – supercat Jul 26 '18 at 22:31
  • Yes, it is still a way two's complement machines emulate ones' complement calculations. And the byte-order independence appears due to the carry-around basics of ones' complement addition. A nice example of that can be found in the Linux kernel sources. – lvd Jul 27 '18 at 08:54
3

One's complement requires different operations for signed integers than for unsigned integers. With two's complement, ADD and SUB are the same for both number types, except for the overflow flags; typically a CPU sets both an unsigned overflow flag (carry) and a signed overflow flag for each ADD or SUB, which is easier than providing two kinds of ADD and two kinds of SUB. This is closely related to modulo arithmetic: two's complement is modulo-2^N arithmetic just like unsigned integers, just with different labels (e.g. in mod 256, the number after 127 is called -128 instead of 128), whereas one's complement doesn't follow the rules of modulo arithmetic.

MUL is often also practically the same for two's complement and unsigned, provided it doesn't give the answer in a field bigger than that of the operands; otherwise you need a separate signed version. DIV always needs a separate signed version.
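
To make the caveat concrete, a small C sketch (values are mine; the conversion from uint16_t to int16_t is implementation-defined before C23, but behaves as shown on two's complement hardware):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The LOW half of a product is the same whether the operands
           are read as signed or unsigned... */
        uint16_t a = 0xFFFD;             /* -3 as int16_t, 65533 unsigned */
        uint16_t b = 0x0005;
        uint16_t lo = (uint16_t)(a * b); /* 0xFFF1 either way */

        /* ...but the FULL 32-bit products differ, which is why wide
           multiplies need separate signed and unsigned instructions. */
        int32_t  sp = (int32_t)(int16_t)a * (int16_t)b; /* -15    = 0xFFFFFFF1 */
        uint32_t up = (uint32_t)a * b;                  /* 327665 = 0x0004FFF1 */

        printf("low 0x%04X  signed 0x%08lX  unsigned 0x%08lX\n",
               (unsigned)lo, (unsigned long)(uint32_t)sp, (unsigned long)up);
        return 0;
    }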

1

The C standard is obviously written with ones' complement machines in mind; for example, it specifies that a signed integer may hold values −32767 to +32767

No, that would be the minimum set of values for a signed 16-bit integer. Two's complement is a superset of this. In fact, C11 explicitly allows for both 1's and 2's complement integers (section 6.2.6.2).

But I find modern-day examples of computers that use ones' complement rather hard to come across. I think it's safe to assume that anything you've got that runs a computer program and has signed integers of some kind uses two's complement. So what is the reason for the decline in popularity of ones' complement architectures?

With 2's complement, the processor does not need to know whether it is dealing with a signed number or an unsigned number. Take, for example, the 6502. The bit pattern 1111 1100 can be viewed as either -4 or 252, but when you add it to another number, the CPU does not care whether it is signed or not. It can use the same circuitry, and the answer comes out right by just doing an unsigned add, e.g.

0000 0010 + 1111 1100 = 1111 1110 = -2 or 254 depending on signed or unsigned

You just need an extra few gates to get the signed overflow as well as the unsigned carry. I think this is the main reason almost everybody uses 2's complement nowadays.
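
A quick C sketch of those extra gates (the V-flag expression is the standard formula for signed addition overflow; this is an illustration of mine, not actual 6502 circuitry):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The worked example above, with 6502-style C and V flags
           derived from the same unsigned add. */
        uint8_t a = 0x02;             /* 2 */
        uint8_t b = 0xFC;             /* -4 signed, 252 unsigned */
        uint8_t r = (uint8_t)(a + b); /* 0xFE: -2 or 254 */

        int carry    = ((unsigned)a + b) > 0xFFu;       /* unsigned carry out (C) */
        int overflow = ((a ^ r) & (b ^ r) & 0x80) != 0; /* signed overflow (V)    */

        printf("r=0x%02X C=%d V=%d\n", (unsigned)r, carry, overflow); /* 0xFE, 0, 0 */
        return 0;
    }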

Compare with floating point representations. Floating point numbers are always signed, so the advantage of being able to treat signed and unsigned numbers the same is negated (ahem). There are other technical reasons for using 1's complement as well, but you'll find that the floating point format that everybody uses nowadays (IEEE754) uses a 1's complement mantissa.

JeremyP
  • IEEE754 doesn't use a 1's complement mantissa – phuclv Aug 09 '20 at 05:48
  • @phuclv Yes it does. For example, the most significant bit of a double precision number is the sign, the next 11 bits are the exponent and the last 52 bits are the mantissa (with an implied 1 at the beginning, giving 53 bits). If you look at the representation of, say 5 and -5, you'll find that they differ only in the sign bit e.g. 0x4014000000000000 and 0xc014000000000000. – JeremyP Aug 10 '20 at 07:42
  • That's called signed-magnitude. If it's 1's complement then 5 is 0101 and -5 is 1010. In sign magnitude it's 0101 vs 1101 – phuclv Aug 10 '20 at 09:18
  • @phuclv oops, yes you are right. – JeremyP Aug 12 '20 at 07:22
  • "No, that would be the minimum set of values for a signed 16 bit integer. 2's complement is a superset of this" true, but that's exactly the proof why C was written with ones complement in mind. By setting the minimums the way the are, 1's complement is sufficient for C and that's what 'written with 1's complement in ming' means. – Raffzahn Mar 05 '22 at 04:23