10

As far as I know, even simple RISC microcontrollers have a bitshift operator, and honestly I have had to use it only once, when I had to compute a division on an MCU that could not do division in its ALU.

Were the bitshift operators included so that it was possible, even with a very simple ALU, to efficiently compute multiplication, division and square roots?

I'm wondering this because that is how it was done on mechanical calculators, so it looks plausible to me that the first processors somewhat mimicked existing technologies.

EDIT

I'm not asking what the uses of bitshift operators are, but why they were included back then. As every added operation had a cost in components, I imagine the designers were striving for the smallest possible number of components.

So the question is whether the bit shift operators are an innovation added when computers were put on a chip or whether very early CPUs also had these operators. And if so, why were they included in the instruction repertoire of early CPUs? What value did they add? (paragraph proposed by Walter Mitty)

My line of thought was that, since early computers were created to speed up the work done by human computers, who used mechanical calculators (which shift values to perform calculations), it seemed plausible to me that electronic computers were designed to reuse, at least partially, the existing algorithms. I asked this question because I wanted to know whether there is some truth in this or I was completely wrong.

Onirepap
  • This is a good question. But I'm not sure how this is specific to Retrocomputing. Modern CPUs still have bit-shift instructions. – manassehkatz-Moving 2 Codidact Jan 31 '19 at 18:51
  • Shifts not only survive on RISC architectures, but if memory serves then one of the look-at-me examples on an ARM way back when was that you can implement if(x < 0) x = 0; in a single non-conditional instruction: signed shift the result right by at least 31 places, complement it, then and that with the original result. So that architecture's shift-for-free-on-every-instruction was an actual specific sales point, for which examples were concocted. – Tommy Jan 31 '19 at 20:22
  • Many microprocessors and MCUs can only shift by one bit, which for a left shift may be no more efficient than adding. And what about rotate? – Bruce Abbott Jan 31 '19 at 20:56
  • The lack of a barrel shift on the 6502 was very painful when I was working on data compression code... not only did I need to shift bits, I really wanted to shift by more than one at a time. Graphics is another area where they're handy, e.g. converting to and from 16-bit formats like RGB565. – fadden Jan 31 '19 at 21:29
  • What do you mean 'was'? They're still there. – user207421 Jan 31 '19 at 22:47
  • Without a bitshift, how would you implement binary long division at all? – user253751 Jan 31 '19 at 22:52
  • @manassehkatz good point, but you could focus the question on why were they implemented back then? (the historic reason) which would make it arguably on-topic. I guess many of the uses weren't thought of back then. But then, addressing the OP: how would you ever write a non-trivial program in assembler without bit-shifting? ;) – Felix Palmen Jan 31 '19 at 23:52
  • Harvard Mark I (as far as I can tell) had shift instructions. ENIAC had shift instructions. They've been there from day 1 because they're useful (and trivial to implement a single shift or rotate) – Kelvin Sherlock Feb 01 '19 at 00:29
  • @immibis long division can be done with only left shifts. But a left shift by one can be done with x += x. That makes both multiplication and division a lot easier. In fact modern compilers do occasionally optimize x *= 2 to adding a value to itself. Or you can do division with repeated subtraction. There are various Turing-complete systems (mainly old decimal or hypothetical ones) without binary shift and they can still do everything. – phuclv Feb 01 '19 at 02:44
  • @user207421 : yeah, but today's kids don't care whether their simple program to add two integers together consumes hundreds of MB of RAM and needs a quad core CPU to even start... so why would they feel the need to know how the CPU works internally? – vsz Feb 01 '19 at 05:37
  • @FelixPalmen That was exactly my question: the original intended use for those instructions. I always thought that originally it came natural to design the ALUs with the same features of mechanical calculators (where you have an input register, a counter, an output register and a way to shift values). – Onirepap Feb 01 '19 at 09:13
  • @Onirepap A mechanical calculator is made to perform calculations on numbers. An ALU isn't a calculator, but a device to do numeric operations as well as logic operations, and shifting is one of them. – Raffzahn Feb 01 '19 at 14:25
  • @manassehkatz While I agree that this question isn't exactly right here, your argument misses the point: just because something is also present in modern CPUs doesn't make the question per se off-topic. More relevant, it is not tied to any specific development or reasoning, but is a more generic question about computing at all. Something that makes it even rather off-topic on StackOverflow, but quite on topic for ComputerScience.SE, doesn't it? – Raffzahn Feb 01 '19 at 14:29
  • @FelixPalmen RC.SE? Not really, unless one comes up with a true historic based reasoning, I'd say it's a generic question about usefulness of certain operations in the context of ComputerScience and thus be moved there. – Raffzahn Feb 01 '19 at 14:32
  • @Raffzahn I think we disagree on the nuances but agree on the point: move to Computer Science. – manassehkatz-Moving 2 Codidact Feb 01 '19 at 15:04
  • @manassehkatz Agreed to disagree in agreement ... or something along that line :)) – Raffzahn Feb 01 '19 at 15:20
  • @Raffzahn Which reminds me of one of my pet peeves about StackExchange (in general, not just RC) - closing as "should be elsewhere" only gives an option of the matching Meta but not any related forums - e.g., for RC it would be helpful to have the option (without typing it in manually) of recommending a move to Computer Science or StackOverflow or SuperUser. – manassehkatz-Moving 2 Codidact Feb 01 '19 at 15:25
  • @manassehkatz AFAIK it's a feature that will only be enabled after we leave Beta. Eventually the only relevant one. – Raffzahn Feb 01 '19 at 15:33
  • @Raffzahn I'm not sure - I think I've had the same problem on DIY. I just checked over there - Closing - Off-topic - Migration only gives "Meta DIY" as an option. Then again, maybe nobody ever bothered to ask for anything else. – manassehkatz-Moving 2 Codidact Feb 01 '19 at 15:36
  • @manassehkatz They might have to be added on purpose, so maybe their moderators never cared? But you may want to ask someone who is more into management than technology than me :) – Raffzahn Feb 01 '19 at 15:47
  • @Raffzahn One of the things you're meant to do when a site graduates is ask on [meta] what sites would be useful for migration; picking up when people are annoyed about not having migration paths and re-asking is just another one of those jobs. – wizzwizz4 Feb 01 '19 at 16:48
  • Well, one core misunderstanding might be buried here: "My line of thought [...] early computers were created to speed up the work done by human computers [...]". While it is true that very early examples were made for calculation purposes, the widespread use starting around 1950 was not to replace such human computers, but to replace, or better, speed up card processing - data handling that had already been mechanized by punch card machines since the turn of the century. That's where the money was, and what the computers were built and sold for. ... – Raffzahn Feb 02 '19 at 18:46
  • ... There were maybe a few hundred jobs worldwide where calculation was important, but hundreds of thousands of companies, from phone companies to city utilities, with the need to do a monthly calculation of (meter reading times unit price plus base price, plus city tax) times sales tax for each and every customer. Worldwide a task that specialized machines had been created for (like the Dehomag BKZ, which even had that formula built as a separate gearbox added to a BK, so cards delivered the variables and new cards (or printouts) could be generated). ... – Raffzahn Feb 02 '19 at 18:52
  • ... Computers made that process more flexible and at the same time much faster, resulting in a way lower price. Give me one manager not going to get such a wonderful machine. Science (and even the military) was more of a nice business, well paid, but still dwarfed by commercial sales. Today, when looking back, we mostly see the science work (and eventually some declassified military work), as these people wrote papers - after all, it's their job - and those papers survive. Company documentation stayed within companies, and got thrown out when the machine got removed. So the evidence is uneven here. – Raffzahn Feb 02 '19 at 18:56
  • I count 4 shift instructions in EDSAC's Initial Orders 2. If David Wheeler can find it useful, that's good enough for me :-) – dave Feb 03 '19 at 14:48

5 Answers

41

Some uses for a bitshift operation:

  • implementing a more efficient multiplication than repeated adding
  • implementing division algorithms
  • implementing an algorithm for exponentiation of integers by other integers
  • bitwise algorithms, e.g. "how many one bits are set in this integer?" (see the sketch below this list)
  • fast multiplication and division by powers of 2, including indexing arrays of integer types (whose element sizes tend to be powers of 2 bytes).
  • sending and receiving data on serial lines.
  • manipulating bit fields within larger types.
  • extracting portions of a combined type and aligning them into new types (Raffzahn)
  • manipulating the fields of a software floating point implementation (e.g. aligning the mantissa for addition and normalising the same after any arithmetic operation) (Tommy)
  • Accessing the digits in packed BCD formats (i.e. two digits per byte) (Raffzahn)
  • Aligning the decimal point in fixed point arithmetic (davidbak)
  • cryptography (forest, tofro)
  • Aligning rasterised packed bitmaps when blitting them to screen memory (user3570736)

These are just off the top of my head. I'm sure you'll get many answers that include other uses for bit shifts.
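As a concrete illustration of the "how many one bits" bullet above, here is a minimal sketch in C; the function name and the choice of an 8-bit operand are my own, not taken from any particular machine or library:

    #include <stdint.h>

    /* Count the one bits in a byte using only shifts, masks and adds.
       A naive loop; real code might use a table or a parallel reduction. */
    static unsigned popcount8(uint8_t x)
    {
        unsigned count = 0;
        while (x != 0) {
            count += x & 1u;   /* test the lowest bit              */
            x >>= 1;           /* shift the next bit into position */
        }
        return count;
    }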

I'm not asking what the uses of bitshift operators are, but why they were included back then

As I hope you can see from the list, even "back then" bit shift operations were regarded as extremely useful. In particular, it would be hard to write an efficient software multiplication routine without bit shifts.
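To make that last point concrete, a shift-and-add multiply of the kind a software routine on a CPU without a multiply instruction would use looks roughly like this (a sketch of my own in C, not code from any specific machine):

    #include <stdint.h>

    /* Multiply two 8-bit values using only shifts and adds, mirroring the
       classic software multiply loop on CPUs without a multiply instruction. */
    static uint16_t mul8x8(uint8_t a, uint8_t b)
    {
        uint16_t product = 0;
        uint16_t addend  = a;      /* the multiplicand, shifted left each pass */
        while (b != 0) {
            if (b & 1u)            /* is the low bit of the multiplier set?    */
                product += addend;
            addend <<= 1;          /* next power-of-two multiple of a          */
            b >>= 1;               /* move on to the next multiplier bit       */
        }
        return product;
    }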

JeremyP
  • Not to forget the even more important task of extracting portions of a combined type and aligning them into new types. That included everything from bitfields to BCD. – Raffzahn Jan 31 '19 at 19:56
  • Agreeing entirely with Raffzahn, I can't imagine a software floating-point implementation that would be feasible without bit shifts. – Tommy Jan 31 '19 at 20:19
  • @Tommy I think those are technically included in my last bullet point although they may be large enough areas to merit a new category. – JeremyP Jan 31 '19 at 20:46
  • @JeremyP Even without FP. Think BCD - to transform a byte containing two BCD digits into two ASCII chars - or the other way around - needs shifting by 4 (and masking). – Raffzahn Jan 31 '19 at 21:01
  • "Double-dabble" BCD conversion needs bitshifts, also do LFSRs (e.g. used for checksumming or simple PRNGs) -- so yes, the list can be expanded :) – Felix Palmen Jan 31 '19 at 22:46
  • It's also extremely useful in cryptography, which is common in DRM. – forest Jan 31 '19 at 23:21
  • Fixed point integer arithmetic! Those machines didn't have hardware floating point! – davidbak Feb 01 '19 at 07:11
  • Anything that involves feedback shift registers, used for random numbers and especially cryptography. – tofro Feb 01 '19 at 08:23
  • You might want to add compression. Variable-length codings like Huffman coding (invented in 1952, used in e.g. deflate/zip files) have been widely used since the dawn of computing. – Christoph Feb 01 '19 at 12:09
  • @JeremyP When adding credits (which is cool), it might be helpful to add them to the one adding a point (first). – Raffzahn Feb 01 '19 at 13:38
  • Even the lowly UART has a bit shifting operation! – Walter Mitty Feb 01 '19 at 14:07
  • @WalterMitty That's a good one. At least for a microprocessor, where everything is cut to the minimum, byte serialisation to feed a primitive serial line is quite a useful extension. – Raffzahn Feb 01 '19 at 14:33
  • @Raffzahn which one did I get wrong? Belay that, it was the BCD one wasn't it. – JeremyP Feb 01 '19 at 20:00
  • @WalterMitty Already got shift registers for sending and receiving serial data. – JeremyP Feb 01 '19 at 20:04
  • Many if not most home computers with graphical output pack multiple pixels per byte into their display buffers. They need shift operations to display an image at an unaligned position. – user3570736 Feb 03 '19 at 13:59
11

Then you haven't done any embedded work, or anything to do with comms or protocols.

If you're working with embedded code, you need a way to get values into specific bits in a register. This is true even for the base processor, but especially so for microcontrollers which have extra registers to control the I/O, and for data going to and from devices such as ADCs and DACs. Bitwise operations are essential here, including bit shifting.

For an example of where this is important, I've recently been working with an ADC where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you turn a channel number (0-7) into the appropriate bits in this register, or vice versa, without a bitshift? You could use integer multiplies and divides, but that would be stupidly wasteful given the time those instructions take. (And under the surface, those instructions are using bitshifts themselves anyway!) Technically you could also use a case statement or a lookup table for each channel number, but that doesn't scale well - don't try using a 16-million-term case statement to extract individual colours from a 24-bit RGB value!
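To illustrate, here is roughly what that looks like in C. The register layout is the one described above (channel number in the top 3 bits of a 24-bit word), but the macro and function names are made up for this sketch:

    #include <stdint.h>

    /* Hypothetical layout: the channel number occupies bits 23..21. */
    #define ADC_CHANNEL_SHIFT  21u
    #define ADC_CHANNEL_MASK   (0x7u << ADC_CHANNEL_SHIFT)

    /* Turn a channel number 0-7 into the field value, e.g. 7 -> 0xE00000. */
    static uint32_t adc_channel_bits(unsigned channel)
    {
        return ((uint32_t)channel << ADC_CHANNEL_SHIFT) & ADC_CHANNEL_MASK;
    }

    /* And back again: extract the channel number from a register value. */
    static unsigned adc_channel_from_reg(uint32_t reg)
    {
        return (unsigned)((reg & ADC_CHANNEL_MASK) >> ADC_CHANNEL_SHIFT);
    }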

In the same way, comms protocols regularly pack data into individual bits, especially in packet headers. Again, you need a way to get a number from a variable into any arbitrary group of bits in your protocol bytes, or vice versa. And the same problem has the same solution - you need bitshifting.
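As a sketch of that kind of packing, here is a made-up one-byte header with a 3-bit type field, a flag and a 4-bit length; the layout and the function names are invented purely for illustration:

    #include <stdint.h>

    /* Pack the fields of a hypothetical header byte: TTTFLLLL. */
    static uint8_t pack_header(unsigned type, unsigned flag, unsigned len)
    {
        return (uint8_t)(((type & 0x7u) << 5) |
                         ((flag & 0x1u) << 4) |
                          (len  & 0xFu));
    }

    /* Unpack them again on the receiving side. */
    static unsigned header_type(uint8_t hdr) { return hdr >> 5; }
    static unsigned header_flag(uint8_t hdr) { return (hdr >> 4) & 0x1u; }
    static unsigned header_len(uint8_t hdr)  { return hdr & 0xFu; }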

Graham
  • "where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you get a value in there without a bitshift?" -- bit masking? (and, or, xor) – Felix Palmen Feb 01 '19 at 00:01
  • @FelixPalmen Bit masking gets rid of the rest, but if you want to select those 3 bits and then (essentially) treat them as a number by themselves, you need to shift them. You could divide them, but a shift is a whole lot faster - and depending on what you need, you might not even need to bother with the bit masking. – manassehkatz-Moving 2 Codidact Feb 01 '19 at 01:26
  • @FelixPalmen If I start with values 0x000000, 0x200000, 0x400000, 0x600000 to 0xE00000, then I can put that value into the top 3 bits using bitwise AND/OR. But suppose I start with the channel number 0, 1, 2, 3 to 7. How do I turn the value 7 into 0xE00000 or vice versa? – Graham Feb 01 '19 at 05:15
  • @FelixPalmen Bit masking is not enough. You either need a big if/else or case block, integer division or bit shift IN ADDITION TO BIT MASKING. – slebetman Feb 01 '19 at 08:02
  • To all of you: please re-read my comment ;) "how would you get a value in there" has a simple answer: with bit masking. I'm just telling that this text IMHO isn't precise enough. – Felix Palmen Feb 01 '19 at 09:07
  • Also, many MCUs have some instructions for direct bit manipulation, so that is another way if there is no need for the operation to be atomic. – Onirepap Feb 01 '19 at 09:20
  • @FelixPalmen OK, gotcha - that's a fair point. Improved my answer accordingly. – Graham Feb 01 '19 at 11:20
  • @Onirepap If you only need a single bit, sure. If you need the top 3 bits though, that's less easy. Unless you do "read bit 23 from register A, store result in bit 2 of register B, read bit 22 from register A, store result in bit 1 of register B, read bit 21 from register A, store result in bit 0 of register B". At which point you've effectively recreated an atomic read-and-bitshift with a less efficient, more complex set of instructions. Technically possible, I guess, but not a great plan. – Graham Feb 01 '19 at 11:24
8

In the days before floating point hardware there was fixed-point integer arithmetic! Bit shift instructions were used to implement the "scaling factor" - specifically, to adjust it when you multiplied (or divided) numbers and then had to rescale the number to achieve the desired precision. (Or when adding and subtracting if the operands had different scale factors.)
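As a small sketch of that rescaling step in C (the Q8.8 format here is just an example I picked for illustration, not the format of any particular machine):

    #include <stdint.h>

    /* Fixed-point multiply in Q8.8 (8 integer bits, 8 fraction bits).
       The 32-bit product carries 16 fraction bits, so it is shifted
       right by 8 to restore the Q8.8 scaling factor. */
    static int16_t q8_8_mul(int16_t a, int16_t b)
    {
        int32_t product = (int32_t)a * (int32_t)b;   /* 16 fraction bits */
        return (int16_t)(product >> 8);              /* rescale to Q8.8  */
    }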


... and now a small break for a fun anecdote:

My first professional programming job involved working on a Honeywell 716 minicomputer. 16-bit word size. It had hardware floating point but it was very slow. It had left shift and right shift instructions but the shift amount was an immediate (4-bit) value in the instruction - there was no instruction or addressing mode where that value was in a register. I was doing a bunch of fixed point arithmetic for signal processing on inputs that had a large dynamic range. There was no single scaling factor I could use for a certain multiply because whatever scaling factor I picked would either overflow for large inputs or lose all precision for small inputs. So ... I computed the necessary scaling factor from the input values and poked it directly into the shift instruction I was about to execute! (Couldn't use a jump table to the correct shift instruction or duplicate code blocks: I was also under a tight memory budget; those 716s only had 32Kwords of memory total.)

First job and I had to break all the rules. Just graduated college and to get approval for that design and code I had to convince the top staff engineer so he could go to the Navy and get their approval.

davidbak
  • "In the days before floating point hardware..." and in the days long after floating point hardware. In 2006/2007 I worked in a games company where floating point arithmetic was forbidden for portability reasons. – Peter Taylor Feb 01 '19 at 10:47
  • ... and in the days with FPGAs. I worked on a project from 1999-2001 to develop a low-cost car engine controller, using a 16-bit processor without floating-point support. Lots of fixed-point arithmetic. From then on, all processors I've used have had floating-point support and I breathed a sigh of relief at never needing fixed-point again. Until 3 years ago when I got involved with FPGA work - and suddenly everything is back in fixed-point processing again! – Graham Feb 01 '19 at 14:16
  • @Graham - but check out Unums/Posits! Maybe some fresh air is coming to FPGA/ASIC work? – davidbak Feb 01 '19 at 14:18
7

Along with the many uses of bitshifting (multiplying and dividing by powers of 2, bitmasks, etc.), bitshifting is also very cheap to implement: a simple logical shifter needs only one row of multiplexers per bit of the shift amount, for each shift direction. Adding arithmetic shifts is easy too: arithmetic left shift is identical to logical left shift, while arithmetic right shift only requires filling the vacated bits with the MSB (the sign bit).

Designers of old CPUs like the Z80 and MOS 6502 could afford to put in a little bit of extra logic for shifting, while a hardware multiplier was significantly more resource-intensive and thus expensive. For example, only the later 8-bit CPUs had hardware multiplication (notably the Motorola 6809; the soon-to-be-released 16-bit Motorola 68000 had multiplication instructions as well).

Here's a straightforward 3-bit logical left shifter (only three 8-bit multiplexers are needed):

[figure: a 3-bit logical left shifter built from three 8-bit multiplexer stages]
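Modelled in software, the staged-multiplexer idea looks something like the following C sketch (my own rendering, not derived from the figure): each stage either passes the value through or shifts it by a fixed power of two, selected by one bit of the shift amount.

    #include <stdint.h>

    /* Software model of a 3-stage logical left shifter for a byte:
       each 'if' corresponds to one row of 2:1 multiplexers selected
       by one bit of the 3-bit shift amount. */
    static uint8_t shl_staged(uint8_t value, unsigned amount)
    {
        if (amount & 1u) value = (uint8_t)(value << 1);  /* stage 0: shift by 1 */
        if (amount & 2u) value = (uint8_t)(value << 2);  /* stage 1: shift by 2 */
        if (amount & 4u) value = (uint8_t)(value << 4);  /* stage 2: shift by 4 */
        return value;
    }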

qwr
  • Using a three-level multiplexing arrangement will allow the job to be done with fewer transistors than using a barrel shifter (eight one-of-eight deciders, activated by driving one of eight enable wires). – supercat Feb 01 '19 at 01:34
  • ...but will in practice often take more space because of the need to run parallel groups of wires between top-half and bottom-half muxes. – supercat Feb 01 '19 at 01:41
  • FWIW: The 6502 only had single-bit shifts and rotates, which are even easier to implement in hardware. (If you wanted to shift by multiple bits, you'd simply repeat the operation.) –  Feb 03 '19 at 21:20
  • Arithmetic left shift differs from logical left shift if condition codes are set. System/Z sets overflow flag on SLA but not on SLL. M68k: ASL sets V if any inversion of MSB; LSL clears V. I think it is better than Intel x86 lazy approach. – Netch Nov 26 '21 at 12:37
-1

The fundamental unit of digital information is the bit. Bytes, words, pages, blocks, sectors, tracks, and what have you are all made up of bits. Just about everything that can be done with information can be built up from operations on bits.

Since at least as far back as the 1940s, numbers in computers have usually been represented as sets of bits. Thus the number 23 might be represented as [16, 4, 2, 1]. This is economical both for representation and manipulation.

Just about every processor has special operators for sensing and acting on the sign bit. I'm thinking about instructions like "transfer on minus". If you combine this with instructions for rotating or shifting bits in a register, you can build up just about any complex operation imaginable. The more common operations, like adding two numbers, are going to have instructions, even in a RISC processor. The less common operations are going to be done by bit manipulation.
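The comment thread under the question gives an example of exactly this: clamping a negative value to zero with a sign-propagating shift and no branch. A C rendering of that trick (assuming 32-bit two's-complement integers and an arithmetic right shift on signed values) might look like:

    #include <stdint.h>

    /* Branchless "if (x < 0) x = 0;": the arithmetic right shift smears the
       sign bit across the word; complementing it gives an all-ones mask for
       non-negative inputs and an all-zeros mask for negative ones. */
    static int32_t clamp_negative_to_zero(int32_t x)
    {
        int32_t mask = ~(x >> 31);   /* -1 (all ones) if x >= 0, 0 if x < 0 */
        return x & mask;
    }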

@JeremyP has an excellent list of bit operations in his answer. It's only a partial list. I'll add telephone line management. If you have a bunch of telephone lines, it might be fairly easy to supply a processor with a set of bits, one bit for each line, where a one bit means, "on hook status has changed".

You can do just about anything, one little bit at a time.

Walter Mitty
  • Setting sign flag after operations shouldn't actually require shifting instructions, it's implemented directly – qwr Feb 01 '19 at 17:34