As far as I know, even simple RISC microcontrollers have bitshift instructions, and honestly I have only had to use them once, when I had to compute a division on an MCU that could not do divisions in the ALU.
Were the bitshift operators included so that it was possible, even with a very simple ALU, to efficiently compute multiplication, division, and square roots?
I'm wondering this because that is how it was done on mechanical calculators, so it seems plausible to me that the first processors somewhat mimicked existing technologies.
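For reference, this is the kind of shift-based routine I have in mind, covering both the division I had to write on that MCU and the multiplication I am asking about. It is only a minimal C sketch: the function names and the 16-bit operand widths are mine and purely illustrative, not taken from any particular MCU.

    #include <stdint.h>

    /* Shift-and-add multiplication: one add per set bit of the multiplier. */
    uint32_t mul_u16(uint16_t a, uint16_t b)
    {
        uint32_t acc = 0;
        uint32_t m = a;
        while (b) {
            if (b & 1u)
                acc += m;     /* add the shifted multiplicand for this bit */
            m <<= 1;          /* multiplicand doubles each step ...        */
            b >>= 1;          /* ... while the multiplier is consumed      */
        }
        return acc;
    }

    /* Shift-and-subtract (restoring) division, assuming d != 0. */
    uint16_t div_u16(uint16_t n, uint16_t d)
    {
        uint16_t q = 0;
        uint32_t r = 0;       /* wider than 16 bits: the shifted remainder can briefly exceed them */
        for (int i = 15; i >= 0; i--) {
            r = (r << 1) | ((n >> i) & 1u);  /* bring down the next dividend bit */
            if (r >= d) {                    /* trial subtraction               */
                r -= d;
                q |= (uint16_t)(1u << i);    /* set this quotient bit           */
            }
        }
        return q;             /* remainder is left in r if it is needed */
    }

Both loops lean on the shift being a single cheap instruction, which is exactly why I assumed it was put in the instruction set on purpose.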
EDIT
I'm not asking what the uses of bitshift operators are, but why they were included back then. Since every added operation had a cost in components, I imagine the designers were striving for the smallest possible number of components.
So the question is whether the bit shift operators are an innovation added when computers were put on a chip or whether very early CPUs also had these operators. And if so, why were they included in the instruction repertoire of early CPUs? What value did they add? (paragraph proposed by Walter Mitty)
My line of thought was that, since early computers were created to speed up the work done by human computers, who used mechanical calculators (which shift values to perform calculations), it seemed plausible to me that electronic computers were designed to reuse, at least partially, the existing algorithms. I asked this question because I wanted to know whether there is some truth in this or I am completely wrong.

One could compute if(x < 0) x = 0; in a single non-conditional instruction: signed shift the result right by at least 31 places, complement it, then AND that with the original result. So that architecture's shift-for-free-on-every-instruction was an actual specific sales point, for which examples were concocted. – Tommy Jan 31 '19 at 20:22

You can convert x *= 2 to adding a value to itself, or do division with a repeating subtraction. There are various Turing-complete systems (mainly old decimal or hypothetical ones) without binary shift, and they can still do everything. – phuclv Feb 01 '19 at 02:44
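Spelled out, the trick Tommy describes looks roughly like this in C. This is only a sketch: it assumes a 32-bit two's-complement int and an arithmetic right shift of signed values, which is implementation-defined behaviour in C, though virtually every compiler provides it.

    #include <stdint.h>

    /* Branchless "if (x < 0) x = 0;" using only a shift, a complement and an AND. */
    int32_t clamp_negative_to_zero(int32_t x)
    {
        int32_t mask = x >> 31;   /* all ones when x is negative, all zeros otherwise  */
        return x & ~mask;         /* negative x becomes 0, non-negative x is unchanged */
    }

On an architecture that can fold a shift into every ALU operation, this collapses into a couple of instructions with no branch at all.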