Say fs = 1000 and Ts = 0.001. Would it be faster to compute Ts at the beginning and subsequently multiply by 0.001 instead of dividing by 1000 when computing frequency-dependent quantities?
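For concreteness, a small sketch of the two alternatives being compared (the frequency value and variable names below are only illustrative):

fs = 1000;            % sample rate in Hz
Ts = 1/fs;            % precomputed sample period
f  = 123.4;           % some frequency of interest
w_div = 2*pi*f/fs;    % divide by fs each time it is needed
w_mul = 2*pi*f*Ts;    % multiply by the precomputed Ts instead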
1 Answer
Generally, it makes sense to ensure that your code is logically correct, numerically well-behaved, intuitive to read, and tested. That is hard enough. Only when you observe that some inner loop or library call is a real hotspot that affects the functionality of your software does it make sense to rewrite code for speed, and then you should always profile before and after.
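As a sketch of such a measurement (the array size and variable names here are my own, and timings will vary by machine and MATLAB version), one could compare the two forms directly:

fs = 1000;
Ts = 1/fs;
x  = rand(1, 1e6);                          % some large block of samples
t_div = timeit(@() x / fs);                 % divide by the sample rate
t_mul = timeit(@() x * Ts);                 % multiply by the precomputed period
fprintf('divide: %.3g s, multiply: %.3g s\n', t_div, t_mul);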
If a constant is known at compile time, the compiler may substitute the division with a multiplication by the reciprocal, provided this stays within precision constraints and runs faster on the given target. If possible, I would rather outsource that complexity to the compiler.
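To illustrate the precision constraint (this example is my own, not part of the original answer): when the sample rate is a power of two, the reciprocal is exactly representable and multiplying by it gives the same result as dividing; for fs = 1000 it is not, so some results may differ in the last bit:

x = (1:1000)';
isequal(x/1024, x*(1/1024))   % expected true: 1/1024 is exact, so the results match
nnz(x/1000 ~= x*(1/1000))     % typically nonzero: a few values differ by one ulp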
Edit:
It is not "technically the same operation". To see why, have a look at this MATLAB snippet:
a = single(10.0)
b = 1/a
c = 42/a
d = 42*b
c-d
ans =
single
-4.7684e-07
Since floating-point arithmetic operates with finite precision and intermediate rounding, the order of operations does matter. Depending on compiler flags, the compiler may be allowed to re-order floating-point arithmetic even though the result will differ to some degree.
If we look at the binary representations, we see that they differ in the least significant bit (LSB):
dec2bin(typecast(c,'uint32'),32)
dec2bin(typecast(d,'uint32'),32)
ans = '01000000100001100110011001100110'
ans = '01000000100001100110011001100111'
-k
You pointed out something important: It depends on the compiler. Since it is technically the same operation, a good compiler would choose the best way of accomplishing it. – neolith Mar 13 '21 at 14:20
T_s * k_i * error; use k_i * error with k_i appropriately scaled. – TimWescott Mar 11 '21 at 20:51
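A minimal sketch of what that comment suggests (the gain value and error sequence are placeholders of my own): fold Ts into the integral gain once, instead of multiplying by Ts on every sample:

Ts   = 1e-3;                         % sample period
k_i  = 5;                            % continuous-time integral gain (example value)
k_id = k_i * Ts;                     % pre-scaled discrete gain, computed once
err  = randn(1, 100);                % placeholder error sequence
integ = 0;
for n = 1:numel(err)
    integ = integ + k_id * err(n);   % one multiply per sample instead of two
end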