The same floating-point AMD X86-64 digital signal processing system mentioned in my previous question has a problem where it sometimes slows down substantially when signals attain values very near (but not exactly) zero.
The problem is that denormalized floating-point values require special handling by the CPU that is dramatically slower than operating on normal floating-point values. This can cause the DSP system to run too slowly, taking longer than $1/f_s$ to complete all the processing required for one sample period.
A workaround is to add a small offset to all numbers to force them into the range of normal numbers. Is there a way to instead instruct the FPU to simply not generate denormal numbers in the first place?
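For reference, the offset workaround I'm using looks roughly like this (the filter and the constant are just illustrative):

```c
/* Rough sketch of the offset workaround (constants and the filter
 * itself are illustrative).  A tiny constant, well above the
 * single-precision denormal range (~1.2e-38), is added each sample so
 * the decaying filter state never drifts into denormal territory. */
static inline float denormal_guard(float x)
{
    return x + 1.0e-18f;
}

/* Hypothetical one-pole filter showing where the guard is applied. */
static float onepole(float in, float *state, float a)
{
    *state = denormal_guard(in + a * (*state));
    return *state;
}
```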
The OS is Linux and the compiler is gcc.
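To make the question concrete: I have seen references to flush-to-zero (FTZ) and denormals-are-zero (DAZ) bits in the SSE control register (MXCSR), and something like the following is the kind of thing I have in mind, but I don't know whether it is the right or complete way to do it on this platform (the headers and intrinsics below are my guess at the mechanism, not something I have verified):

```c
#include <xmmintrin.h>  /* _MM_SET_FLUSH_ZERO_MODE, _MM_FLUSH_ZERO_ON */
#include <pmmintrin.h>  /* _MM_SET_DENORMALS_ZERO_MODE, _MM_DENORMALS_ZERO_ON */

/* Set the FTZ and DAZ bits in MXCSR for the calling thread.  On x86-64,
 * gcc does float/double arithmetic in SSE registers, so this should
 * affect the DSP code, but presumably it has to be called in every
 * thread that does the math. */
static void disable_denormals(void)
{
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);         /* denormal results flushed to zero */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON); /* denormal inputs treated as zero  */
}
```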
EDIT: Also, what are the numerical consequences of disabling denormal numbers?