
I was reading about the IEEE 754 single-precision binary floating-point format (binary32) when I ran into this:

The IEEE 754 standard specifies a binary32 as having:

  • Sign bit: 1 bit
  • Exponent width: 8 bits
  • Significand precision: 24 bits (23 explicitly stored)

This gives from 6 to 9 significant decimal digits of precision.

I'm not really sure how this was calculated. Could you please explain?

1 Answer


Firstly, $\log_2(10) \approx 3.32$, so you need about that many bits per digit. So, you'd expect about $24/\log_2(10) \approx 7.2$ digits of precision, but that misses the trickiness here. For instance, consider the IEEE number $2^0 \times 1.000 000 000 000 000 000 000 00$ where we interpret the representation as being in binary. We would typically render this as $1.0$, but how many $0$s can we actually guarantee?

Well, that number could represent anything in the interval $[1 - 2^{-24},\ 1 + 2^{-24})$, where $1 + 2^{-24} \approx 1.0000000596$, so we're okay to $7$ significant figures here.
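
To see that concretely, here is a quick check of my own (a sketch, using Python's `struct` module to round through binary32) showing that decimal values this close to $1.0$ collapse onto the same binary32 value:

```python
import struct

def round_to_binary32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Anything closer to 1.0 than about 2^-24 rounds to the same binary32 as 1.0,
# so an 8th significant decimal digit can't be trusted here...
print(round_to_binary32(1.00000003) == round_to_binary32(1.0))   # True
# ...but one step in the 7th significant digit is still distinguishable.
print(round_to_binary32(1.0000001) == round_to_binary32(1.0))    # False
print('%.10f' % round_to_binary32(1.0000001))                    # 1.0000001192
```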

However, the precision isn't going to be the same everywhere. There are places where the binary and decimal representations mesh well, and you get some extra digits, but there are places where they mesh poorly, and you need more bits than usual per digit. Working out where these are would be a good thing to make the machine do itself: $2^{32}$ is only about 4 billion possibilities, and only 2 billion if you don't care about sign.
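
For what it's worth, here is a rough sketch of that brute-force idea (the helper names are mine, and it only samples random bit patterns rather than sweeping all $2^{32}$ of them, which a compiled language would handle comfortably): for each sampled binary32 value it finds the fewest significant decimal digits that survive a decimal round trip.

```python
import random
import struct

def bits_to_binary32(bits):
    """Reinterpret a 32-bit pattern as a binary32 value (returned as a Python float)."""
    return struct.unpack('f', struct.pack('I', bits))[0]

def round_to_binary32(x):
    """Round a Python float to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def digits_needed(x):
    """Fewest significant decimal digits whose round trip recovers x exactly."""
    for d in range(1, 18):
        if round_to_binary32(float('%.*g' % (d, x))) == x:
            return d
    return None

random.seed(0)
fewest, most = 18, 0
for _ in range(100_000):
    # Positive, finite, normal values only: exponent field between 1 and 254.
    bits = random.randrange(1 << 23, 0xFF << 23)
    d = digits_needed(bits_to_binary32(bits))
    fewest, most = min(fewest, d), max(most, d)
print(fewest, most)   # with enough samples the maximum hits 9, the top of the quoted range
```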

user24142
  • Why are you assuming rounding down instead of rounding to nearest? – celtschk Aug 22 '15 at 08:43
  • Now that you mention it, round to nearest makes more sense. I'll edit. – user24142 Aug 22 '15 at 08:45
  • Hi, @user24142! Thanks for the answer! But I'm afraid I'm still having a hard time understanding how this answers my question. I'm actually wondering why the wiki page mentions from 6 to 9 significant decimal digits specifically. But this is a good point too, as apparently we could have up to 7 significant decimal digits since we have 24 bits. –  Aug 22 '15 at 12:58