There's something I just can't understand about IEEE-754.
The specific questions are:
Which range of numbers can be represented by IEEE-754 standard using base 2 in single (double) precision?
Which range of numbers can be represented by IEEE-754 standard using base 10 in single (double) precision?
Which range of numbers can be represented by IEEE-754 standard using base 16 in single (double) precision?
(The textbook is not in English, so I may not have translated this perfectly, but I hope you get the point.)
The only information the textbook gives is the ranges themselves, without any explanation of how they are calculated. For example:
binary32:
The largest normalized number: $(1-2^{-24})\times 2^{128}$
The smallest normalized number: $1.0\times 2^{-126}$
The smallest subnormal number: $1.0\times 2^{-149}$
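To convince myself the textbook values are real, I checked them in Python (the names below are my own, not from the textbook). Python floats are binary64, which represents these binary32 values exactly, and `struct` lets me round-trip them through an actual 32-bit float:

```python
import struct

# Textbook binary32 values, written out from the formulas above.
largest_normal   = (1 - 2**-24) * 2.0**128   # same value as (2 - 2**-23) * 2**127
smallest_normal  = 2.0**-126
smallest_subnorm = 2.0**-149

def roundtrip32(x):
    # Pack into a real 32-bit float and unpack again; if the value
    # survives unchanged, it is exactly representable in binary32.
    return struct.unpack('<f', struct.pack('<f', x))[0]

assert roundtrip32(largest_normal) == largest_normal
assert roundtrip32(smallest_normal) == smallest_normal
assert roundtrip32(smallest_subnorm) == smallest_subnorm

print(largest_normal)   # ≈ 3.4028235e+38, i.e. FLT_MAX in C
```

All three assertions pass, so the numbers themselves are right; what I'm missing is where they come from.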
I have a test coming up where this kind of question will appear, and I really don't feel like memorizing all of these values. There must be a method to calculate them, but they seem so arbitrary, and that's what confuses me.