Why does this code print nonsense values? And if the output does make sense, what is it?
printf("%d\n", 5.0 / 4);
By the way, I know about format specifiers and that I should be using %f instead of %d, but I want to know what C actually does here.
Strangely, every time I run the compiled program, it prints a different value. Doesn't it have to be deterministic?
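In case the surrounding code matters, here is the complete program I'm describing, with what I believe is the correct version alongside for contrast:

#include <stdio.h>

int main(void)
{
    /* The line in question: %d with a double argument. */
    printf("%d\n", 5.0 / 4);

    /* What I would normally write instead: */
    printf("%f\n", 5.0 / 4);        /* prints 1.250000 */
    printf("%d\n", (int)(5.0 / 4)); /* explicit truncation, prints 1 */

    return 0;
}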
As far as I can tell, this code prints a similar kind of value:
float c;
printf("%d\n", &c);
Are the two related?
And when I tried:
float c;
printf("%d\n%d\n", c, &c);
There is a constant difference of 252 between those two values. 256 - sizeof(float), maybe? And declaring c as a double makes the difference 0.
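For anyone who wants to reproduce this, here is the full test I'm running; the %f/%p lines are my own addition, using what I assume are the matching specifiers for comparison:

#include <stdio.h>

int main(void)
{
    float c = 0.0f; /* initialized so the only oddity is the specifier mismatch */

    /* The calls from my experiment (wrong specifiers, on purpose): */
    printf("%d\n%d\n", c, &c);

    /* The same two values printed with matching specifiers: */
    printf("%f\n%p\n", c, (void *)&c);

    return 0;
}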
Thanks in advance!
UPDATE: Running the same code on different machines yielded different results (the 252 becomes 56; the former is a 64-bit Ubuntu machine and the latter is 64-bit OS X).
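Since my 256 - sizeof(float) guess depends on the type sizes, here is a quick check I would run on both machines (I expect 4/8/8 on both, being 64-bit, but I may be wrong about that):

#include <stdio.h>

int main(void)
{
    /* Type sizes on this machine; my 256 - sizeof(float) guess assumes these. */
    printf("sizeof(float)  = %zu\n", sizeof(float));
    printf("sizeof(double) = %zu\n", sizeof(double));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}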