
In a computer you can't store every real number you might want to, because the $[0.0;1.0]$ interval alone contains infinitely many numbers and computer memory is finite. I want to demonstrate this with examples, which is why I need formulas that perform some nontrivial calculations and, algebraically, return the initial number. Mathematically the result should be unchanged, but the computer will calculate and return a different number. I want the difference to be as large as possible.

Sample expressions could be $f(x) = x^2/x$ or $(x + x)/2$, but of course these are too simple.

– Start wearing purple
– TIKSN

2 Answers


In languages that use IEEE floating point, you can get counterintuitive results with even something as simple as $$1.0-0.3-0.3-0.3-0.1 = -2.77556\times 10^{-17} \neq 0$$

and so for $f(x) = x + 1.0-0.3-0.3-0.3-0.1$, you will have $f(0)\neq 0$.
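The same arithmetic can be checked in any language with IEEE-754 doubles; here is a quick Python sketch (the helper `f` is just an illustration of the formula above):

```python
# Algebraically 1.0 - 0.3 - 0.3 - 0.3 - 0.1 is exactly 0, but 0.3 and 0.1
# have no exact binary representation, so rounding error accumulates.
residue = 1.0 - 0.3 - 0.3 - 0.3 - 0.1
print(residue)  # a tiny nonzero number, about -2.78e-17

def f(x):
    # f(x) = x + 1.0 - 0.3 - 0.3 - 0.3 - 0.1, which should equal x exactly
    return x + 1.0 - 0.3 - 0.3 - 0.3 - 0.1

print(f(0.0) != 0.0)  # True
```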

– user7530
  • Thanks @user7530 it works. – TIKSN Oct 19 '13 at 23:24
  • Note that decimals don't suffer from this. It is a direct effect of doubles/floats being binary numbers, which means 0.3 can't be represented exactly, creating this effect. Decimal, on the other hand, is what it says it is: a floating-point type based on the decimal system, in which 0.3 can be represented exactly. This is one of many reasons why doubles are a poor choice for representing money. – Just another metaprogrammer Oct 21 '13 at 13:10
  • @FuleSnabel Yes, exactly. In fact I would argue that money should never be represented using anything but integers (fixed-point decimals). – user7530 Oct 21 '13 at 13:34
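The point raised in the comments about decimal floating point can be verified with Python's `decimal` module (a sketch; C#'s `decimal` type behaves similarly):

```python
from decimal import Decimal

# In binary floating point the algebraically-zero expression leaves a residue...
binary_result = 1.0 - 0.3 - 0.3 - 0.3 - 0.1
print(binary_result)  # nonzero, about -2.78e-17

# ...but 0.3 and 0.1 are exactly representable in decimal floating point,
# so the same expression really is zero.
decimal_result = (Decimal("1.0") - Decimal("0.3") - Decimal("0.3")
                  - Decimal("0.3") - Decimal("0.1"))
print(decimal_result)  # 0.0
```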

I read an interesting example in Kahan's article: http://www.cs.berkeley.edu/~wkahan/Mindless.pdf

You start from simple definitions:

$x_0 = 4, x_1 = 4.25$

$f(y, z) = 108 - (815 - 1500/z)/y$

And let:

$x_{n+1} = f(x_n, x_{n-1})$

The resulting series should converge to 5, but depending on the floating-point precision it ends up in very different places. For instance, on my machine, using IEEE 64-bit doubles with 53 significant bits, the computation converges to 100.

Just to convince myself that Kahan is not pulling our legs, I redid the computation with big rationals, and then it does converge to 5.

Since you asked about this in the context of computers, I hope you don't mind an F# sample program demonstrating the issue:

// One step of the recurrence: f(y, z) = 108 - (815 - 1500/z)/y
let JMM y z = 108. - (815. - 1500./z)/y

// Iterates x_{n+1} = f(x_n, x_{n-1}) n times with IEEE doubles,
// printing each term
let ApplyJMM n = 
    let mutable a = 4.
    let mutable b = 4.25
    printfn "%f" a
    printfn "%f" b
    for i in 0..n do
        let x = JMM b a 
        a <- b
        b <- x
        printfn "%f" b

ApplyJMM 80

You can try this program without downloading F# by using the interactive compiler: http://www.tryfsharp.org/Create
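If you'd rather not run F#, the same experiment can be sketched in Python (any IEEE-754-double language should show the same drift); the standard `fractions.Fraction` type plays the role of the big rationals:

```python
from fractions import Fraction

def jmm(y, z):
    # One step of the recurrence: f(y, z) = 108 - (815 - 1500/z)/y
    return 108 - (815 - 1500 / z) / y

def iterate(a, b, n):
    # a = x_{n-1}, b = x_n; apply x_{n+1} = f(x_n, x_{n-1}) n times
    for _ in range(n):
        a, b = b, jmm(b, a)
    return b

# With doubles, rounding error excites the 100^n solution of the
# recurrence, so the sequence is captured by the wrong fixed point.
print(iterate(4.0, 4.25, 80))  # approximately 100.0

# With exact rational arithmetic it converges to the true limit 5.
print(float(iterate(Fraction(4), Fraction(17, 4), 80)))  # approximately 5.0
```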

PS: If you are interested, here is the slightly more complex F# program that uses big rationals (in order to avoid rounding issues):

// Have to define a bigrational type as F# doesn't have it out of box
type BigRational = bigint*bigint

// The minus operation    
let inline ( --- ) ((lx, ly) : BigRational) ((rx, ry) : BigRational) = 
    let x = lx * ry - rx * ly
    let y = ly * ry
    let gcd = bigint.GreatestCommonDivisor(x,y)
    if gcd.IsOne then x,y
    else (x / gcd),(y / gcd)

// The divide operation    
let inline ( /-/ ) ((lx, ly) : BigRational) ((rx, ry) : BigRational) = 
    let x = lx * ry
    let y = ly * rx
    let gcd = bigint.GreatestCommonDivisor(x,y)
    if gcd.IsOne then x,y
    else (x / gcd),(y / gcd)

// Constructs a BigRational from an integer
let br (i : bigint) : BigRational = i, 1I

let JMM y z = (br 108I) --- ((br 815I) --- (br 1500I)/-/z)/-/y

let ApplyJMM n = 
    let mutable a : BigRational = 4I,1I
    let mutable b : BigRational = 17I,4I
    for i in 0..n do
        let x = JMM b a 
        a <- b
        b <- x

        let t,d = b

        printfn "%s, %s" (t.ToString()) (d.ToString())
ApplyJMM 80