
The expression $\left(1 + \frac{1}{n}\right)^n$ approximates $e$.

  • When $n = 252257928$, the relative error, $(e - \text{result})/e$, is $1.740557727387924\times 10^{-12}$
  • When $n = 215450934$, the relative error is $2.430185813419991\times 10^{-8}$

But both $n$ are very big numbers. Logically, they should produce similar round-off errors, right? So why are the resulting round-off errors so different? What's the logic behind it?

I did the math in Matlab. It looks like Wolfram Alpha produces different round-off errors, but I don't know why.
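
For reference, here is a minimal MATLAB sketch of the double-precision computation (a reconstruction for illustration, not necessarily the exact code from the question):

    % Relative error of (1 + 1/n)^n as an approximation of e,
    % evaluated in IEEE double precision (MATLAB's default).
    format long
    e = exp(1);
    err = [];
    for n = [252257928 215450934]
        result = (1 + 1/n)^n;
        err = [err abs(e - result)/abs(e)];   % append the relative error
    end
    disp(err)   % roughly 1.7e-12 and 2.4e-08, the values reported above

Both runs use exactly the same default double precision; the difference comes entirely from how $1+\frac1n$ is rounded, which the answer below works out.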

  • What is the second $n$? I think you have a typo in the question – Matthew Leingang May 24 '19 at 01:30
  • @MatthewLeingang edited, thanks – Joshua Leung May 24 '19 at 01:32
  • These are quite small numbers, are you mistaking E-08 for E+08? – Victoria M May 24 '19 at 01:32
  • @VictoriaM I mean the "N"s are big numbers, not the round-off error. I am wondering why two big numbers produce two drastically different round-off errors – Joshua Leung May 24 '19 at 01:33
  • With Wolfram Alpha, I got errors on the order of $10^{-9}$ for both $n= 252257928$ and $n= 215450934$ – Matthew Leingang May 24 '19 at 01:34
  • @MatthewLeingang I did it on Matlab.... interesting – Joshua Leung May 24 '19 at 01:36
  • @JoshuaLeung If you could edit the original question to include your code we may be able to tell you where the issue arises, thanks. – Victoria M May 24 '19 at 01:45
  • @VictoriaM I added. Thanks!!! – Joshua Leung May 24 '19 at 02:28
  • What precision are the computations using? Given sufficient precision, the relative error should be about $\frac1{2n}$ – robjohn May 24 '19 at 02:52
  • @robjohn I am not setting any precision. Maybe the default one? – Joshua Leung May 24 '19 at 03:00
  • @achillehui: $\left(1+\frac1n\right)^n\lt e$, so the error shouldn't change sign. – robjohn May 24 '19 at 03:36
  • @JoshuaLeung: Using Mathematica, n = SetPrecision[252257928,16]; (E-(1+1/n)^n)/E gives 1.982098*10^-9 and n = SetPrecision[215450934,16]; (E-(1+1/n)^n)/E gives 2.320714*10^-9. Both of these match the relative error of $\frac1{2n}$ I mentioned above. – robjohn May 24 '19 at 04:05
  • @robjohn I don't get it. Matlab also uses 16 digits of precision, so it should produce the same number as Mathematica. What did I do wrong...... – Joshua Leung May 24 '19 at 04:09
  • @JoshuaLeung: what does the syntax [err abs(e - result)/abs(e)] mean in Matlab? – robjohn May 24 '19 at 04:14
  • @robjohn it basically appends the newly calculated relative error abs(e - result)/abs(e) into the array. "err" is the previous values. – Joshua Leung May 24 '19 at 04:19
  • If Mathematica uses a multi-precision data type, then the set precision 16 is the "display" precision. All operations will be executed using an appropriately higher "working" precision, so that ideally the result is exact to the last digit. While the FPU also has an "extended" format that could be used to reduce floating point errors, this effect will only be relevant when the full computation can be kept inside the FPU stack. It is improbable that it will be used that way in an interpreted language. – Lutz Lehmann May 24 '19 at 09:06
  • @LutzL: I wasn't trying to reproduce the erroneous values; I was just trying to show what the error values should be when given sufficient precision. SetPrecision only tells Mathematica what the precision, that is $\frac{\text{maximum error}}{\text{value}}$, of a given number is. It will then use arbitrary precision math to give as accurate an answer as it can with the given precisions of the numbers given to it. – robjohn May 24 '19 at 14:22
  • Yes, that's what I said. It is expected of a true CAS that the results are as accurate as possible, while numerics software will foremost work with the faster "hard-wired" data types. One could still get accurate results in double precision by using exp(n*log1p(1/n)) as an alternative expression (see the sketch after this comment thread). – Lutz Lehmann May 24 '19 at 14:28
  • To see the effect of round-off error, we can simulate a limited precision with Round. For example, n = SetPrecision[252257928,16]; (E-Round[1+1/n,2^-52]^n)/E gives -1.741*10^-12, where the round-off error is larger than the error of $\left(1+\frac1n\right)^n$, which would otherwise result in a value of 1.982098*10^-9. – robjohn May 24 '19 at 14:39
  • @JoshuaLeung: you might try digitsOld=digits(100) and n=vpa(252257928) and e=exp(vpa(1)) then evaluating things. – robjohn May 24 '19 at 15:01
  • 32 digits should actually be enough, so you could get by without the digitsOld=digits(100). – robjohn May 24 '19 at 15:10
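
Sketching the two workarounds suggested in the comments above, exp(n*log1p(1/n)) and vpa, in MATLAB (the vpa part requires the Symbolic Math Toolbox; variable names are illustrative):

    % (a) Avoid adding 1/n to 1 directly: log1p(x) evaluates log(1+x)
    %     accurately for small x, so no precision is lost in 1 + 1/n.
    n = 252257928;
    e = exp(1);
    rel_a = (e - exp(n*log1p(1/n)))/e      % ~1/(2*n), about 2e-9: only the truncation error remains

    % (b) Variable-precision arithmetic, as suggested in the comments:
    digitsOld = digits(100);               % raise the working precision
    nv = vpa(252257928);
    ev = exp(vpa(1));
    rel_b = double((ev - (1 + 1/nv)^nv)/ev)   % ~1.98e-9, matching 1/(2*n)
    digits(digitsOld);                     % restore the previous precision

Both variants give essentially the mathematical truncation error $\frac1{2n}$ of $\left(1+\frac1n\right)^n$ itself, with no visible round-off contribution.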

1 Answer


The result of $1+\frac1n$ is not exactly representable in floating point, so it is stored with a rounding error $δ$: the value in memory is $1+\frac1n+δ$ with $|δ|\lessapprox\frac{\mu}2$ (rounding to nearest), where $\mu=2^{-52}\approx 2.2\cdot 10^{-16}$ is the machine epsilon, the spacing of the floating-point numbers in $[1,2]$.
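
This $δ$ can be inspected directly in MATLAB (an illustrative sketch, not from the original answer; eps is the $\mu$ above): since the stored value $x$ lies in $[1,2]$, $x-1$ is exact, so subtracting $\frac1n$ again isolates $δ$ (the rounding of $\frac1n$ itself is negligible at this scale).

    n = 252257928;
    x = 1 + 1/n;              % stored as 1 + 1/n + delta
    delta = (x - 1) - 1/n;    % x - 1 is exact for x in [1,2], so this isolates delta
    delta/eps                 % about +0.035 for this n; in general |delta/eps| <= 0.5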

In the final expression this propagates to \begin{align} \left(1+\frac1n+δ\right)^n &=\exp\left(n\ln\left(1+\frac1n+δ\right)\right)\\ &=\exp\left(n\left[\left(\frac1n+δ\right)-\frac12\left(\frac1n+δ\right)^2+...\right]\right)\\ &=\exp\left(1+nδ-\frac n2\left(\frac1n+δ\right)^2+...\right)\\ &=\exp\left(1+nδ-\frac1{2n}-δ-\frac12nδ^2+...\right)\\ &=e\cdot \left(1+(n-1)δ-\frac1{2n}+...\right) \end{align} using $\ln(1+x)=x-\frac{x^2}2+...$ in the second step and $e^{a+b}=e^a(1+b+\frac12b^2+...)$ for $|b|\ll 1$ in the last. The leading terms in the relative error are $(n-1)δ$ and $-\frac1{2n}$. The bound $\frac{n\mu}2$ on the first, random term reaches the size of the second, theoretical error at around $n\simeq \sqrt{\frac1{\mu}}$, which is about $10^8$. For larger $n$ the random floating-point error of maximum size $\frac{n\mu}2$ dominates.
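
For concreteness, here is a small numerical check of these two leading terms in MATLAB (an illustrative sketch, not part of the original answer; $δ$ is extracted as in the snippet above):

    % Compare predicted and observed relative error of (1+1/n)^n vs e.
    format long
    e = exp(1);
    for n = [252257928 215450934]
        x = 1 + 1/n;
        delta = (x - 1) - 1/n;                 % rounding error of 1 + 1/n
        predicted = (n - 1)*delta - 1/(2*n)    % (n-1)*delta is the random term
        observed  = (x^n - e)/e                % agrees with predicted to leading order
    end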

Around $n=10^8$, where both influences balance, it can happen by chance that the two terms are really of equal size but opposite sign, that is, that $δ\approx\frac1{2n^2}$, so that the resulting error is much smaller than the bounds predict.

  • For the example $n=252257928$ from the question one gets $2^{52}\left(\frac1n+\frac1{2n^2}\right)=17853153.999968924$, which is within about $3\cdot10^{-5}$ of an integer. Rounding $2^{52}/n$ up to that integer therefore puts $δ$ within about $3\cdot10^{-5}\mu$ of the ideal value $\frac1{2n^2}$, so the combined error is only about $n\cdot 3\cdot10^{-5}\cdot\mu\approx 1.7\cdot10^{-12}$.
  • For the "normal" case $n=215450934$ the same mantissa computation gives $\frac{2^{52}}n=20903133.459474795$, which is rounded down, so $δ\approx-0.46\,\mu<0$. The two errors $-\frac1{2n}=-2.32\cdot 10^{-9}$ and $(n-1)δ\approx-2.20\cdot 10^{-8}$ then have the same sign and reinforce each other, which matches the reported error of about $2.43\cdot 10^{-8}$ in magnitude. (Both mantissa figures can be reproduced with the sketch below.)

[Figure: plot of the relative error]

Lutz Lehmann
  • I am sorry... how did you get the leading error terms $(n-1)δ$ and $-\frac1{2n}$??? I have been looking at the answer for a very long time and still can't get my head around it. – Joshua Leung May 24 '19 at 19:27
  • Also, why did you divide the machine epsilon by 2? $|δ| \le \mu/2$, where is this coming from? – Joshua Leung May 24 '19 at 19:42
  • The leading error terms (relative error) should now be more explicit. In $[1,2]$ the distance between consecutive floating point numbers is $\mu$. Thus the distance from any real number in $[1,2]$ to the nearest floating point number is at most half of that, $\mu/2$. – Lutz Lehmann May 24 '19 at 20:04