
Hello, all!

I have a large sum $\sum_{i=1}^N X_i$ of i.i.d. log-normal random variables $X_i$ (with location parameter $\mu$ and scale parameter $\sigma$), where $N \gg 1$. How can I estimate the rate of convergence to a Gaussian distribution in terms of $\mu$ and $\sigma$?
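
To make the question concrete, here is a rough Monte Carlo sketch (Python with NumPy/SciPy; the parameter values are placeholders) of one version of the quantity I mean: the Kolmogorov distance between the standardized sum and the standard normal, as a function of $N$.

```python
import numpy as np
from scipy import stats

def ks_distance_to_normal(mu, sigma, N, n_trials=5000, seed=0):
    """Monte Carlo estimate of the Kolmogorov distance between the
    standardized sum of N i.i.d. log-normal(mu, sigma) variables
    and the standard normal distribution."""
    rng = np.random.default_rng(seed)
    # each row is one realization of S_N = X_1 + ... + X_N
    sums = rng.lognormal(mean=mu, sigma=sigma, size=(n_trials, N)).sum(axis=1)
    # standardize with the exact log-normal mean and variance
    m = np.exp(mu + sigma**2 / 2)
    v = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
    z = (sums - N * m) / np.sqrt(N * v)
    return stats.kstest(z, 'norm').statistic

for N in (10, 100, 1000):
    print(N, ks_distance_to_normal(mu=0.0, sigma=1.0, N=N))
```

What I would like is an explicit estimate of how this distance decays with $N$, in terms of $\mu$ and $\sigma$, rather than a simulation.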

Thank you.

Michael Hardy

2 Answers


The log-normal distribution has finite variance, so if you subtract the mean, the magic words are "Berry-Esseen theorem". If you don't subtract the mean, the sum diverges.
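
For i.i.d. summands with mean $m$, variance $v$, and $\rho = E|X_i - m|^3 < \infty$, Berry-Esseen gives $\sup_x \left| P\!\left(\frac{S_N - Nm}{\sqrt{Nv}} \le x\right) - \Phi(x) \right| \le \frac{C\rho}{v^{3/2}\sqrt{N}}$ for a universal constant $C$ (known to be below $0.5$). A minimal sketch of evaluating this bound for log-normal summands, assuming Python with NumPy/SciPy and using $C = 0.5$ as a safe value for the constant:

```python
import numpy as np
from scipy import stats, integrate

def berry_esseen_bound(mu, sigma, N, C=0.5):
    """Berry-Esseen bound on sup_x |P((S_N - N*m)/sqrt(N*v) <= x) - Phi(x)|
    for a sum of N i.i.d. log-normal(mu, sigma) variables."""
    m = np.exp(mu + sigma**2 / 2)                            # mean of one summand
    v = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)   # variance of one summand
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    # absolute third central moment rho = E|X - m|^3, by numerical quadrature
    rho, _ = integrate.quad(lambda x: abs(x - m)**3 * dist.pdf(x), 0, np.inf)
    return C * rho / (v**1.5 * np.sqrt(N))

print(berry_esseen_bound(mu=0.0, sigma=1.0, N=10**6))
```

The quadrature may need care (e.g. splitting the integration range) when $\sigma$ is large, since the integrand is then very heavy-tailed.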

Igor Rivin
  • The Berry-Esseen theorem is a standard tool for making the CLT effective, but I think the estimate is only close to sharp for something like a binomial distribution near one of the discontinuities close to the peak. For a sum of log-normal distributions, I would expect more rapid convergence. The estimate is also rather weak for the tails. – Douglas Zare Sep 01 '11 at 23:33
  • Thank you! But why does the sum diverge if I do not subtract the mean? The CLT must work in this case as well. I suppose the Berry-Esseen theorem will also apply if the sum of "uncentered" (without subtracting the mean) i.i.d. log-normals is considered. –  Sep 02 '11 at 07:25
  • @SPK: If you do not subtract the mean, the mean of the sum will go to infinity, which some would describe as diverging... @Doug: Since the log-normal distribution has moments of all orders, one can do pretty well if one wanted to work at it, but the OP did not ask for the sharpest estimate (and if he understands the Berry-Esseen argument, as in Feller, e.g., he can probably push it pretty far). – Igor Rivin Sep 02 '11 at 13:48
  • @Igor: I have a silly question. Please consider:
    1. As I computed with Wolfram Mathematica, $\frac{\beta_3}{\mathrm{std}^3} = \sqrt{e^{\sigma^2}-1} \cdot (2 + e^{\sigma^2})$, where $\beta_3$ is the third central moment of a log-normal with parameters $\mu$ and $\sigma$, and $\mathrm{std}$ is the standard deviation of the same log-normal;
    2. How can I obtain a useful bound on the accuracy of the normal approximation from the Berry-Esseen theorem for log-normals when the parameter $\sigma \gg 1$? In that regime the theorem only says that the supremum of the difference of two CDFs, both mapping $\mathbf{R} \to [0,1]$, is bounded above by a quantity $\gg 1$ unless $N$ is enormous (see the small numerical sketch after this comment thread).
    –  Jan 18 '12 at 10:48
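
To illustrate point 2 of the comment above: a rough sketch, assuming only NumPy and the closed-form skewness quoted there, of how large $N$ must be before the Berry-Esseen bound says anything nontrivial when $\sigma$ is large. Since $E|X - m|^3 \ge \beta_3$, the Lyapunov ratio $\rho/\mathrm{std}^3$ is at least the skewness, so this is a lower bound on the required $N$.

```python
import numpy as np

def min_N_for_bound(sigma, eps=0.01, C=0.5):
    """Lower bound on the number of summands N needed before the
    Berry-Esseen bound C * rho / (std**3 * sqrt(N)) can drop below eps.
    Uses rho >= beta_3, i.e. rho / std**3 >= skewness of the log-normal."""
    skew = (np.exp(sigma**2) + 2) * np.sqrt(np.exp(sigma**2) - 1)
    return (C * skew / eps)**2

for sigma in (0.5, 1.0, 2.0, 3.0):
    print(sigma, min_N_for_bound(sigma))
```

Already for $\sigma = 3$ this gives $N$ of order $10^{15}$, so for $\sigma \gg 1$ the uniform Berry-Esseen bound is indeed vacuous for any practical $N$; sharper, distribution-specific (or non-uniform) estimates would be needed there.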

If you search Google Scholar for "sum of lognormal" or "sum of log-normal" (using the quotation marks), you will find several papers devoted to this question.

Brendan McKay
  • Thank you! I have already searched for it on Google Scholar, but I am not satisfied with the results I have seen. Most of the papers are devoted to approximating a sum of log-normals by a single log-normal, whereas I am interested only in the rate of CLT convergence: I have a sum over a very large number of log-normal random variables. –  Sep 02 '11 at 07:22