
I tried to find answers to the questions below, but I could not get clear answers to them.

Suppose we have a random sample of size $n$, $x_1, x_2, \ldots, x_n$, from a Normal distribution where $\sigma^2$ is unknown.

It is quite easy to derive these properties for the MLE of $\sigma^2$.

However, how does one show the bias and consistency of the maximum likelihood estimator of $\sigma$ itself, not of $\sigma^2$?

Moreover, how does one derive the asymptotic distribution of the MLE of $\sigma$?

coalt
  • Thank you for your reply. But what I am asking about is the MLE of $\sigma$, and how to show its consistency and asymptotic normality. – coalt Oct 10 '18 at 15:21

2 Answers


A nice property of maximum likelihood estimation is its invariance: if you have an MLE $\hat{p}$ for $p$, then $f(\hat{p})$ is the MLE of $f(p)$ for any injective measurable function $f$. Thus, if $T$ is the MLE of $\sigma^2$, then $\sqrt{T}$ is the MLE of $\sigma$.

In terms of the properties of $f(\hat{p})$ as an estimator, consistency is relatively mild: if $\hat{p}$ is consistent and $f$ is continuous at the true parameter value, then $f(\hat{p})$ is consistent by the continuous mapping theorem. But $f(\hat{p})$ will usually be biased even if $\hat{p}$ wasn't. In the case of passing from a variance estimator to a standard deviation estimator you can see this by Jensen's inequality: since the square root is strictly concave, the square root of an unbiased variance estimator is a biased (downward) standard deviation estimator.
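A quick simulation makes this concrete. The sketch below (with assumed values $\sigma = 2$, $n = 5$; the variable names are mine) checks that the unbiased variance estimator $S^2$ averages to $\sigma^2$, while $\sqrt{S^2}$ averages to something strictly below $\sigma$, as Jensen's inequality predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0      # true standard deviation (assumed for the demo)
n = 5            # a small sample size makes the bias visible
trials = 200_000

# Each row is one sample of size n from N(0, sigma^2)
samples = rng.normal(0.0, sigma, size=(trials, n))

# Unbiased variance estimator S^2 (ddof=1 gives the 1/(n-1) divisor)
s2 = samples.var(axis=1, ddof=1)

print(np.mean(s2))           # close to sigma^2 = 4: S^2 is unbiased
print(np.mean(np.sqrt(s2)))  # strictly below sigma = 2: sqrt(S^2) is biased low
```

The gap shrinks as $n$ grows, which is consistent with $\sqrt{S^2}$ still being a consistent estimator of $\sigma$ despite its finite-sample bias.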

Ian

The issue is that the MLE estimates $\sigma^2$ as the sample mean of $(x_i-\bar{x})^2$, where $\bar{x}$ is the sample mean rather than the distribution's true mean. Because $\bar{x}$ is obtained by averaging the observed $x_i$, it sits slightly closer to them than the true mean does; indeed, you can prove that the expected value of $\sum_{i=1}^n (x_i-m)^2$ is $n\sigma^2$ if $m$ is the true mean, but only $(n-1)\sigma^2$ if $m$ is the sample mean. Replacing the usual $\frac{1}{n}$ with $\frac{1}{n-1}$ addresses this. This is called Bessel's correction (that link proves the aforementioned $n-1$ result).
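The $n$ versus $n-1$ claim above is easy to verify numerically. This sketch (with assumed values $\mu = 10$, $\sigma = 3$, $n = 8$) averages the sum of squared deviations over many samples, taken once about the true mean and once about the sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 10.0, 3.0, 8   # assumed values for the demo
trials = 200_000

x = rng.normal(mu, sigma, size=(trials, n))

# Sum of squared deviations about the true mean vs. the sample mean
ss_true = ((x - mu) ** 2).sum(axis=1)
ss_sample = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

print(ss_true.mean() / sigma**2)    # close to n = 8
print(ss_sample.mean() / sigma**2)  # close to n - 1 = 7
```

The second average is systematically smaller by one unit of $\sigma^2$, which is exactly the deficit that dividing by $n-1$ instead of $n$ repairs.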

J.G.