
Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with pdf $f(x) =\frac{1}{\sigma} e^{-(x-\mu)/\sigma}\mathrm I_{(\mu,\infty)}(x)$, where $\mathrm I_A$ is the indicator function of a set $A$.

Suppose that we cannot observe the whole sample $X_1, X_2, \ldots, X_n$; we can only observe the $r$ smallest order statistics $X_{(1)}<\cdots<X_{(r)}$ out of the $n$ order statistics $X_{(1)}<\cdots<X_{(r)}<\cdots<X_{(n)}$, where $1\le r<n$.

We are going to estimate $\mu$ and $\sigma$ using the estimators below:

$\hat{\mu}=X_{(1)}$, $\hat{\sigma}=\frac{1}{r-1}\left( \sum_{k=1}^{r}(X_{(k)}-X_{(1)}) + (n-r)(X_{(r)}-X_{(1)}) \right)$.

Question: How can I find the joint pdf of $\hat{\mu}$ and $(r-1)\hat{\sigma}$?

What I know is that $\hat{\mu}$ and $(r-1)\hat{\sigma}$ are independent, so the joint pdf of $\hat{\mu}$ and $(r-1)\hat{\sigma}$ is the product of the marginal pdfs of $\hat{\mu}$ and $(r-1)\hat{\sigma}$, respectively.
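For concreteness, here is a small sketch of how these estimators are computed from the $r$ smallest order statistics; the parameter values ($\mu$, $\sigma$, $n$, $r$) are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, r = 2.0, 3.0, 50, 20   # arbitrary illustrative values

# Draw a sample from the shifted exponential and keep only
# the r smallest order statistics (Type II right censoring).
x = mu + rng.exponential(sigma, size=n)
order = np.sort(x)[:r]

mu_hat = order[0]                                     # X_(1)
sigma_hat = (np.sum(order - order[0])                 # sum of X_(k) - X_(1)
             + (n - r) * (order[-1] - order[0])) / (r - 1)
print(mu_hat, sigma_hat)
```

Note that $\hat\mu = X_{(1)} > \mu$ always holds for this pdf, since every observation exceeds $\mu$.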

bellcircle
  • Are you sure $\hat\mu$ and $\hat\sigma$ are independent? The distribution of $\hat\mu$ looks to me to depend on $\mu$ and $\sigma$ both, and $\hat\sigma$ is probably consistent for $\sigma$. This adds up to a contradiction to me. – kimchi lover Jul 04 '17 at 14:43
  • @kimchilover Yes. $\mu$ and $\sigma$ are just fixed constants. I also found the joint pdf of $\hat{\mu}$ and $\hat{\sigma}$ and showed that the joint pdf splits up to marginal pdf's of each variables. – bellcircle Jul 04 '17 at 14:45
  • Look at censoring. – BruceET Jul 04 '17 at 22:32
  • @BruceET I searched with keyword 'censoring', but I couldn't find the answer I want. – bellcircle Jul 05 '17 at 00:54
  • Maybe try 'censored data', starting with the Wikipedia article. When data are on a time scale, sometimes it is not feasible to wait until you know the larger data values. That may apply in your situation. /// I'm not sure whether you're saying 'censoring' got you nothing, or nothing of interest. If the latter, I suggest you give the concept another look. – BruceET Jul 05 '17 at 02:17
  • @BruceET Yes, I searched with such keywords, but they didn't give any information for 'my' question. Most of the results covered only MLE, Bayesian inference, and so on. – bellcircle Jul 05 '17 at 02:49

1 Answer


Rewrite $X_{(k)}-X_{(1)}$ in terms of spacings as $$ \left(X_{(k)}-X_{(k-1)}\right)+\left(X_{(k-1)}-X_{(k-2)}\right)+\ldots+\left(X_{(2)}-X_{(1)}\right) $$ and then you will get that $$ (r-1)\hat{\sigma}=\sum_{k=1}^{r}(X_{(k)}-X_{(1)}) + (n-r)(X_{(r)}-X_{(1)}) $$ $$ = (n-1)\left(X_{(2)}-X_{(1)}\right) + (n-2)\left(X_{(3)}-X_{(2)}\right)+\ldots +(n-r+1)\left(X_{(r)}-X_{(r-1)}\right). $$ Note that after the subtraction the shift parameter $\mu$ disappears: these spacings are distributed like the spacings from an exponential distribution with mean $\sigma$. It is well known that each spacing $X_{(i+1)}-X_{(i)}$ is also exponentially distributed, with mean $\frac{\sigma}{n-i}$, and that all the spacings are independent and do not depend on $X_{(1)}$. See, for example, two answers here.
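As a quick sanity check, the weighted-spacings identity above is a pure algebraic rearrangement and holds exactly for any sample; a sketch with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, r = 2.0, 3.0, 50, 20   # arbitrary illustrative values
x = np.sort(mu + rng.exponential(sigma, size=n))

# Left-hand side: (r-1)*sigma_hat as defined in the question.
lhs = np.sum(x[:r] - x[0]) + (n - r) * (x[r - 1] - x[0])

# Right-hand side: weighted spacings (n-j)*(X_(j+1)-X_(j)) for j = 1..r-1.
spacings = np.diff(x[:r])            # X_(j+1) - X_(j), j = 1..r-1
weights = n - np.arange(1, r)        # n-1, n-2, ..., n-r+1
rhs = np.sum(weights * spacings)

print(np.isclose(lhs, rhs))  # True: the identity holds sample by sample
```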

Therefore all the summands $$(n-1)\left(X_{(2)}-X_{(1)}\right)$$ $$(n-2)\left(X_{(3)}-X_{(2)}\right)$$ $$\ldots$$ $$(n-r+1)\left(X_{(r)}-X_{(r-1)}\right)$$ are independent and exponentially distributed with the same mean $\sigma$. Their sum is Gamma-distributed $\Gamma(r-1,\sigma)$ with the PDF $$ f_{(r-1)\hat{\sigma}}(x) = \frac{1}{\sigma^{r-1}(r-2)!}x^{r-2}e^{-x/\sigma} $$

Finally, $\hat{\mu}=X_{(1)}$ has the PDF $$ f_{\hat{\mu}}(x)=\frac{n}{\sigma} e^{-n(x-\mu)/\sigma}\mathrm I_{(\mu,\infty)}(x), $$ and the joint PDF of $\hat{\mu}$ and $(r-1)\hat{\sigma}$ is the product of these two marginals.
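A Monte Carlo sketch (parameter values chosen arbitrarily) is consistent with these conclusions: the simulated mean and variance of $(r-1)\hat{\sigma}$ match those of $\Gamma(r-1,\sigma)$, namely $(r-1)\sigma$ and $(r-1)\sigma^2$, and its correlation with $\hat{\mu}$ is near zero, as independence would require:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, r = 2.0, 3.0, 50, 20   # arbitrary illustrative values
reps = 100_000

# Each row is one sorted sample of size n from the shifted exponential.
x = np.sort(mu + rng.exponential(sigma, size=(reps, n)), axis=1)
mu_hat = x[:, 0]
stat = (np.sum(x[:, :r] - x[:, :1], axis=1)
        + (n - r) * (x[:, r - 1] - x[:, 0]))   # (r-1) * sigma_hat

# Gamma(r-1, sigma): mean (r-1)*sigma = 57, variance (r-1)*sigma^2 = 171.
print(stat.mean())                       # ≈ 57
print(stat.var())                        # ≈ 171
print(np.corrcoef(mu_hat, stat)[0, 1])   # ≈ 0
```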

NCh