How do we find the PDF of a sum of correlated exponential random variables? I know how to do it for independent random variables, but how do I find it for correlated exponential random variables?
-
If they are 100% correlated, that means they are essentially the same random variable, so the pdf would just scale by $N$ – Dan Boschen Feb 19 '20 at 13:03
-
How is the correlation defined? – AlexTP Feb 19 '20 at 20:09
-
Unless you specify the joint PDF of the exponential random variables, your question is not answerable at all. – Dilip Sarwate Feb 20 '20 at 01:37
-
@DilipSarwate as an exponential random variable is fully characterized by its mean, I believe that if the correlation is defined, we can derive the joint PDF. For example, if we know the transform function $Y=g(X)$, then we know the joint pdf. The fact that both $X$ and $Y$ are exponential makes the calculation easier. Of course, the knowledge about $g(\cdot)$ is crucial. – AlexTP Feb 20 '20 at 09:40
-
@AlexTP Two exponential random variables given means and with specified correlation coefficient can nonetheless have infinitely many different joint pdfs. Furthermore, if $Y=g(X)$ with $X$ an exponential random variable, then $Y$ is not an exponential random variable (as it must be as per the requirements in the problem statement) except when $g$ is a linear function ($g(x) = ax$ with $a > 0$) in which case the correlation coefficient is $1$. Please rethink your comment, and possibly give some thought to deleting it entirely. If you choose to delete, I will delete this response too. – Dilip Sarwate Feb 20 '20 at 15:23
-
@DilipSarwate I think deleting comments to hide my stupidity is not necessary, not thanks to the anonymity of the Internet, but rather because I do not feel guilty about asking stupid questions. (I am also OK with the fact that some people want to hide their lack of intelligence.) – AlexTP Feb 20 '20 at 16:12
-
@DilipSarwate Now, let me continue learning by continuing to be stupid. Your "furthermore ..." is what I meant: if $Y=g(X)$ and both $X$ and $Y$ are exponential, then $g(\cdot)$ must be a linear function. Maybe I was wrong in saying that "if $X$ and $Y$ are correlated, then we can say $Y=g(X)$ for some $g(\cdot)$". Could you please give a counterexample? Also, that is the reason why I asked how the correlation is defined in my first comment. – AlexTP Feb 20 '20 at 16:15
-
@AlexTP If $X$ and $Y$ are correlated random variables, then it is not necessary that $Y$ is a function of $X$, and this has nothing to do with whether the random variables are exponential or otherwise. Furthermore, if $Y=g(X)$, then $X$ and $Y$ don't enjoy a joint PDF because all the probability mass lies on the curve $y=g(x)$ in the $x$-$y$ plane, and the (areal) density, measured in probability mass per unit area, is infinite at points on this curve. The probability mass has lineal density (mass per unit length) along this curve, but not areal density, because the curve has $0$ area. – Dilip Sarwate Feb 20 '20 at 20:25
-
@AlexTP For a specific example of correlated exponential random variables that are not linear functions of each other, see this answer over on stats.SE (where this question belongs; it has nothing to do with DSP). – Dilip Sarwate Feb 20 '20 at 20:44
-
@DilipSarwate I got it, I was wrong about the correlation interpretation. Thank you. – AlexTP Feb 23 '20 at 10:25
1 Answer
The pdf $f_Z(z)$ of the sum $Z=X+Y$ of any two jointly continuous random variables $X$ and $Y$ with joint pdf $f_{X,Y}(x,y)$ is as follows: $$\text{For all } z, -\infty < z < \infty, ~~ f_Z(z) = \int_{-\infty}^\infty f_{X,Y}(x,z-x) \, \mathrm dx.\tag{1}$$
For the special case when $X$ and $Y$ are nonnegative random variables (including, as a special case, exponential random variables) and so take on nonnegative values only, $f_{X,Y}(x,y)$ has value $0$ if at least one of $x$ and $y$ is smaller than $0$. Hence, in this case, the integrand $f_{X,Y}(x,z-x)$ in $(1)$ has value $0$ if $x < 0$ or if $z < x$. Consequently, if $z$ is a negative number, then the integrand in $(1)$ is always $0$ regardless of the value of $x$, and therefore so is the integral. All of which is just a long-winded way of saying that $f_Z(z)$ has value $0$ when $z<0$, that is, $Z$ takes on nonnegative values only, which any idiot could have deduced from the fact that $Z=X+Y$ and both $X$ and $Y$ are nonnegative.

But the approach is useful even for $z>0$, since now we have that the integrand in $(1)$ is zero when $x<0$ or when $x >z$, and so for nonnegative $X$ and $Y$, we can simplify $(1)$ to $$f_Z(z) = \begin{cases}\displaystyle\int_0^z f_{X,Y}(x,z-x) \, \mathrm dx, & z \geq 0,\\\quad\\ 0, & z < 0\end{cases} \tag{2}$$
No further simplification of $(2)$ is possible in general.
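As a concrete illustration of evaluating $(2)$ numerically, the sketch below picks one particular joint pdf with exponential marginals and nonzero correlation: the Farlie-Gumbel-Morgenstern (FGM) family, $f_{X,Y}(x,y) = f_X(x)f_Y(y)\bigl[1 + \theta(2F_X(x)-1)(2F_Y(y)-1)\bigr]$ with $|\theta| \le 1$. The rates and the FGM parameter $\theta$ here are arbitrary illustrative choices, and the FGM form is just one of the infinitely many joint pdfs consistent with the given marginals:

```python
import numpy as np
from scipy.integrate import quad

lam, mu, theta = 1.0, 1.0, 0.5  # illustrative rates and FGM dependence parameter

def f_joint(x, y):
    """FGM bivariate pdf with Exp(lam) and Exp(mu) marginals (one possible choice)."""
    if x < 0 or y < 0:
        return 0.0
    fx = lam * np.exp(-lam * x)       # marginal pdf of X
    fy = mu * np.exp(-mu * y)         # marginal pdf of Y
    Fx = 1.0 - np.exp(-lam * x)       # marginal CDF of X
    Fy = 1.0 - np.exp(-mu * y)        # marginal CDF of Y
    return fx * fy * (1.0 + theta * (2.0 * Fx - 1.0) * (2.0 * Fy - 1.0))

def f_Z(z):
    """Equation (2): integrate f_{X,Y}(x, z - x) over 0 <= x <= z."""
    if z < 0:
        return 0.0
    val, _ = quad(lambda x: f_joint(x, z - x), 0.0, z)
    return val

# sanity check: the resulting density should integrate to (approximately) 1
total, _ = quad(f_Z, 0.0, 50.0)
print(total)
```

This is only a sketch under the stated FGM assumption; a different admissible joint pdf with the same marginals and the same correlation coefficient would generally give a different $f_Z$.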
For the special case when $X$ and $Y$ are independent random variables, $f_{X,Y}(x,y)$ factors into $f_X(x)f_Y(y)$ and so $(1)$ becomes the familiar convolution integral and $(2)$ the somewhat-less-familiar convolution integral for causal signals. But no such simplification is possible for nonindependent random variables $X$ and $Y$; we need the joint pdf to calculate $f_Z(z)$ and just knowing that $X$ and $Y$ are correlated random variables (whether exponential or Gaussian or whatever) is not enough.
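For the independent case, the causal convolution in $(2)$ can be checked numerically against the known closed form: for i.i.d. $\mathrm{Exp}(\lambda)$ variables, the sum has the Erlang density $f_Z(z) = \lambda^2 z e^{-\lambda z}$ for $z \ge 0$. A minimal sketch (the rate $\lambda = 2$ is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0  # illustrative rate parameter

def f_X(x):
    """Exponential pdf with rate lam."""
    return lam * np.exp(-lam * x) if x >= 0 else 0.0

def f_Z(z):
    """Causal convolution from (2) with f_{X,Y}(x,y) = f_X(x) f_X(y)."""
    if z < 0:
        return 0.0
    val, _ = quad(lambda x: f_X(x) * f_X(z - x), 0.0, z)
    return val

# compare against the closed-form Erlang(2, lam) density lam^2 * z * exp(-lam z)
z = 1.3
closed_form = lam**2 * z * np.exp(-lam * z)
print(f_Z(z), closed_form)
```

Here the integrand $\lambda e^{-\lambda x}\,\lambda e^{-\lambda(z-x)} = \lambda^2 e^{-\lambda z}$ is constant in $x$, so the integral over $[0, z]$ reduces to $\lambda^2 z e^{-\lambda z}$ exactly, and the numerical and closed-form values agree.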