On the pdf of a sum of two r.v.s and differentiating under the integral

AI Thread Summary
The discussion centers on the probability density function (pdf) of the sum of two continuous random variables, ##X_1## and ##X_2##, and the validity of the book's derivation. It highlights the confusion around the fact that the sum of two continuous random variables may not be continuous, and asks whether this affects the derivation. The derivation uses the Fubini–Tonelli theorem and the fundamental theorem of calculus to differentiate under the integral, with a change of variables to facilitate the calculation. The conclusion reached is that the derivation assumes ##Y##, the sum, to be a continuous random variable, thereby neglecting scenarios where the sum is not continuous.
psie
TL;DR Summary
I'm stuck at a derivation in my book on the pdf of the sum of two continuous random variables ##Y=X_1+X_2##. The formula I'm after is $$f_Y(u)=\int_{\mathbb R} f(x_1,u-x_1)\,dx_1=\int_{\mathbb R} f(u-x_2,x_2)\,dx_2,$$ where ##f## is the joint density of ##(X_1,X_2)##.
I'm reading in my book about the pdf of the sum of two continuous random variables ##X_1,X_2##. First, I'm a bit confused by the fact that the sum of two continuous random variables may not be continuous. Is the derivation below still valid despite this fact, or is there some key assumption that I'm missing?

Regarding the derivation in my book, I will omit some details, but assume ##X_1,X_2## are both real-valued and ##P## is the probability measure on some probability space. Recall that ##\int 1_A \, dP=P(A)## and that for a measurable function ##g## with ##E[|g(X)|]<\infty##, we have $$E[g(X)]=\int_\Omega g(X(\omega))\, P(d\omega)=\int_\mathbb{R} g(x) \, P_X(dx)=\int_{\mathbb R}g(x) f(x)\,dx,$$ where ##P_X## is the distribution induced by ##X## (the pushforward measure of ##P## under ##X##) and ##f## is its density, when it exists.

The distribution function of ##Y=X_1+X_2## is then given by $$\begin{align}F_{Y}(u)&=P(X_1+X_2\leq u) \nonumber \\ &=E[1_{X_1+X_2\leq u} ] \nonumber \\ &=\int_{\mathbb R^2}1_{x_1+x_2\leq u}f(x_1,x_2)\,dx_1dx_2 \nonumber \\ &=\int_{\mathbb R}\int_{\mathbb R} 1_{x_1\leq u-x_2}f(x_1,x_2)\, dx_1dx_2 \nonumber \\ &=\int_{-\infty}^\infty\int_{-\infty}^{u-x_2}f(x_1,x_2)\,dx_1dx_2, \nonumber \end{align}$$ where we used the definition of expectation and the Fubini–Tonelli theorem.

Then the author differentiates with respect to ##u##, moves ##\frac{d}{du}## inside the outer integral, and uses the fundamental theorem of calculus. However, not much motivation is given for this maneuver. Why can we do this? I'm familiar with the Leibniz rule, but I'm unsure whether it applies here.
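For what it's worth, the target formula itself checks out numerically in a simple special case. Here is a quick sanity check of my own (not from the book), assuming ##X_1,X_2## are independent standard normals, so that ##f(x_1,x_2)=\varphi(x_1)\varphi(x_2)## and ##Y\sim N(0,2)##:

```python
# Sanity check of f_Y(u) = \int_R f(u - x, x) dx in the special case
# X1, X2 i.i.d. N(0,1), where the joint density factors as phi(x1)*phi(x2)
# and the sum is known to be N(0, 2).
import numpy as np
from scipy import integrate
from scipy.stats import norm

def f_Y(u):
    # Convolution integral from the formula, evaluated numerically.
    val, _ = integrate.quad(lambda x: norm.pdf(u - x) * norm.pdf(x),
                            -np.inf, np.inf)
    return val

for u in [-2.0, 0.0, 1.5]:
    exact = norm.pdf(u, scale=np.sqrt(2))  # pdf of N(0, 2) at u
    print(f"u = {u:+.1f}: formula = {f_Y(u):.6f}, N(0,2) pdf = {exact:.6f}")
```

The two values agree to the quadrature tolerance, so the end formula is at least consistent here; my question is only about how the differentiation step is justified.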
 
I think I found an answer to my question. In the last integral, for fixed ##x_2##, we make the change of variables ##z=x_1+x_2## and rename ##x_2=x## (for aesthetics); then $$F_Y(u)=\int_{-\infty}^\infty\int_{-\infty}^{u}f(z-x,x)\,dzdx. $$We change the order of integration (Fubini–Tonelli again) and then just use the fundamental theorem of calculus: $$f_Y(u)=\frac{d}{du} F_Y(u)= \frac{d}{du}\int_{-\infty}^u\int_{-\infty}^{\infty}f(z-x,x)\,dxdz= \int_{\mathbb R} f(u-x,x)\,dx.$$ (Strictly speaking, the last step holds at every ##u## where ##z\mapsto\int_{\mathbb R} f(z-x,x)\,dx## is continuous, and for almost every ##u## in general by the Lebesgue differentiation theorem; that is enough, since a density is only determined up to a null set.)
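To see the formula in action in a concrete case: take ##X_1,X_2## independent and ##\mathrm{Uniform}(0,1)##, so ##f(x_1,x_2)=1_{[0,1]}(x_1)1_{[0,1]}(x_2)##. Then $$f_Y(u)=\int_{\mathbb R} 1_{[0,1]}(u-x)\,1_{[0,1]}(x)\,dx=\begin{cases} u, & 0\le u\le 1,\\ 2-u, & 1< u\le 2,\\ 0, & \text{otherwise},\end{cases}$$ which is the familiar triangular density, as expected.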
 
Regarding my first question, I guess the whole derivation assumes ##Y## to be a continuous random variable, since it is precisely its density that we are deriving; i.e. we neglect the cases where the sum of two continuous random variables is not continuous.
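For a concrete instance of the failure: if ##X_1\sim N(0,1)## and ##X_2=-X_1##, then both summands are continuous, but $$Y=X_1+X_2=0 \quad\text{almost surely},$$ so ##Y## has no density. Note that in this case ##(X_1,X_2)## is concentrated on the line ##x_2=-x_1## and has no joint density ##f## on ##\mathbb R^2##, so the derivation above never gets started; the key assumption is the existence of the joint density.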
 
