Probability convolution problem

Summary
The discussion revolves around solving the probability problem P(0.6 < Y <= 2.2) where Y is the sum of two independent random variables, X1 (uniformly distributed) and X2 (exponentially distributed). The professor uses convolution to derive the density function for Y, leading to the need to evaluate integrals based on different cases for t. The key point of confusion is the division into cases based on the limits of integration, which is clarified by noting that the exponential density function is zero for negative values, making certain limits irrelevant. Ultimately, understanding the reasoning behind the case divisions and the integration limits is crucial for solving the problem correctly.
fruitbubbles
So this is a probability question, and I am asked to find P(0.6 < Y <= 2.2)

where Y = X1 + X2

X1~U(0,1) and X2~exp(2). X1 and X2 are both independent random variables. Our professor worked it out, but I do not understand his explanation. So he starts by using the convolution:$$f_y (t) = \int_{-\infty}^\infty f_{x_1}(u)f_{x_2}(t-u) \, du $$
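For reference, the two densities being convolved are as follows (reading "exp(2)" as the rate parameterization, λ = 2; if your course means mean 2, the rate would instead be λ = 1/2):

$$f_{x_1}(u) = \begin{cases} 1 & \text{if $0 \le u \le 1$} \\ 0 & \text{otherwise} \end{cases} \qquad f_{x_2}(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if $x \ge 0$} \\ 0 & \text{if $x < 0$} \end{cases}$$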

I know the density function (fx1) for a uniform distribution is 0 if u is less than 0 or greater than 1, so the integrand is 0 except when u is between 0 and 1, leaving

$$f_y (t) = \int_{0}^1 f_{x_1}(u)f_{x_2}(t-u) \, du,$$ and on that interval fx1 is just 1, so we have

$$f_y (t) = \int_{0}^1 f_{x_2}(t-u) \, du $$

We substitute y = t-u and du = -dy, and since we substituted, the limits of integration change, so now we have:

$$f_y (t) = \int_{t-1}^t f_{x_2}(y) \, du $$
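Spelling out that limit change: u = 0 gives y = t, u = 1 gives y = t - 1, and the minus sign from du = -dy flips the interval back into increasing order:

$$\int_{0}^{1} f_{x_2}(t-u) \, du = \int_{t}^{t-1} f_{x_2}(y) \, (-dy) = \int_{t-1}^{t} f_{x_2}(y) \, dy$$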

so for $$f_{x_2}(y) = \begin{cases} 0 & \text{if $y < 0$} \\ \lambda e^{-\lambda y} & \text{if $y \ge 0$} \end{cases}$$ (because that's the density function of the exponential distribution). I understand up to this point, but here my professor

"divides it into cases":

for the case 0 <= t <= 1, he gets

$$\int_{0}^t \lambda e^{-\lambda y} \, dy = 1-e^{-\lambda t },$$ also changing the limits of integration, and then for the case t > 1,

$$\int_{t-1}^t \lambda e^{-\lambda y} \, dy = e^{-\lambda (t-1) } - e^{-\lambda t },$$
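Filling in the antiderivative for that second case, to confirm the exponent on the first term:

$$\int_{t-1}^{t} \lambda e^{-\lambda y} \, dy = \left[ -e^{-\lambda y} \right]_{t-1}^{t} = e^{-\lambda (t-1)} - e^{-\lambda t}.$$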

and then solves the integrals from there. I have NO CLUE why he divided into those "cases", or how he determined what the limits of integration should be for each case, so if anyone could help me (I know this was a long problem), please do! I understand that there may be other ways to do it, but I'm pretty sure our professor wants us to understand this method for our exam, so if anyone understands exactly what he is doing and can help me, I'd really appreciate it.

If it helps, this is what he did after dividing it into cases:

so $$f_y (t) = \begin{cases} 0 & \text{if $t < 0$} \\ 1-e^{-\lambda t } & \text{if $0 \le t \le 1$} \\ e^{-\lambda (t-1) } - e^{-\lambda t } & \text{if $t > 1$} \end{cases}$$ (note this is the density of Y, not of X2), so from there it goes:
P(0.6 < Y <= 2.2) = $$\int_{0.6}^{2.2} f_y (t) \, dt = \int_{0.6}^{1} \left(1-e^{-\lambda t }\right) dt + \int_{1}^{2.2} \left(e^{-\lambda (t-1) } - e^{-\lambda t }\right) dt$$

I understand this part, but it's the part where he splits into the different "cases" that loses me.
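Not part of the thread, but here is a minimal numerical sanity check of the final answer, assuming "exp(2)" means rate λ = 2 (mean 1/2):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0        # assumption: X2 ~ exp(2) is the rate parameterization (mean 1/2)
n = 1_000_000

# Monte Carlo: simulate Y = X1 + X2 directly
y = rng.uniform(0.0, 1.0, n) + rng.exponential(1.0 / lam, n)  # numpy wants scale = 1/rate
mc_estimate = np.mean((y > 0.6) & (y <= 2.2))

# Closed form from the two case integrals above:
#   int_{0.6}^{1} (1 - e^{-lam t}) dt + int_{1}^{2.2} (e^{-lam (t-1)} - e^{-lam t}) dt
part1 = 0.4 + (np.exp(-lam * 1.0) - np.exp(-lam * 0.6)) / lam
part2 = (1.0 - np.exp(-lam * 1.2)) / lam - (np.exp(-lam * 1.0) - np.exp(-lam * 2.2)) / lam

print(mc_estimate, part1 + part2)  # both should land near 0.710
```

With λ = 2 the simulated frequency and the closed-form value should agree to about three decimal places.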
 
The integral to evaluate is
$$\int_{t-1}^t f_{x_2}(u) \, du$$

(note that the variable of integration must be the same throughout: your line above uses y and du)
The density f_{x_2}(x) is zero if x < 0: if 0 < t < 1 then t - 1 < 0, so integrating from that lower limit is irrelevant.
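Concretely, filling in that step: for 0 <= t <= 1 the lower limit t - 1 is negative and the density vanishes there, so

$$\int_{t-1}^{t} f_{x_2}(u) \, du = \int_{t-1}^{0} 0 \, du + \int_{0}^{t} \lambda e^{-\lambda u} \, du = \int_{0}^{t} \lambda e^{-\lambda u} \, du,$$

while for t > 1 we have t - 1 >= 0, so the exponential formula applies on the whole range [t-1, t] and both limits stay.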
 
statdad said:
The integral to evaluate is
$$\int_{t-1}^t f_{x_2}(u) \, du$$

(note that the variable of integration must be the same throughout: your line above uses y and du)
The density f_{x_2}(x) is zero if x < 0: if 0 < t < 1 then t - 1 < 0, so integrating from that lower limit is irrelevant.
huh. What a simple explanation, and it makes perfect sense. Thank you!
 