Probability convolution problem

  1. Dec 3, 2014 #1
    So this is a probability question, and I am asked to find P(0.6 < Y <= 2.2)

    where Y = X1 + X2

    X1~U(0,1) and X2~Exp(2), and X1 and X2 are independent random variables. Our professor worked it out, but I do not understand his explanation. He starts by using the convolution formula:
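
    (As a quick sanity check on whatever final answer comes out, here is a minimal Monte Carlo sketch; it assumes the 2 in Exp(2) is the rate λ, so numpy's scale parameter is 1/2.)

[code]
import numpy as np

# Monte Carlo estimate of P(0.6 < Y <= 2.2) for Y = X1 + X2,
# with X1 ~ U(0,1) and X2 ~ Exp(rate = 2); numpy takes scale = 1/rate.
rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.uniform(0.0, 1.0, size=n)
x2 = rng.exponential(scale=0.5, size=n)
y = x1 + x2
print(np.mean((y > 0.6) & (y <= 2.2)))  # prints roughly 0.71
[/code]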


    $$f_y (t) = \int_{-\infty}^\infty f_{x_1}(u)f_{x_2}(t-u) \, du $$

    I know the density function (fx1) for a uniform distribution is 0 if u is less than 0 or greater than 1, so the integrand is 0 except when u is between 0 and 1 and the integral reduces to

    $$f_y (t) = \int_{0}^1 f_{x_1}(u)f_{x_2}(t-u) \, du $$, where fx1(u) is just 1 on that interval, so we just have

    $$f_y (t) = \int_{0}^1 f_{x_2}(t-u) \, du $$

    We substitute y = t-u, so du = -dy; since we substituted, the limits of integration change (u = 0 becomes y = t, and u = 1 becomes y = t-1), and now we have:

    $$f_y (t) = \int_{t-1}^t f_{x_2}(y) \, du $$

    Here $$f_{x_2}(y) = \begin{cases} 0 & \text{if $y < 0$} \\ \lambda e^{-\lambda y} & \text{if $y \ge 0$} \end{cases}$$ (because that's the density function of the exponential distribution). I understand everything up to this point, but then my professor

    "divides it into cases":

    for the case (0 <= t <= 1), he gets

    $$\int_{0}^t \lambda e^{-\lambda y} \, dy = 1-e^{-\lambda t }$$, again changing the limits of integration, and then for the case (t > 1),

    $$\int_{t-1}^t \lambda e^{-\lambda y} \, dy = e^{-\lambda (t-1) } - e^{-\lambda t } $$,

    and then solves the integrals from there. I have NO CLUE why he divided it into those "cases", or how he determined what the limits of integration should be for each case, so if anyone could help me (I know this was a long problem), please do! I understand that there may be other ways to do it, but I'm pretty sure our professor wants us to understand this approach for our exam, so if anyone understands exactly what he is doing and can help me, I'd really appreciate it.

    If it helps, this is what he did after dividing it into cases:

    so he ends up with $$f_y (t) = \begin{cases} 0 & \text{if $t < 0$} \\ 1-e^{-\lambda t } & \text{if $0 \le t \le 1$} \\ e^{-\lambda (t-1) } - e^{-\lambda t } & \text{if $t > 1$} \end{cases}$$
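
    (One quick way to check that this piecewise expression really is a density is to integrate it over all t and confirm the total is 1; a minimal sketch with scipy, again assuming λ = 2:)

[code]
import numpy as np
from scipy.integrate import quad

lam = 2.0
# integrate each piece of f_Y over its own range; the pieces should sum to 1
p1, _ = quad(lambda t: 1 - np.exp(-lam * t), 0, 1)
p2, _ = quad(lambda t: np.exp(-lam * (t - 1)) - np.exp(-lam * t), 1, np.inf)
print(p1 + p2)  # prints 1.0 up to numerical error
[/code]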


    so from there it goes:
    $$P(0.6 < Y \le 2.2) = \int_{0.6}^{2.2} f_y (t) \, dt = \int_{0.6}^{1} \left(1-e^{-\lambda t }\right) dt + \int_{1}^{2.2} \left(e^{-\lambda (t-1) } - e^{-\lambda t }\right) dt$$
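
    (Evaluating those two integrals numerically with λ = 2 gives roughly 0.71, which agrees with the Monte Carlo sketch above; for example:)

[code]
import numpy as np
from scipy.integrate import quad

lam = 2.0
# the two pieces of P(0.6 < Y <= 2.2), using the piecewise density above
a, _ = quad(lambda t: 1 - np.exp(-lam * t), 0.6, 1.0)
b, _ = quad(lambda t: np.exp(-lam * (t - 1)) - np.exp(-lam * t), 1.0, 2.2)
print(a + b)  # roughly 0.71
[/code]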

    I understand this part, but the part where he does the different "cases" is where I just lose it
     
  2. Dec 3, 2014 #2

    statdad
    Homework Helper

    The integral to evaluate is
    [tex] \int_{t-1}^t f_{x_2}(u) \, du [/tex]

    (note that the variable of integration must be the same throughout: your line above uses y and du)
    The density [itex] f_{x_2}(x)[/itex] is zero for x < 0. If [itex] 0 \le t \le 1 [/itex] then [itex] t - 1 \le 0 [/itex], so the part of the integral below 0 contributes nothing and the lower limit effectively becomes 0. If [itex] t > 1 [/itex], both limits are positive and the integral really does run from t - 1 to t.
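
    (In other words, the effective lower limit of the integral is max(t - 1, 0). A minimal numerical sketch comparing the raw convolution integral with the case-split closed form, assuming λ = 2:)

[code]
import numpy as np

lam = 2.0

def f_x2(x):
    # Exp(lam) density: zero for negative arguments
    return np.where(x >= 0, lam * np.exp(-lam * x), 0.0)

u = np.linspace(0.0, 1.0, 200_001)
for t in (0.5, 1.5):
    raw = f_x2(t - u).mean()                                   # crude numerical integral over u in [0, 1]
    split = np.exp(-lam * max(t - 1, 0.0)) - np.exp(-lam * t)  # closed form with lower limit max(t-1, 0)
    print(t, raw, split)                                       # the two values agree for each t
[/code]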
     
  3. Dec 3, 2014 #3
    Huh. What a simple explanation, and it makes perfect sense. Thank you!
     