1. The problem statement, all variables and given/known data

The random variables X_{1} and X_{2} are independent and identically distributed with common density f_{X}(x) = e^{-x} for x>0. Determine the distribution function for the random variable Y given by Y = X_{1} + X_{2}.

2. Relevant equations

Not sure. The question is from Ch4 of the book, and convolutions for computing distributions of sums of random variables aren't introduced until Ch8. The answer in the back of the book is F_{Y}(y) = 1-e^{-y}(1+y) for y>0.

3. The attempt at a solution

I tried every convolution formula I could find (I found a bunch of different variations, and I'm not sure which ones are equivalent or correct): the integral from zero to positive infinity of f(t)F(y-t)dt, of F(t)f(y-t)dt, of F(t)F(y-t)dt, and of f(t)f(y-t)dt. All of them leave me with an integrand containing e^{t}e^{-t}, which is constant, so integrating it out to infinity gives an infinite term. Am I solving the integral wrong, not setting it up right, or something else?
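(For what it's worth, the density convolution does work here once the limits are 0 to y rather than 0 to infinity, since f(t) needs t > 0 and f(y-t) needs y - t > 0. A quick symbolic sketch, assuming sympy is available:)

```python
import sympy as sp

t, y = sp.symbols('t y', positive=True)

# density-density convolution with limits 0..y (not 0..infinity):
# f(t) requires t > 0 and f(y - t) requires t < y
f_Y = sp.integrate(sp.exp(-t) * sp.exp(-(y - t)), (t, 0, y))

# integrate the density from 0 to y to recover the CDF
F_Y = sp.integrate(f_Y.subs(y, t), (t, 0, y))

print(sp.simplify(f_Y))  # equals y*e^{-y}
print(sp.simplify(F_Y))  # equals 1 - e^{-y}(1 + y), matching the book
```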
I don't know how your book is explaining convolutions, but this book does it very well (scroll down to page 291 and page 292 for the exponential distribution example): http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf That has everything you need.
You don't need to know the concept of convolution to solve this problem. You can find F_{Y}(y) using [tex]F_Y(y) = P(Y\leq y) = \iint\limits_{Y\leq y} f(x_1,x_2)\,dx_1\,dx_2[/tex] First, you should be able to tell us what the joint probability density f(x_{1},x_{2}) is equal to. Next, sketch the region that corresponds to 0≤Y≤y in the X_{1}X_{2} plane to figure out the limits on the double integral. Then integrate to get the answer.
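(If you want a sanity check on that double-integral recipe once you've worked out the region, here's a numerical sketch, assuming scipy is available; the limits used here are the ones the region sketch produces:)

```python
import numpy as np
from scipy.integrate import dblquad

def exp_sum_cdf(y):
    """P(X1 + X2 <= y): integrate the joint density e^{-x1 - x2}
    over the triangle 0 <= x2 <= y, 0 <= x1 <= y - x2."""
    # dblquad integrates func(inner, outer); here x1 is inner, x2 is outer
    val, _err = dblquad(lambda x1, x2: np.exp(-x1 - x2),
                        0, y,                  # outer variable x2
                        0, lambda x2: y - x2)  # inner variable x1
    return val

for y in (0.5, 1.0, 2.0):
    print(y, exp_sum_cdf(y), 1 - np.exp(-y) * (1 + y))
```

The two printed values agree for each y, matching the back-of-the-book answer F_{Y}(y) = 1-e^{-y}(1+y).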
I guess the problem, then, is that I don't understand how to get the joint density. Like I said, the back of the book has the answer; I just can't see how they get it. When I differentiate the distribution given in the answer, I get f(y) = ye^{-y}, and I'm not sure where that comes from. I don't see anything in this chapter of the book explaining how to get the joint density (though that doesn't mean it's not in there). I know my professor didn't cover it in his lectures, and he only loosely follows the book. He also doesn't look too closely at the problems he assigns out of the book to make sure they cover something he's explained. Can anyone point me at a resource where I can read up on how to do this?
I know that when two variables are independent, the product of their marginal densities is the joint density. Surely that can't be all I need to use in this case? For one thing, that gives me a joint density of e^{-y} instead of the ye^{-y} it looks like I'm supposed to be getting. For another, while it works nicely in the case of Y = X_{1} + X_{2}, I can think of a lot of densities and ways to combine X_{1} and X_{2} that would keep the product of their densities from being expressible in terms of Y.

Their independence also means their conditional densities are equal to their marginal densities (essentially another way of expressing the previous info, since the conditional density is just the joint divided by the marginal).

The expectation of Y should equal the sum of the expectations of the two Xs; useful for confirming when I've got the right answer, but not helpful for finding it. Ditto for their variances, and from there you can get the skew.

The moment generating function of Y should be the product of the moment generating functions of the Xs, which I suppose could help. X is an exponential distribution with [itex]\lambda = 1[/itex], so it should have moment generating function M_{X}(t) = 1/(1-t). The product of two such moment generating functions is the moment generating function of Y: M_{Y}(t) = [1/(1-t)]^{2}. From what I can see on Wikipedia, that could be the moment generating function of a gamma distribution. That doesn't quite fit with my book's description of the gamma distribution, though (though it DOES fit with the book's tendency to use special distributions from later chapters in the chapters on general theory).
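(The MGF reasoning above checks out numerically: M_{Y}(t) = [1/(1-t)]^{2} is the MGF of a gamma distribution with shape 2 and rate 1, whose CDF is exactly the book's answer. A quick Monte Carlo sketch, assuming numpy is available, comparing the empirical CDF of X_{1}+X_{2} against 1-e^{-y}(1+y):)

```python
import numpy as np

rng = np.random.default_rng(42)
# 200,000 simulated draws of Y = X1 + X2, with X1, X2 ~ Exponential(rate 1)
samples = rng.exponential(scale=1.0, size=(200_000, 2)).sum(axis=1)

for y in (0.5, 1.0, 2.0, 4.0):
    empirical = np.mean(samples <= y)        # fraction of draws with Y <= y
    book = 1 - np.exp(-y) * (1 + y)          # back-of-the-book CDF
    print(f"y={y}: empirical={empirical:.4f}, book={book:.4f}")
```

The empirical and book values agree to a couple of decimal places at this sample size.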
That's the joint density in terms of the X's, but you need the distribution in terms of Y. Right now, you're assuming dy=dx_{1}dx_{2}, but that's not true. That's why you need to integrate to find the cdf for Y. When you differentiate that, you'll get the pdf for Y.
Hmm. It's been 10 years since I took calculus; I can do basic integrals and differentiation, but figuring out how to set up some of this more complex stuff is something I probably forgot within 6 months of learning it. What you're saying is that the joint density of X_{1} and X_{2} is indeed e^{-x1}e^{-x2}, and I need to integrate that somehow to get Y's density? Or I need to integrate it somehow to get Y's distribution directly, which could then be differentiated to get Y's density if the problem asked for it? Can you point me somewhere online where I can read about this, preferably with at least one example and possibly some images? I know you're trying to prod me into figuring out the right answer on my own (I do the same when I'm tutoring people on subjects that actually make sense to me), but I don't know that I'm going to be able to figure it out from just your hints. I think I can make the gamma distribution shortcut work, since the assignment is due in about 12 hours and based on your post times I don't know that you'll respond before then. I'd still like to know how I was actually supposed to do it, though, and I know I'll have to figure that out for myself, since my professor never works the homework problems for us.
Looked at some double integrals of e^{-x1-x2}, since I think what you're saying is that to get the distribution, I need to integrate the X's joint density with respect to both x_{1} and x_{2}, with some term involving y (and, for the first x I integrate over, possibly the other x) in the limits of integration. Thanks to the back of the book, I know I'm supposed to end up with -e^{-y}(1+y)+C. My first thought was to integrate with respect to x_{1} first with limits 0 to y-x_{2}, but that gets me a sinh(y)-cosh(y) factor that won't disappear in the second integration, so that's probably out. If I set the limits of the first integration to 0 to y, I get e^{-x}(e^{-y}-1), and I don't think there are any limits I can plug in that would get me what I'm supposed to get, so I'm pretty sure that's not it either. I'm pretty sure the lower limit of integration in both integrals needs to be zero (since X only takes values greater than or equal to zero), but I'm having trouble finding a workable upper limit. The fact that the joint density integrated with respect to one of the two Xs gives sinh and cosh terms isn't helping. :grumpy:
Here, take a look at these lecture notes from Caltech, specifically page 4. http://ee162.caltech.edu/notes/lect8.pdf
The part about sketching the region for 0≤Y≤y in the X_{1}X_{2} plane isn't making sense to me. It seems like you would need a third dimension for Y (or y?) in addition to the X_{1}X_{2} plane. While I can sort of visualize it, and I think I know which region I need, I'm not at all sure how to describe it mathematically in terms of X_{1} and X_{2}.
I think you have it right already. Your limits for x_{1} are from 0 to y-x_{2}. What are your limits for x_{2}, from 0 to what? I just suggested drawing the sketch because it usually helps people see what the limits are, but you seem to have reasoned them out already. The picture is: in the x_{1}x_{2}-plane, the condition x_{1}+x_{2}≤y is represented by a half-plane whose boundary is the line x_{1}+x_{2}=y. Like you said in your other post, both X's are positive, so you're confined to the first quadrant. As long as y>0, you'll have a triangular region of integration, and you can see what the limits should be on the sketch.
Yeah, just worked it out again, integrating first with x_{1} going from 0 to y-x_{2} then with x_{2} going from 0 to infinity; didn't come out right so apparently that's not it, but I feel like I'm getting closer, and I figured out what I was doing wrong that was getting me those stupid hyperbolic trig functions.
So, integrate with respect to x_{1} first, from 0 to y-x_{2}, then integrate with respect to x_{2}, from 0 to y (since it can't possibly exceed y, as x_{1} will never be negative)? Edit: Hmm, tried that, got just 1-e^{-y}. Still not sure how to get the ye^{-y} in the 1-ye^{-y}-e^{-y} I know I'm supposed to be getting...
I vote yes. If you are doing it correctly, you are integrating one term that is [itex]e^{-y}[/itex] with respect to [itex]x_2[/itex]. That term is constant with respect to [itex]x_2[/itex], so its antiderivative is [itex]x_2 e^{-y}[/itex]. After that step, the limits of integration put [itex]y[/itex] back in for [itex]x_2[/itex], which is where the [itex]y e^{-y}[/itex] term comes from.
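(Those limits can also be checked symbolically. Here's a sympy sketch, assuming sympy is available, that carries out exactly the iterated integral described in this thread: inner x_{1} from 0 to y-x_{2}, outer x_{2} from 0 to y:)

```python
import sympy as sp

x1, x2, y = sp.symbols('x1 x2 y', positive=True)

# inner integral over x1 from 0 to y - x2, then outer over x2 from 0 to y
inner = sp.integrate(sp.exp(-x1) * sp.exp(-x2), (x1, 0, y - x2))
F_Y = sp.integrate(inner, (x2, 0, y))

print(sp.simplify(F_Y))  # equals 1 - e^{-y}(1 + y)
```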
OK, let's see if we can find where my rusty calculus skills betrayed me. (Going to avoid using LaTeX for the integrals since it seems to come out weird when I preview it.)

joint density: e^{-x1}e^{-x2}

we want:
integral(0<=x_{2}<=y) of e^{-x2} times ( integral(0<=x_{1}<=y-x_{2}) of e^{-x1}dx_{1} ) dx_{2}

the inner integral resolves to:
( -e^{-t} ) evaluated from t=0 to t=y-x2

which gives:
integral(0<=x_{2}<=y) of e^{-x2}(1-e^{y}e^{-x2})dx_{2}

we can distribute that out to:
( integral(0<=x_{2}<=y) of e^{-x2}dx_{2} ) - ( e^{y} * integral(0<=x_{2}<=y) of e^{-x2}e^{-x2}dx_{2} )

after we integrate we get:
( -e^{-t} ) from t=0 to t=y - e^{y}( (-1/2)e^{-2t} ) from t=0 to t=y

which gives:
1-e^{-y} + (1/2)e^{y}(1-e^{-2y})

a little quick distribution turns that into:
1 - e^{-y} + (1/2)(e^{y}-e^{-y})

which finally gives us:
1 + (1/2)e^{y} - (3/2)e^{-y}

...not what we're supposed to get.
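(If it helps pin down the slip: the inner integral should come out as 1 - e^{-(y-x2)} = 1 - e^{-y}e^{x2}, whereas the step above has 1 - e^{y}e^{-x2}, with the signs in the two exponents swapped. A quick numerical check at one sample point, assuming scipy is available:)

```python
import numpy as np
from scipy.integrate import quad

y, x2 = 1.5, 0.4

# the inner integral: integral of e^{-x1} dx1 from 0 to y - x2
numeric, _err = quad(lambda x1: np.exp(-x1), 0, y - x2)

correct = 1 - np.exp(-(y - x2))          # 1 - e^{-y} e^{x2}
as_posted = 1 - np.exp(y) * np.exp(-x2)  # 1 - e^{y} e^{-x2}, from the post above

print(numeric, correct, as_posted)
```

The numerical value matches the first expression and is nowhere near the second, which is where the stray e^{y} factor (and the eventual 1 + (1/2)e^{y} - (3/2)e^{-y}) comes from.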