Physics Forums


ObliviousSage Apr11-11 10:36 AM

Sum of Identically Distributed Independent Random Variables
 
1. The problem statement, all variables and given/known data

The random variables X1 and X2 are independent and identically distributed with common density [itex]f_X(x) = e^{-x}[/itex] for [itex]x>0[/itex]. Determine the distribution function for the random variable Y given by Y = X1 + X2.

2. Relevant equations

Not sure. Question is from Ch4 of the book, and convolutions for computing distributions of sums of random variables aren't introduced until Ch8.

Answer in the back of the book is [itex]F_Y(y) = 1-e^{-y}(1+y)[/itex] for [itex]y>0[/itex].

3. The attempt at a solution

Tried every convolution formula I could find (I found a bunch of different variations, and I'm not sure which ones are equivalent or correct): the integral from zero to positive infinity of [itex]f(t)F(y-t)\,dt[/itex], of [itex]F(t)f(y-t)\,dt[/itex], of [itex]F(t)F(y-t)\,dt[/itex], and of [itex]f(t)f(y-t)\,dt[/itex]. All of them end up with an [itex]\int e^{t}e^{-t}\,dt[/itex], which gives me an infinity term.

Am I solving the integral wrong or not setting it up right or something else?

Tangent87 Apr11-11 11:41 AM

Re: Sum of Identically Distributed Independent Random Variables
 
I don't know how your book is explaining convolutions, but this book does it very well (scroll down to page 291 and page 292 for the exponential distribution example):

http://www.dartmouth.edu/~chance/tea...k/Chapter7.pdf

That has everything you need.
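
If it helps in the meantime: the exponential example in that chapter (if I remember it right) sets the convolution limits to 0 and y, because both densities vanish for negative arguments. A sketch of that computation:

[tex]f_Y(y) = \int_0^y f_X(t)f_X(y-t)\,dt = \int_0^y e^{-t}e^{-(y-t)}\,dt = e^{-y}\int_0^y dt = ye^{-y}, \qquad y>0[/tex]

Integrating from 0 to infinity instead is exactly what produces that divergent [itex]\int e^{t}e^{-t}\,dt[/itex] term.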

vela Apr11-11 12:12 PM

Re: Sum of Identically Distributed Independent Random Variables
 
You don't need to know the concept of convolution to solve this problem.

You can find FY(y) using

[tex]F_Y(y) = P(Y\leq y) = \iint\limits_{Y\leq y} f(x_1,x_2)\,dx_1\,dx_2[/tex]

First, you should be able to tell us what the joint probability density [itex]f(x_1,x_2)[/itex] is equal to. Next, sketch the region that corresponds to [itex]0 \leq Y \leq y[/itex] in the [itex]x_1x_2[/itex]-plane to figure out the limits on the double integral. Then integrate to get the answer.

ObliviousSage Apr13-11 10:14 AM

Re: Sum of Identically Distributed Independent Random Variables
 
I guess the problem, then, is that I don't understand how to get the joint density. Like I said, the back of the book has the answer, I just can't see how they get it. When I differentiate the distribution function given, I get [itex]f(y) = ye^{-y}[/itex], and I'm not sure where that comes from.

I don't see anything in this chapter of the book explaining how to get the joint density (though that doesn't mean it's not in there). I know my professor didn't cover it in his lectures, and he only loosely follows the book. He also doesn't look too closely at the problems he assigns out of the book to make sure they cover something he's explained.

Can anyone point me at a resource where I can read up on how to do this?

vela Apr13-11 11:57 AM

Re: Sum of Identically Distributed Independent Random Variables
 
The key is that the two random variables are independent. What does that mean mathematically?

ObliviousSage Apr13-11 07:59 PM

Re: Sum of Identically Distributed Independent Random Variables
 
I know that when two variables are independent, the product of their marginal densities is the joint density. Surely that can't be all I need to use in this case?

For one thing, that gives me a joint density of [itex]e^{-y}[/itex] instead of the [itex]ye^{-y}[/itex] it looks like I'm supposed to be getting. For another, while it works nicely in the case of Y = X1 + X2, I can think of a lot of densities and ways to combine X1 and X2 that would keep the product of their densities from being expressed in terms of Y.

Their independence also means their conditional densities are equal to their marginal densities (essentially another way of expressing the previous info, since the conditional density is just the joint divided by the marginal). The arithmetic expectation of Y should equal the sum of the arithmetic expectations of the two Xs; useful for confirming when I've got the right answer, but not helpful for finding it. Ditto for their variance, and from there you can get skew.

The moment generating function of Y should be the product of the moment generating functions of the Xs, which I suppose could help. X is an exponential distribution with [itex]\lambda = 1[/itex], so it should have moment generating function [itex]M_X(t) = 1/(1-t)[/itex]. The product of two such moment generating functions is the moment generating function of Y: [itex]M_Y(t) = [1/(1-t)]^2[/itex]. From what I can see on Wikipedia, that could be the moment generating function for a gamma distribution. That doesn't quite fit with my book's description of the gamma distribution (though it DOES fit with the book's tendency to use special distributions from later chapters in the chapters on general theory).
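
As a sanity check on the gamma guess (assuming the usual parameterization [itex]f(y) = \lambda^{\alpha}y^{\alpha-1}e^{-\lambda y}/\Gamma(\alpha)[/itex]): with [itex]\alpha = 2[/itex] and [itex]\lambda = 1[/itex] the density would be [itex]ye^{-y}[/itex], and its moment generating function does come out to the right thing:

[tex]M_Y(t) = \int_0^\infty e^{ty}\,ye^{-y}\,dy = \int_0^\infty ye^{-(1-t)y}\,dy = \frac{1}{(1-t)^2}, \qquad t<1[/tex]

So if the MGF shortcut is legitimate here, Y should be Gamma(2, 1).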

vela Apr13-11 09:42 PM

Re: Sum of Identically Distributed Independent Random Variables
 
That's the joint density in terms of the Xs, but you need the distribution in terms of Y. Right now, you're assuming [itex]dy = dx_1\,dx_2[/itex], but that's not true. That's why you need to integrate to find the cdf for Y. When you differentiate that, you'll get the pdf for Y.

ObliviousSage Apr13-11 11:06 PM

Re: Sum of Identically Distributed Independent Random Variables
 
Hmm. It's been 10 years since I took calculus; I can do basic integrals and differentiation, but figuring out how to set up some of this more complex stuff is something I probably forgot how to do within 6 months of learning it.

What you're saying is that the joint density of X1 and X2 is indeed [itex]e^{-(x_1+x_2)}[/itex], and I need to integrate that somehow to get Y's density? Or I need to integrate it somehow to get Y's distribution directly, which could then be differentiated to get Y's density if the problem asked for it?

Can you point me to somewhere online where I can read about this, preferably with at least one example and possibly some images? I know you're trying to prod me into figuring out the right answer on my own (I do the same when I'm tutoring people on subjects that actually make sense to me :rolleyes:), but I don't know that I'm going to be able to figure it out from just your hints.

I think I can make the gamma distribution shortcut work, since it's due in about 12 hours and based on your post times I don't know that you'll respond before then. I'd still like to know how I was actually supposed to do it, though, and I know I'll have to figure that out for myself since my professor never works the homework problems for us.

ObliviousSage Apr13-11 11:44 PM

Re: Sum of Identically Distributed Independent Random Variables
 
Looked at some double integrals of [itex]e^{-(x_1+x_2)}[/itex], since I think what you're saying is that to get the distribution, I need to integrate the Xs' joint density with respect to both [itex]x_1[/itex] and [itex]x_2[/itex], with some term including y (and, in the case of the first x I integrate with respect to, possibly the other x) in the limits of integration in at least one case.

Thanks to the back of the book, I know I'm supposed to get [itex]-e^{-y}(1+y)+C[/itex].

My first thought was to integrate first with respect to [itex]x_1[/itex] and set the limits of integration to 0 to [itex]y-x_2[/itex], but that gets me a [itex]\sinh(y)-\cosh(y)[/itex] factor that won't disappear in the second integration, so that's probably out.

If I set the limits of the first integration to 0 to y, I get [itex]e^{-x}(e^{-y}-1)[/itex], and I don't think there are any limits I can plug in that would get me what I'm supposed to get, so I'm pretty sure that's not it.

I'm pretty sure the lower limit of integration in both integrals needs to be zero (since X only takes values greater than or equal to zero), but I'm having trouble finding a workable upper limit. The fact that the joint density integrated with respect to one of the two Xs gives sinh and cosh terms isn't helping. :grumpy:

vela Apr13-11 11:55 PM

Re: Sum of Identically Distributed Independent Random Variables
 
Here, take a look at these lecture notes from Caltech, specifically page 4.

http://ee162.caltech.edu/notes/lect8.pdf

ObliviousSage Apr13-11 11:59 PM

Re: Sum of Identically Distributed Independent Random Variables
 
Quote:

Quote by vela (Post 3240976)
You don't need to know the concept of convolution to solve this problem.

You can find FY(y) using

[tex]F_Y(y) = P(Y\leq y) = \iint\limits_{Y\leq y} f(x_1,x_2)\,dx_1\,dx_2[/tex]

First, you should be able to tell us what the joint probability density [itex]f(x_1,x_2)[/itex] is equal to. Next, [b]sketch the region that corresponds to [itex]0 \leq Y \leq y[/itex] in the [itex]x_1x_2[/itex]-plane[/b] to figure out the limits on the double integral. Then integrate to get the answer.

The bolded isn't making sense to me. It seems like you would need a 3rd dimension for Y (or y?) in addition to the [itex]x_1x_2[/itex]-plane. While I can sort of visualize it, and I think I know which region I need, I'm not at all sure how to describe it mathematically in terms of X1 and X2.

vela Apr14-11 12:00 AM

Re: Sum of Identically Distributed Independent Random Variables
 
Quote:

Quote by ObliviousSage (Post 3245814)
Looked at some double integrals of [itex]e^{-(x_1+x_2)}[/itex], since I think what you're saying is that to get the distribution, I need to integrate the Xs' joint density with respect to both [itex]x_1[/itex] and [itex]x_2[/itex], with some term including y (and, in the case of the first x I integrate with respect to, possibly the other x) in the limits of integration in at least one case.

Thanks to the back of the book, I know I'm supposed to get [itex]-e^{-y}(1+y)+C[/itex].

My first thought was to integrate first with respect to [itex]x_1[/itex] and set the limits of integration to 0 to [itex]y-x_2[/itex], but that gets me a [itex]\sinh(y)-\cosh(y)[/itex] factor that won't disappear in the second integration, so that's probably out.

If I set the limits of the first integration to 0 to y, I get [itex]e^{-x}(e^{-y}-1)[/itex], and I don't think there are any limits I can plug in that would get me what I'm supposed to get, so I'm pretty sure that's not it.

I'm pretty sure the lower limit of integration in both integrals needs to be zero (since X only takes values greater than or equal to zero), but I'm having trouble finding a workable upper limit. The fact that the joint density integrated with respect to one of the two Xs gives sinh and cosh terms isn't helping. :grumpy:

It sounds like you have the right idea, but you're just having problems with the execution.

vela Apr14-11 12:02 AM

Re: Sum of Identically Distributed Independent Random Variables
 
Quote:

Quote by ObliviousSage (Post 3245830)
The bolded isn't making sense to me. It seems like you would need a 3rd dimension for Y (or y?) in addition to the [itex]x_1x_2[/itex]-plane. While I can sort of visualize it, and I think I know which region I need, I'm not at all sure how to describe it mathematically in terms of X1 and X2.

I think you have it right already. Your limits for [itex]x_1[/itex] are from 0 to [itex]y-x_2[/itex]. What are your limits for [itex]x_2[/itex], from 0 to what?

I just suggested drawing the sketch because it usually helps people see what the limits are, but you seem to have reasoned them out already. The picture is: in the [itex]x_1x_2[/itex]-plane, the condition [itex]x_1+x_2 \leq y[/itex] is represented by a half-plane whose boundary is the line [itex]x_1+x_2 = y[/itex]. Like you said in your other post, both Xs are positive, so you're confining yourself to the first quadrant. As long as [itex]y>0[/itex], you'll have a triangular region of integration, and you can see what the limits should be on the sketch.

ObliviousSage Apr14-11 12:17 AM

Re: Sum of Identically Distributed Independent Random Variables
 
Quote:

Quote by vela (Post 3245833)
I think you have it right already. Your limits for [itex]x_1[/itex] are from 0 to [itex]y-x_2[/itex]. What are your limits for [itex]x_2[/itex], from 0 to what?

I just suggested drawing the sketch because it usually helps people see what the limits are, but you seem to have reasoned them out already. The picture is: in the [itex]x_1x_2[/itex]-plane, the condition [itex]x_1+x_2 \leq y[/itex] is represented by a half-plane whose boundary is the line [itex]x_1+x_2 = y[/itex]. Like you said in your other post, both Xs are positive, so you're confining yourself to the first quadrant. As long as [itex]y>0[/itex], you'll have a triangular region of integration, and you can see what the limits should be on the sketch.

Yeah, just worked it out again, integrating first with [itex]x_1[/itex] going from 0 to [itex]y-x_2[/itex], then with [itex]x_2[/itex] going from 0 to infinity. It didn't come out right, so apparently that's not it, but I feel like I'm getting closer, and I figured out what I was doing wrong that was getting me those stupid hyperbolic trig functions.

vela Apr14-11 12:31 AM

Re: Sum of Identically Distributed Independent Random Variables
 
If [itex]x_1+x_2 < y[/itex], [itex]x_2[/itex] can't go to infinity for any finite value of y, right?

ObliviousSage Apr14-11 12:38 AM

Re: Sum of Identically Distributed Independent Random Variables
 
Quote:

Quote by vela (Post 3245859)
If [itex]x_1+x_2 < y[/itex], [itex]x_2[/itex] can't go to infinity for any finite value of y, right?

So, integrate with respect to [itex]x_1[/itex] first, from 0 to [itex]y-x_2[/itex], then integrate with respect to [itex]x_2[/itex], from 0 to y (since it can't possibly exceed y, as [itex]x_1[/itex] will never be negative)?

Edit: Hmm, tried that, got just [itex]1-e^{-y}[/itex]. Still not sure how to get the [itex]ye^{-y}[/itex] in the [itex]1-ye^{-y}-e^{-y}[/itex] I know I'm supposed to be getting...

vela Apr14-11 12:51 AM

Re: Sum of Identically Distributed Independent Random Variables
 
Those are the correct limits.

What do you have after integrating with respect to x1?
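
For reference, a sketch of what the inner integration should give (so you can check your intermediate result):

[tex]\int_0^{y-x_2} e^{-x_1}\,dx_1 = 1 - e^{-(y-x_2)}, \qquad \left(1 - e^{-(y-x_2)}\right)e^{-x_2} = e^{-x_2} - e^{-y}[/tex]

If you ended up with just [itex]1-e^{-y}[/itex], one of those two terms probably got dropped in the outer integration.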

Stephen Tashi Apr14-11 01:03 AM

Re: Sum of Identically Distributed Independent Random Variables
 
Quote:

Quote by ObliviousSage (Post 3245868)
So, integrate with respect to [itex]x_1[/itex] first, from 0 to [itex]y-x_2[/itex], then integrate with respect to [itex]x_2[/itex], from 0 to y (since it can't possibly exceed y, as [itex]x_1[/itex] will never be negative)?

I vote yes.

Quote:

Still not sure how to get the [itex]ye^{-y}[/itex]
If you are doing it correctly, you are integrating one term that is [itex]e^{-y}[/itex] with respect to [itex]x_2[/itex]. That term is constant with respect to [itex]x_2[/itex], so its antiderivative is [itex]x_2 e^{-y}[/itex]. After that step, the limits of integration put [itex]y[/itex] back in for [itex]x_2[/itex].
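
Putting all the steps together, the computation should read:

[tex]F_Y(y) = \int_0^y\!\int_0^{y-x_2} e^{-x_1}e^{-x_2}\,dx_1\,dx_2 = \int_0^y \left(e^{-x_2} - e^{-y}\right)dx_2 = \left(1 - e^{-y}\right) - ye^{-y} = 1 - e^{-y}(1+y)[/tex]

for [itex]y>0[/itex], which matches the book's answer.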

