Sum of Identically Distributed Independent Random Variables

In summary, the random variable Y, the sum of independent and identically distributed random variables X1 and X2, has distribution function FY(y) = 1 - e^(-y)(1 + y) for y > 0. To determine this, one can use the joint probability density function f(x1,x2) = e^(-(x1+x2)) and integrate over the region corresponding to 0≤Y≤y in the X1X2 plane. Alternatively, one can multiply the moment generating functions of X1 and X2 to obtain the moment generating function of Y and use that to identify the distribution. It is also possible to use a convolution, but that is not necessary in this case.
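As a sanity check on that closed form, here is a minimal simulation sketch (an editorial addition, not from the thread; it uses only Python's standard library) that draws pairs of Exponential(1) variates and compares the empirical CDF of their sum to 1 - e^(-y)(1 + y):

```python
import math
import random

def cdf_sum_two_exponentials(y):
    """Closed-form CDF of Y = X1 + X2 for iid Exponential(1): F_Y(y) = 1 - e^(-y)(1 + y)."""
    return 1.0 - math.exp(-y) * (1.0 + y) if y > 0 else 0.0

random.seed(0)
n = 200_000
# Each sample of Y is the sum of two independent Exponential(1) draws.
samples = [random.expovariate(1.0) + random.expovariate(1.0) for _ in range(n)]

for y in (0.5, 1.0, 2.0, 5.0):
    empirical = sum(s <= y for s in samples) / n
    print(f"y={y}: empirical={empirical:.4f}, closed form={cdf_sum_two_exponentials(y):.4f}")
```

With 200,000 samples the empirical and closed-form values should agree to roughly two decimal places.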
  • #1
ObliviousSage

Homework Statement



The random variables X1 and X2 are independent and identically distributed with common density fX(x) = e^(-x) for x>0. Determine the distribution function for the random variable Y given by Y = X1 + X2.

Homework Equations



Not sure. Question is from Ch4 of the book, and convolutions for computing distributions of sums of random variables aren't introduced until Ch8.

Answer in the back of the book is FY(y) = 1 - e^(-y)(1+y) for y>0.

The Attempt at a Solution



Tried every convolution formula I could find (I found a bunch of different variations, and I'm not sure which ones are equivalent or correct): the integral from zero to positive infinity of f(t)F(y-t)dt, F(t)f(y-t)dt, F(t)F(y-t)dt, and f(t)f(y-t)dt. All of them end up with a [tex]\int e^{t}e^{-t}\,dt[/tex] term, which gives me an infinity term.

Am I solving the integral wrong or not setting it up right or something else?
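For what it's worth, the convolution attempts above fail only because of the limits: for densities supported on the positive reals, f(t) and f(y-t) are both nonzero only when 0 < t < y, so the integral should run from 0 to y, not 0 to infinity. A small numerical sketch of the corrected setup (an editorial addition, standard library only):

```python
import math

def f(x):
    """Common density of X1 and X2: Exponential(1)."""
    return math.exp(-x) if x > 0 else 0.0

def density_of_sum(y, steps=10_000):
    """Numerically evaluate the convolution f_Y(y) = integral from 0 to y of f(t) f(y-t) dt
    using the midpoint rule."""
    h = y / steps
    return sum(f((k + 0.5) * h) * f(y - (k + 0.5) * h) for k in range(steps)) * h

for y in (0.5, 1.0, 2.0):
    print(y, density_of_sum(y), y * math.exp(-y))
```

With those limits the integrand e^(-t) e^(-(y-t)) is constant in t, so the convolution collapses to y e^(-y), which matches the density obtained by differentiating the book's answer.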
 
  • #2
I don't know how your book is explaining convolutions, but this book does it very well (scroll down to page 291 and page 292 for the exponential distribution example):

http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf

That has everything you need.
 
  • #3
You don't need to know the concept of convolution to solve this problem.

You can find FY(y) using

[tex]F_Y(y) = P(Y\leq y) = \iint\limits_{Y\leq y} f(x_1,x_2)\,dx_1\,dx_2[/tex]

First, you should be able to tell us what the joint probability density f(x1,x2) is equal to. Next, sketch the region that corresponds to 0≤Y≤y in the X1X2 plane to figure out the limits on the double integral. Then integrate to get the answer.
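This prescription can be checked numerically. The sketch below (an editorial addition, standard library only) approximates the double integral of the joint density e^(-(x1+x2)) over the region x1 + x2 <= y in the first quadrant with a midpoint rule on a grid, and compares the result to the book's answer:

```python
import math

def joint_density(x1, x2):
    """Joint density of independent Exponential(1) variables: e^(-x1) * e^(-x2)."""
    return math.exp(-(x1 + x2))

def cdf_by_double_integral(y, n=400):
    """Midpoint-rule approximation of the double integral of the joint density
    over {(x1, x2) : x1 >= 0, x2 >= 0, x1 + x2 <= y}."""
    h = y / n
    total = 0.0
    for i in range(n):
        x1 = (i + 0.5) * h
        for j in range(n):
            x2 = (j + 0.5) * h
            if x1 + x2 <= y:  # keep only grid cells inside the triangular region
                total += joint_density(x1, x2)
    return total * h * h

y = 2.0
print(cdf_by_double_integral(y), 1 - math.exp(-y) * (1 + y))
```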
 
  • #4
I guess the problem, then, is that I don't understand how to get the joint density. Like I said, the back of the book has the answer, I just can't see how they get it. When I differentiate the distribution given in the back, I get f(y) = y e^(-y), and I'm not sure where that comes from.

I don't see anything in this chapter of the book explaining how to get the joint density (though that doesn't mean it's not in there). I know my professor didn't cover it in his lectures, and he only loosely follows the book. He also doesn't look too closely at the problems he assigns out of the book to make sure they cover something he's explained.

Can anyone point me at a resource where I can read up on how to do this?
 
  • #5
The key is that the two random variables are independent. What does that mean mathematically?
 
  • #6
I know that when two variables are independent, the product of their marginal densities is the joint density. Surely that can't be all I need to use in this case?

For one thing, that gives me a joint density of e^(-y) instead of the y e^(-y) it looks like I'm supposed to be getting. For another, while it works nicely in the case of Y = X1 + X2, I can think of a lot of densities and ways to combine X1 and X2 that would keep the product of their densities from being expressible in terms of Y.

Their independence also means their conditional densities are equal to their marginal densities (essentially another way of expressing the previous info, since the conditional density is just the joint divided by the marginal). The expectation of Y should equal the sum of the expectations of the two Xs; useful for confirming when I've got the right answer, but not helpful for finding it. Ditto for their variances, and from there you can get skew.

The moment generating function of Y should be the product of the moment generating functions of the Xs, which I suppose could help. X is an exponential distribution with [tex]\lambda[/tex] = 1. Thus it should have a moment generating function of MX(t) = 1/(1-t). The product of two such moment generating functions is the moment generating function of Y: MY(t) = [1/(1-t)]^2. From what I can see on Wikipedia, that could be the moment generating function for a gamma distribution. That doesn't quite fit with my book's description of the gamma distribution, though (though it DOES fit with the book's tendency to use special distributions from later chapters in the chapters on general theory).
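The gamma guess can be checked numerically: 1/(1-t)^2 is the MGF of a gamma distribution with shape 2 and rate 1, whose density is y e^(-y). The sketch below (an editorial addition; the truncation point 60 and the step count are arbitrary choices) integrates e^(ty) against that density and compares to 1/(1-t)^2 for a few values of t < 1:

```python
import math

def gamma_2_1_density(y):
    """Gamma(shape=2, rate=1) density, the conjectured density of Y."""
    return y * math.exp(-y)

def mgf_numeric(t, upper=60.0, steps=200_000):
    """M_Y(t) = integral from 0 to infinity of e^(t y) f_Y(y) dy,
    truncated at `upper` and evaluated by the midpoint rule (valid for t < 1)."""
    h = upper / steps
    return sum(math.exp(t * (k + 0.5) * h) * gamma_2_1_density((k + 0.5) * h)
               for k in range(steps)) * h

for t in (-1.0, 0.0, 0.5):
    print(t, mgf_numeric(t), 1.0 / (1.0 - t) ** 2)
```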
 
  • #7
That's the joint density in terms of the X's, but you need the distribution in terms of Y. Right now, you're assuming dy=dx1dx2, but that's not true. That's why you need to integrate to find the cdf for Y. When you differentiate that, you'll get the pdf for Y.
 
  • #8
Hmm. It's been 10 years since I took calculus; I can do basic integrals and differentiation, but figuring out how to set up some of this more complex stuff is something I probably forgot how to do within 6 months of learning it.

What you're saying is that the joint density of X1 and X2 is indeed e^(-(x1+x2)), and I need to integrate that somehow to get Y's density? Or I need to integrate it somehow to get Y's distribution directly, which could then be differentiated to get Y's density if the problem asked for it?

Can you point me to somewhere online where I can read about this, preferably with at least one example and possibly some images? I know you're trying to prod me into figuring out the right answer on my own (I do the same when I'm tutoring people on subjects that actually make sense to me :rolleyes:), but I don't know that I'm going to be able to figure it out from just your hints.

I think I can make the gamma distribution shortcut work, since it's due in about 12 hours and based on your post times I don't know that you'll respond before then. I'd still like to know how I was actually supposed to do it, though, and I know I'll have to figure that out for myself since my professor never works the homework problems for us.
 
  • #9
Looked at some double integrals of e^(-(x1+x2)), since I think what you're saying is that to get the distribution, I need to integrate the Xs' joint density with respect to both x1 and x2, with some term including y (and, in the case of the first x I integrate with respect to, possibly the other x) in the limits of integration in at least one case.

Thanks to the back of the book, I know I'm supposed to get -e^(-y)(1+y)+C.

My first thought was to first integrate with respect to x1 and set the limits of integration to 0 to y-x2, but that gets me a sinh(y)-cosh(y) factor that won't disappear in the second integration, so that's probably out.

If I set the limits of the first integration to 0 to y, I get e^(-x)(e^(-y)-1), and I don't think there are any limits I can plug in that would get me what I'm supposed to get, so I'm pretty sure that's not it.

I'm pretty sure the lower limit of integration in both integrals needs to be zero (since X only takes values greater than or equal to zero), but I'm having trouble finding a workable upper limit. The fact that the joint density integrated with respect to one of the two Xs gives sinh and cosh terms isn't helping.
 
  • #11
vela said:
You don't need to know the concept of convolution to solve this problem.

You can find FY(y) using

[tex]F_Y(y) = P(Y\leq y) = \iint\limits_{Y\leq y} f(x_1,x_2)\,dx_1\,dx_2[/tex]

First, you should be able to tell us what the joint probability density f(x1,x2) is equal to. Next, sketch the region that corresponds to 0≤Y≤y in the X1X2 plane to figure out the limits on the double integral. Then integrate to get the answer.

The bolded part (sketching the region corresponding to 0≤Y≤y in the X1X2 plane) isn't making sense to me. It seems like you would need a 3rd dimension for Y (or y?) in addition to the X1X2 plane. While I can sort of visualize it, and I think I know which region I need, I'm not at all sure how to describe it mathematically in terms of X1 and X2.
 
  • #12
ObliviousSage said:
Looked at some double integrals of e^(-(x1+x2)), since I think what you're saying is that to get the distribution, I need to integrate the Xs' joint density with respect to both x1 and x2, with some term including y (and, in the case of the first x I integrate with respect to, possibly the other x) in the limits of integration in at least one case.

Thanks to the back of the book, I know I'm supposed to get -e^(-y)(1+y)+C.

My first thought was to first integrate with respect to x1 and set the limits of integration to 0 to y-x2, but that gets me a sinh(y)-cosh(y) factor that won't disappear in the second integration, so that's probably out.

If I set the limits of the first integration to 0 to y, I get e^(-x)(e^(-y)-1), and I don't think there are any limits I can plug in that would get me what I'm supposed to get, so I'm pretty sure that's not it.

I'm pretty sure the lower limit of integration in both integrals needs to be zero (since X only takes values greater than or equal to zero), but I'm having trouble finding a workable upper limit. The fact that the joint density integrated with respect to one of the two Xs gives sinh and cosh terms isn't helping.
It sounds like you have the right idea, but you're just having problems with the execution.
 
  • #13
ObliviousSage said:
The bolded part (sketching the region corresponding to 0≤Y≤y in the X1X2 plane) isn't making sense to me. It seems like you would need a 3rd dimension for Y (or y?) in addition to the X1X2 plane. While I can sort of visualize it, and I think I know which region I need, I'm not at all sure how to describe it mathematically in terms of X1 and X2.
I think you have it right already. Your limits for x1 are from 0 to y-x2. What are your limits for x2, from 0 to what?

I just suggested drawing the sketch because it usually helps people see what the limits are, but you seem to have reasoned them out already. The picture is: in the x1x2-plane, the condition x1+x2<=y is represented by a half plane with the boundary being the line x1+x2=y. Like you said in your other post, both X's are positive, so you're confining yourself to the first quadrant. As long as y>0, you'll have a triangular region of integration, and you can see what the limits should be on the sketch.
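In code, the triangular region translates directly into iterated limits: x1 from 0 to y-x2 on the inside, x2 from 0 to y on the outside. A midpoint-rule sketch of exactly that setup (an editorial addition, standard library only):

```python
import math

def inner_integral(y, x2, steps=1_000):
    """Inner integral over x1: integral from 0 to y-x2 of e^(-x1) dx1, by the midpoint rule."""
    upper = y - x2
    h = upper / steps
    return sum(math.exp(-(k + 0.5) * h) for k in range(steps)) * h

def cdf_iterated(y, steps=1_000):
    """Outer integral over x2: integral from 0 to y of e^(-x2) * inner_integral(y, x2) dx2."""
    h = y / steps
    return sum(math.exp(-(k + 0.5) * h) * inner_integral(y, (k + 0.5) * h)
               for k in range(steps)) * h

y = 1.5
print(cdf_iterated(y), 1 - math.exp(-y) * (1 + y))
```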
 
  • #14
vela said:
I think you have it right already. Your limits for x1 are from 0 to y-x2. What are your limits for x2, from 0 to what?

I just suggested drawing the sketch because it usually helps people see what the limits are, but you seem to have reasoned them out already. The picture is: in the x1x2-plane, the condition x1+x2<=y is represented by a half plane with the boundary being the line x1+x2=y. Like you said in your other post, both X's are positive, so you're confining yourself to the first quadrant. As long as y>0, you'll have a triangular region of integration, and you can see what the limits should be on the sketch.

Yeah, just worked it out again, integrating first with x1 going from 0 to y-x2, then with x2 going from 0 to infinity; it didn't come out right, so apparently that's not it, but I feel like I'm getting closer, and I figured out what I was doing wrong that was getting me those stupid hyperbolic trig functions.
 
  • #15
If x1+x2<y, x2 can't go to infinity for any finite value of y, right?
 
  • #16
vela said:
If x1+x2<y, x2 can't go to infinity for any finite value of y, right?

So, integrate with respect to x1 first, from 0 to y-x2, then integrate with respect to x2, from 0 to y (since it can't possibly exceed y, as x1 will never be negative)?

Edit: Hmm, tried that, got just 1 - e^(-y). Still not sure how to get the y e^(-y) in the 1 - y e^(-y) - e^(-y) I know I'm supposed to be getting...
 
  • #17
Those are the correct limits.

What do you have after integrating with respect to x1?
 
  • #18
ObliviousSage said:
So, integrate with respect to x1 first, from 0 to y-x2, then integrate with respect to x2, from 0 to y (since it can't possibly exceed y, as x1 will never be negative)?
I vote yes.

Still not sure how to get the y e^(-y)
If you are doing it correctly, you are integrating one term that is [itex]e^{-y}[/itex] with respect to [itex]x_2[/itex]. That term is constant with respect to [itex]x_2[/itex], so its antiderivative is [itex]x_2 e^{-y}[/itex]. After that step, the limits of integration put [itex]y[/itex] back in for [itex]x_2[/itex].
 
  • #19
OK, let's see if we can find where my rusty calculus skills betrayed me.

(going to avoid using latex to put in the integral since it seems to come out weird when I preview it)

joint density = e^(-x1) e^(-x2)

we want: integral(0<=X2<=y) of e^(-x2) times ( integral(0<=X1<=y-x2) of e^(-x1) dx1 ) times dx2

that resolves to: integral(0<=X2<=y) of e^(-x2) times ( -e^(-t), evaluated from t=0 to t=y-x2 ) times dx2

which resolves to: integral(0<=X2<=y) of e^(-x2)(1 - e^y e^(-x2)) dx2

we can distribute that out to: ( integral(0<=X2<=y) of e^(-x2) dx2 ) - ( e^y * integral(0<=X2<=y) of e^(-x2) e^(-x2) dx2 )

after we integrate we get: ( -e^(-t), evaluated from t=0 to t=y ) - e^y( (-1/2)e^(-2t), evaluated from t=0 to t=y )

which gives: 1 - e^(-y) + (1/2)e^y(1 - e^(-2y))

a little quick distribution turns that into: 1 - e^(-y) + (1/2)(e^y - e^(-y))

which finally gives us: 1 + (1/2)e^y - (3/2)e^(-y)

...not what we're supposed to get. :cry: :confused:
 
  • #20
You made a sign error when you plugged in the upper limit for x1.
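For reference, with the upper limit plugged in correctly the inner integral is

[tex]\int_0^{y-x_2} e^{-x_1}\,dx_1 = 1 - e^{-(y-x_2)}[/tex]

and the outer integral then collapses:

[tex]F_Y(y) = \int_0^y e^{-x_2}\left(1 - e^{-(y-x_2)}\right)dx_2 = \int_0^y \left(e^{-x_2} - e^{-y}\right)dx_2 = \left(1 - e^{-y}\right) - ye^{-y} = 1 - e^{-y}(1+y)[/tex]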
 
  • #21
Thwarted by basic arithmetic! Even more depressing than being thwarted by real calculus...

Yeah, I see how it's supposed to work now. Will grind it out tomorrow AM, as it's well past time for me to escape into unconsciousness.

Thanks for all the help vela, and you as well Stephen!
 
  • #22
ObliviousSage said:
(going to avoid using latex to put in the integral since it seems to come out weird when I preview it)

Everybody has that problem! There's a bug with the forum's LaTeX. You have to do the preview and then tell your browser to reload the page. You must do a similar process when you edit a post.
 

FAQ: Sum of Identically Distributed Independent Random Variables

What is the definition of "Sum of Identically Distributed Independent Random Variables"?

The sum of identically distributed independent random variables refers to a mathematical operation that combines multiple random variables into a single random variable by adding their values together. The random variables must have the same probability distribution and must be independent of each other.

What are some examples of identically distributed independent random variables?

Examples of identically distributed independent random variables include rolling two fair dice and adding the numbers, flipping two coins and counting the number of heads, and measuring the heights of two different individuals from the same population.

What is the significance of studying "Sum of Identically Distributed Independent Random Variables"?

Studying the sum of identically distributed independent random variables is important in probability theory and statistics, as it allows for the analysis and prediction of complex systems and phenomena. It also has practical applications in fields such as finance, engineering, and biology.

What is the difference between identically distributed and independent random variables?

Identically distributed random variables have the same probability distribution, while independent random variables do not affect each other's outcomes. In other words, the distribution of one variable does not change based on the value of another variable. Identically distributed independent random variables must satisfy both conditions.

How is the sum of identically distributed independent random variables calculated?

The distribution of the sum is obtained by combining the individual distributions, most directly by convolving their densities (or, for discrete variables, by enumerating outcomes). Closed forms exist for many families: for example, the sum of independent normal random variables is itself normal, and the sum of independent exponential variables with a common rate follows a gamma distribution. For large numbers of variables, the central limit theorem gives a normal approximation regardless of the common distribution.
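The dice example mentioned earlier can be made concrete by direct enumeration: with 36 equally likely outcomes, tally how many produce each sum (a short editorial sketch in Python):

```python
from collections import Counter
from fractions import Fraction

# Distribution of the sum of two fair dice, by enumerating all 36 outcomes.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
dist = {s: Fraction(c, 36) for s, c in sorted(counts.items())}

for s, p in dist.items():
    print(s, p)
```

This discrete enumeration is the analogue of the convolution used for continuous densities.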
