# Sum of two independent Poisson random variables

1. Jan 29, 2012

### andresc889

Hello!

I am trying to understand an example from my book that deals with two independent Poisson random variables X1 and X2 with parameters λ1 and λ2. The problem is to find the probability distribution of Y = X1 + X2. I am aware this can be done with the moment-generating function technique, but the author is using this problem to illustrate the transformation technique.

He starts by obtaining the joint probability distribution of the two variables:

f(x1, x2) = p1(x1)p2(x2)

for x1 = 0, 1, 2, ... and x2 = 0, 1, 2, ...

Then he proceeds to say: "Since y = x1 + x2 and hence x1 = y - x2, we can substitute y - x2 for x1, getting:

g(y, x2) = f(y - x2, x2)

for y = 0, 1, 2,... and x2 = 0, 1,..., y for the joint distribution of Y and X2."

Then he goes ahead and obtains the marginal distribution of Y by summing over all x2.
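For what it's worth, that marginalization step can be checked numerically. This is only an illustrative sketch, not the book's derivation: the names `poisson_pmf` and `marginal_of_y`, and the rates 1.5 and 2.0, are my own choices.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Poisson probability mass at k for rate lam."""
    return exp(-lam) * lam**k / factorial(k)

def marginal_of_y(y, lam1, lam2):
    """Sum the joint g(y, x2) = f(y - x2, x2) over x2 = 0, 1, ..., y."""
    return sum(poisson_pmf(y - x2, lam1) * poisson_pmf(x2, lam2)
               for x2 in range(y + 1))

lam1, lam2 = 1.5, 2.0
for y in range(6):
    # The marginal of Y should match a Poisson with rate lam1 + lam2.
    assert abs(marginal_of_y(y, lam1, lam2) - poisson_pmf(y, lam1 + lam2)) < 1e-12
```

The assertion passing for each y reflects the binomial-theorem identity behind the sum: the x2-sum collapses to e^{-(λ1+λ2)} (λ1+λ2)^y / y!.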

My question is this: how did he obtain the region of support (y = 0, 1, 2, ... and x2 = 0, 1, ..., y) for g(y, x2)? I can't for the life of me understand this.

2. Jan 29, 2012

### Stephen Tashi

I don't understand which aspect of the support you are asking about. Is your question about the restriction of $x_2$ to 0,1,..y ? If you had a non-zero probability for a point like $y = 3, x_2 = 4$ that would imply that $x_1 = -1$ since $y$ is defined as $x_1+ x_2$.

3. Jan 29, 2012

### andresc889

Yes. My specific question was about the restriction of $x_2$ to 0, 1, ..., y. It makes more sense when you give an example. What I'm going to ask next might be dumb and might show that I don't fully understand the transformation technique. Could he have described the support differently? For example, letting $x_2$ be 0, 1, 2, ... and restricting $y$ instead somehow?

4. Jan 29, 2012

### Stephen Tashi

Actually, you can help me by explaining what (in general terms) the "transformation technique" is and what it's used for. When I took probability, the books didn't identify a particular method called "the transformation technique". Of course, it was taken for granted that you could do a change of variables.

As long as y and x2 are defined as they are, the non-zero values of the joint density for a fixed y occur at x2 = 0, 1, 2, ..., y. If you defined y differently, then the set of x2 values where the density is non-zero for a given y could change.

I don't like the terminology that the "support" of x2 is 0, 1, 2, ..., y. I prefer that the "support" of a random variable be defined as the set of values for which its density is non-zero. The set {0, 1, 2, ..., y} should be described as the "support of x2 given y" or something like that.

If you've had calculus, what is going on amounts to the usual change in the limits of integration when you change variables. Here, the "integrals" are sums. (In advanced mathematics there are very general definitions for integration and sums actually are examples of these generalized types of integrals.)

Suppose you have a discrete joint density f(i,j) defined on a rectangle where the (i,j) entries are in a 3 by 4 pattern like

```
* * * *
* * * *
* * * *
```

Then if you want to compute the marginal density for a given i0, you sum f(i0,j) for j = 1 to 4.

Suppose you change variables so the indices become (p,q) and the pattern is changed to a parallelogram like:

```
* * *
  * * *
    * *
      *
```

If you want to compute the marginal density of a given p0, you must adjust the indices of q that you sum over depending on the value of p0.
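Here is a small sketch of that index adjustment. The setup is my own illustration, not from the thread: a uniform joint density on a 3-by-4 rectangle (i = 0..2, j = 0..3) with the change of variables p = i + j, q = j.

```python
# Uniform joint density on the 3-by-4 rectangle i = 0..2, j = 0..3.
f = {(i, j): 1.0 / 12 for i in range(3) for j in range(4)}

def marginal_p(p):
    """Marginal density of p = i + j after the change of variables q = j.

    The q-limits now depend on p: q runs over max(0, p - 2) .. min(p, 3),
    so that i = p - q stays inside 0..2.
    """
    return sum(f[(p - q, q)] for q in range(max(0, p - 2), min(p, 3) + 1))

# p ranges over 0..5; the marginals still sum to 1 over the parallelogram.
total = sum(marginal_p(p) for p in range(6))
```

The point is exactly the one made above: the summation limits for q are adjusted per value of p, just as integration limits change under a change of variables.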