Sum of two independent Poisson random variables

andresc889
Hello!

I am trying to understand an example from my book that deals with two independent Poisson random variables X1 and X2 with parameters λ1 and λ2. The problem is to find the probability distribution of Y = X1 + X2. I am aware this can be done with the moment-generating function technique, but the author is using this problem to illustrate the transformation technique.

He starts by obtaining the joint probability distribution of the two variables:

f(x1, x2) = p1(x1)p2(x2)

for x1 = 0, 1, 2, ... and x2 = 0, 1, 2, ...

Then he proceeds to say: "Since y = x1 + x2 and hence x1 = y - x2, we can substitute y - x2 for x1, getting:

g(y, x2) = f(y - x2, x2)

for y = 0, 1, 2,... and x2 = 0, 1,..., y for the joint distribution of Y and X2."

Then he goes ahead and obtains the marginal distribution of Y by summing over all x2.
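For context, if I write out the Poisson pmfs explicitly, I believe the sum he carries out is the following (the second-to-last step uses the binomial theorem):

$$g(y) = \sum_{x_2=0}^{y} f(y-x_2,\, x_2) = \sum_{x_2=0}^{y} \frac{\lambda_1^{\,y-x_2} e^{-\lambda_1}}{(y-x_2)!} \cdot \frac{\lambda_2^{\,x_2} e^{-\lambda_2}}{x_2!} = \frac{e^{-(\lambda_1+\lambda_2)}}{y!} \sum_{x_2=0}^{y} \binom{y}{x_2} \lambda_1^{\,y-x_2} \lambda_2^{\,x_2} = \frac{(\lambda_1+\lambda_2)^{y}\, e^{-(\lambda_1+\lambda_2)}}{y!},$$

so Y is again Poisson, with parameter λ1 + λ2.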

My question is this: how did he obtain the region of support (y = 0, 1, 2, ... and x2 = 0, 1, ..., y) for g(y, x2)? I can't for the life of me understand it.

Thank you for your help!
 
andresc889 said:
How did he obtain the region of support (y = 0, 1, 2, ... and x2 = 0, 1, ..., y) for g(y, x2)? I can't for the life of me understand it.

I don't understand which aspect of the support you are asking about. Is your question about the restriction of x_2 to 0, 1, ..., y? If you had a non-zero probability at a point like y = 3, x_2 = 4, that would imply x_1 = -1, since y is defined as x_1 + x_2.
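As a quick sanity check (a small Python sketch of my own, not from any text), for y = 3 the pairs with both coordinates nonnegative are:

Code:
# Enumerate the (x1, x2) pairs with x1 + x2 = 3 and both coordinates nonnegative.
y = 3
pairs = [(y - x2, x2) for x2 in range(0, 10) if y - x2 >= 0]
print(pairs)  # [(3, 0), (2, 1), (1, 2), (0, 3)] -- x2 is restricted to 0, 1, ..., y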
 
Stephen Tashi said:
I don't understand which aspect of the support you are asking about. Is your question about the restriction of x_2 to 0, 1, ..., y? If you had a non-zero probability at a point like y = 3, x_2 = 4, that would imply x_1 = -1, since y is defined as x_1 + x_2.

Thank you for your reply!

Yes. My specific question was about the restriction of x_2 to 0, 1, ..., y. It makes more sense when you give an example. What I'm going to ask next might be dumb and might show that I don't fully understand the transformation technique. Could he have described the support differently? For example, by letting x_2 be 0, 1, 2, ... and restricting y instead somehow?
 
andresc889 said:
that I don't fully understand the transformation technique. Could he have described the support differently?

Actually, you can help me by explaining what (in general terms) the "transformation technique" is and what it's used for. When I took probability, the books didn't identify a particular method called "the transformation technique". Of course, it was taken for granted that you could do a change of variables.

As long as y and x2 are defined as they are, the non-zero values of the joint density for a fixed value of y occur at x2 = 0, 1, 2, ..., y. If you define y differently, then the set of x2 values that are non-zero for a given y could change.

I don't like the terminology that the "support" of x2 is 0, 1, 2, ..., y. I prefer that the "support" of a random variable be defined as the set of values for which its density is non-zero. The set {0, 1, 2, ..., y} should be described as the "support of x2 given y" or something like that.

If you've had calculus, what is going on amounts to the usual change in the limits of integration when you change variables. Here, the "integrals" are sums. (In advanced mathematics there are very general definitions for integration and sums actually are examples of these generalized types of integrals.)

Suppose you have a discrete joint density f(i,j) defined on a rectangle where the (i,j) entries are in a 3 by 4 pattern like

# $ * *
$ * * *
* * * *

Then if you want to compute the marginal density for a given i0, you sum f(i0,j) for j = 1 to 4.

Suppose you change variables so the indices become (p,q) and the pattern is changed to a parallelogram like:

#
$ $
* * *
* * *
* *
*

If you want to compute the marginal density of a given p0, you must adjust the indices of q that you sum over depending on the value of p0.
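If it helps, here is a small numerical sketch of the whole thing (my own code, not from the book; the lambda values are just illustrative). It builds the joint pmf on the (x1, x2) rectangle, changes variables to (y, x2), and computes the marginal of Y by summing over the parallelogram x2 = 0, 1, ..., y. The result agrees with a Poisson(lambda1 + lambda2) pmf, which is the answer the book is heading toward.

Code:
from math import exp, factorial

lam1, lam2 = 2.0, 3.0  # illustrative parameter values

def poisson_pmf(k, lam):
    # Poisson probability mass function
    return exp(-lam) * lam**k / factorial(k)

def joint(x1, x2):
    # Joint pmf f(x1, x2) of the independent pair (X1, X2)
    return poisson_pmf(x1, lam1) * poisson_pmf(x2, lam2)

def marginal_y(y):
    # Marginal pmf of Y = X1 + X2 via g(y, x2) = f(y - x2, x2),
    # summing over the restricted support x2 = 0, 1, ..., y
    return sum(joint(y - x2, x2) for x2 in range(y + 1))

for y in range(6):
    print(y, marginal_y(y), poisson_pmf(y, lam1 + lam2))  # the two columns match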
 