Sum of two independent Poisson random variables

AI Thread Summary
The discussion centers on understanding the support region for the joint distribution of two independent Poisson random variables, X1 and X2, when combined into Y = X1 + X2. The key point is that for a fixed value of Y, the possible values of X2 are constrained to 0 through Y, as any value of X2 greater than Y would imply a negative value for X1, which is not possible. Participants discuss the terminology around "support," suggesting it might be clearer to refer to it as the "support of X2 given Y." The transformation technique is mentioned as a method for changing variables in probability distributions, akin to adjusting limits in integration. Overall, the conversation emphasizes the importance of correctly defining the support in the context of joint distributions.
andresc889
Hello!

I am trying to understand an example from my book that deals with two independent Poisson random variables X1 and X2 with parameters λ1 and λ2. The problem is to find the probability distribution of Y = X1 + X2. I am aware this can be done with the moment-generating function technique, but the author is using this problem to illustrate the transformation technique.

He starts by obtaining the joint probability distribution of the two variables:

f(x1, x2) = p1(x1)p2(x2)

for x1 = 0, 1, 2,... and x2 = 0, 1, 2,...

Then he proceeds onto saying: "Since y = x1 + x2 and hence x1 = y - x2, we can substitute y - x2 for x1, getting:

g(y, x2) = f(y - x2, x2)

for y = 0, 1, 2,... and x2 = 0, 1,..., y for the joint distribution of Y and X2."

Then he goes ahead and obtains the marginal distribution of Y by summing over all x2.
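To make the steps concrete, here's how I'd carry them out numerically (a sketch in Python; λ1 = 2 and λ2 = 3 are arbitrary values I picked, and poisson_pmf and g are my own helper names, not from the book):

```python
# Sketch of the book's derivation with arbitrary parameters (lambda1 = 2,
# lambda2 = 3): transform f(x1, x2) into g(y, x2) = f(y - x2, x2), then sum
# over x2 = 0, 1, ..., y to get the marginal of Y.
from math import exp, factorial

lam1, lam2 = 2.0, 3.0

def poisson_pmf(lam, k):
    """Poisson probability mass function."""
    return exp(-lam) * lam**k / factorial(k)

def g(y, x2):
    """Joint pmf of (Y, X2): zero unless 0 <= x2 <= y."""
    if not (0 <= x2 <= y):
        return 0.0
    return poisson_pmf(lam1, y - x2) * poisson_pmf(lam2, x2)

# Marginal of Y: sum over the restricted support x2 = 0, 1, ..., y,
# and compare with the Poisson(lam1 + lam2) pmf.
for y in range(6):
    marginal = sum(g(y, x2) for x2 in range(y + 1))
    print(y, round(marginal, 6), round(poisson_pmf(lam1 + lam2, y), 6))
```

The two columns agree, which is the known result that the sum of independent Poissons is Poisson with the summed parameter.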

My question is this: how did he obtain the region of support (y = 0, 1, 2,... and x2 = 0, 1,..., y) for g(y, x2)? I can't for the life of me understand this.

Thank you for your help!
 
andresc889 said:
How did he obtain the region of support (y = 0, 1, 2,... and x2 = 0, 1,..., y) for g(y, x2). I can't for the life of me understand this.

I don't understand which aspect of the support you are asking about. Is your question about the restriction of x_2 to 0, 1, ..., y? If you had a non-zero probability at a point like y = 3, x_2 = 4, that would imply x_1 = -1, since y is defined as x_1 + x_2.
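A quick way to see it is to enumerate which (y, x_2) pairs can actually occur (a small sketch of my own, truncating the infinite ranges at 10):

```python
# Sketch: list which (y, x2) pairs can occur when x1 and x2 each range over
# 0, 1, 2, ...  Since y = x1 + x2 >= x2, a point like (y, x2) = (3, 4)
# never appears.  (Ranges truncated at 10 for illustration.)
reachable = {(x1 + x2, x2) for x1 in range(10) for x2 in range(10)}

for y in range(4):
    xs = sorted(x2 for (yy, x2) in reachable if yy == y)
    print(f"y = {y}: x2 in {xs}")  # x2 runs over 0, 1, ..., y
```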
 
Stephen Tashi said:
I don't understand which aspect of the support you are asking about. Is your question about the restriction of x_2 to 0, 1, ..., y? If you had a non-zero probability at a point like y = 3, x_2 = 4, that would imply x_1 = -1, since y is defined as x_1 + x_2.

Thank you for your reply!

Yes. My specific question was about the restriction of x_2 to 0, 1, ..., y. It makes more sense when you give an example. What I'm going to ask next might be dumb and might show that I don't fully understand the transformation technique. Could he have described the support differently? For example, letting x_2 be 0, 1, 2, ... and restricting y instead somehow?
 
andresc889 said:
that I don't fully understand the transformation technique. Could he have described the support differently?

Actually, you can help me by explaining what (in general terms) the "transformation technique" is and what it's used for. When I took probability, the books didn't identify a particular method called "the transformation technique". Of course, it was taken for granted that you could do a change of variables.

As long as y and x2 are defined as they are, the non-zero values of the joint density for a fixed y occur at x2 = 0, 1, 2, ..., y. If you defined y differently, then the set of x2 values with non-zero probability for a given y could change.

I don't like the terminology that the "support" of x2 is 0, 1, 2, ..., y. I prefer that the "support" of a random variable be defined as the set of values for which its density is non-zero. The set {0, 1, 2, ..., y} should be described as the "support of x2 given y" or something like that.

If you've had calculus, what is going on amounts to the usual change in the limits of integration when you change variables. Here, the "integrals" are sums. (In advanced mathematics there are very general definitions for integration and sums actually are examples of these generalized types of integrals.)

Suppose you have a discrete joint density f(i,j) defined on a rectangle where the (i,j) entries are in a 3 by 4 pattern like

# $ * *
$ * * *
* * * *

Then if you want to compute the marginal density for a given i0, you sum f(i0,j) for j = 1 to 4.

Suppose you change variables so the indices become (p,q) and pattern is changed to
a parallelogram like:

#
$ $
* * *
* * *
* *
*

If you want to compute the marginal density of a given p0, you must adjust the indices of q that you sum over depending on the value of p0.
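To make the change of limits concrete, here is a small sketch (my own, using uniform values on the 3-by-4 grid above and the substitution p = i + j, q = j):

```python
# Sketch of the change-of-limits idea: a uniform joint density on a
# 3-by-4 rectangle, re-indexed by p = i + j, q = j.  (Uniform values are
# just for illustration.)
rows, cols = 3, 4
f = {(i, j): 1.0 / (rows * cols) for i in range(rows) for j in range(cols)}

# Under p = i + j, q = j we have (i, j) = (p - q, q), and the rectangle
# becomes a parallelogram: for a given p, q runs over
# max(0, p - (rows - 1)) .. min(p, cols - 1).
def marginal_p(p):
    lo = max(0, p - (rows - 1))
    hi = min(p, cols - 1)
    return sum(f[(p - q, q)] for q in range(lo, hi + 1))

for p in range(rows + cols - 1):
    print(p, marginal_p(p))
```

The number of q-terms summed per p is 1, 2, 3, 3, 2, 1, matching the rows of the parallelogram picture, and the marginals still sum to 1.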
 