I Randomly Stopped Sums vs the sum of I.I.D. Random Variables

CGandC
I've come across the two following theorems in my studies of Probability Generating Functions:

Theorem 1:
Suppose ##X_1, ... , X_n## are independent random variables, and let ##Y = X_1 + ... + X_n##. Then,
##G_Y(s) = \prod_{i=1}^n G_{X_i}(s)##
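Theorem 1 can be checked numerically with a minimal Monte Carlo sketch. The Bernoulli choice of the ##X_i## and the specific probabilities below are hypothetical, purely for illustration (for a Bernoulli(##p##) variable, ##G_X(s) = (1-p) + ps##):

```python
import random

random.seed(0)

# Hypothetical example: independent Bernoulli variables with different
# success probabilities, so the X_i are independent but not identical.
ps = [0.2, 0.5, 0.7]
s = 0.6

def pgf_bernoulli(p, s):
    # PGF of a Bernoulli(p) variable: G(s) = (1 - p) + p*s
    return (1 - p) + p * s

# Right-hand side of Theorem 1: the product of the individual PGFs
rhs = 1.0
for p in ps:
    rhs *= pgf_bernoulli(p, s)

# Left-hand side estimated by Monte Carlo: E[s^Y] with Y = X_1 + X_2 + X_3
trials = 200_000
total = 0.0
for _ in range(trials):
    y = sum(1 if random.random() < p else 0 for p in ps)
    total += s ** y
lhs = total / trials

print(lhs, rhs)  # the two values should agree to a couple of decimal places
```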

Theorem 2:
Let ##X_1, X_2, ...## be a sequence of independent and identically distributed random variables with common PGF ##G_X##. Let ##N## be a random variable, independent of the ##X_i##'s with PGF ##G_N##, and let ##T_N = X_1 + ... + X_N = \sum_{i=1}^N X_i##. Then the PGF of ##T_N## is:
##G_{T_N}(s) = G_N (G_X(s))##
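Theorem 2 can be checked the same way. The distributions below are a hypothetical choice for illustration: ##X_i \sim \text{Bernoulli}(p)## with ##G_X(s) = (1-p) + ps##, and ##N \sim \text{Poisson}(\lambda)## with ##G_N(s) = e^{\lambda(s-1)}##, so the theorem predicts ##G_{T_N}(s) = e^{\lambda(G_X(s)-1)}##:

```python
import math
import random

random.seed(1)

# Hypothetical choices for illustration:
# X_i ~ Bernoulli(p), PGF G_X(s) = (1 - p) + p*s
# N   ~ Poisson(lam), PGF G_N(s) = exp(lam*(s - 1))
p, lam, s = 0.4, 2.0, 0.5

def sample_poisson(lam):
    # Knuth's algorithm for sampling a Poisson(lam) random variable
    limit = math.exp(-lam)
    k, prob = 0, 1.0
    while prob > limit:
        k += 1
        prob *= random.random()
    return k - 1

gx = (1 - p) + p * s
rhs = math.exp(lam * (gx - 1))  # G_N(G_X(s)), as Theorem 2 predicts

# Monte Carlo estimate of E[s^{T_N}]: draw N, then sum N Bernoulli trials
trials = 200_000
total = 0.0
for _ in range(trials):
    n = sample_poisson(lam)
    t_n = sum(1 if random.random() < p else 0 for _ in range(n))
    total += s ** t_n
lhs = total / trials

print(lhs, rhs)  # the two values should agree to a couple of decimal places
```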

Question:
I don't understand the difference between these two theorems.
From reading here: https://stats.stackexchange.com/que...topped-sums-vs-the-sum-of-i-i-d-random-variab
I understand that in the first theorem ## n ## is a number that we know, so we know how many ## X_i ## will appear in the sum ## Y ##.
But in the second theorem ## N ## is a random variable, so we don't know how many ## X_i ## will appear in the sum ## Y ##.

But I still don't fully understand.

The proof for the first theorem goes as follows:
##
G_Y(t) =G_{X_1+X_2+\ldots+X_n}(t)=\mathbb{E}\left[t^{X_1+X_2+\ldots+X_n}\right]=\mathbb{E}\left[\prod_{i=1}^n t^{X_i}\right]=\prod_{i=1}^n \mathbb{E}\left[t^{X_i}\right]=\prod_{i=1}^n G_{X_i}(t)
##

Then I tried to prove the second theorem using exactly the same proof as follows:
##
G_Y(t) =G_{X_1+X_2+\ldots+X_N}(t)=\mathbb{E}\left[t^{X_1+X_2+\ldots+X_N}\right]=\mathbb{E}\left[\prod_{i=1}^N t^{X_i}\right]=\prod_{i=1}^N \mathbb{E}\left[t^{X_i}\right]=\prod_{i=1}^N G_{X_i}(t)
##
This proof is specious, but I don't understand why. I mean, the number of ## X_i ##'s that get multiplied together is determined by ## N ##, even if we don't know it, so I don't see what the problem is. Thanks in advance for any help!
 

##\prod_{i=1}^N G_{X_i}(t) = (G_X(t))^N## is a random variable: it's a function of ##N##. To find ##\mathbb{E}(t^Y)## you need to remove this dependence on ##N## by using conditional expectation:
##
\begin{split}
\mathbb{E}(t^{Y}) &= \sum_{n=1}^\infty \mathbb{E}(t^{X_1 + \dots + X_N} \mid N = n)\,\mathbb{P}(N = n) \\
&= \sum_{n=1}^\infty \mathbb{E}(t^{X_1 + \dots + X_n})\,\mathbb{P}(N = n) \\
&= \sum_{n=1}^\infty (G_X(t))^n\,\mathbb{P}(N = n) = G_N(G_X(t)).
\end{split}
##
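The conditioning sum above can be checked numerically. The sketch below uses hypothetical choices for illustration: ##X \sim \text{Bernoulli}(p)## so ##G_X(t) = (1-p) + pt##, and ##N## geometric on ##\{1, 2, \dots\}## with ##\mathbb{P}(N=n) = q(1-q)^{n-1}## and ##G_N(s) = qs/(1-(1-q)s)##:

```python
# Hypothetical choices for illustration:
# X ~ Bernoulli(p), so G_X(t) = (1 - p) + p*t
# N geometric on {1, 2, ...}: P(N = n) = q*(1-q)**(n-1),
# with PGF G_N(s) = q*s / (1 - (1-q)*s)
p, q, t = 0.5, 0.3, 0.7

gx = (1 - p) + p * t  # G_X(t)

# Conditional-expectation sum: E[t^Y] = sum over n of (G_X(t))^n * P(N = n),
# truncated at n = 200 (the geometric tail beyond that is negligible)
cond_sum = sum(gx ** n * q * (1 - q) ** (n - 1) for n in range(1, 201))

# Closed form from Theorem 2: G_N(G_X(t))
closed = q * gx / (1 - (1 - q) * gx)

print(cond_sum, closed)  # the truncated sum matches the closed form
```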
 
Ahh! That makes sense, thank you a lot!
 
