Insights The Sum of Geometric Series from Probability Theory - Comments

Math_QED

Homework Helper
I think there might be some circular reasoning here. Before you can calculate probabilities, there must be a fixed probability measure, no? That is, you start with a given probability space ##(\Omega, \mathcal{F}, \mathbb{P})## where ##\mathbb{P}## is a probability measure. In particular, we must have ##\mathbb{P}(\Omega) = 1##.

You just claim that ##\mathbb{P}(\Omega) = 1##, but this must be verified before starting the entire argument, and it is the crux of the matter.

Hope it's clear what I'm trying to say.
 

Demystifier

Science Advisor
Insights Author
2018 Award
@Math_QED your objection makes sense from the point of view of mathematical rigor. But mathematical rigor is not the point of my "derivation". The point is a practical trick that allows you to take a shortcut in finding mathematical identities. This becomes particularly interesting if one generalizes my trick to something more complicated. For instance, suppose that ##p## is not constant but has a different value ##x_k## in the ##k##th play of the game. Then my trick gives the immediate identity
$$1=x_1+(1-x_1)x_2+(1-x_1)(1-x_2)x_3+\cdots$$
Try to prove this identity by more conventional methods that do not depend on probability theory. It's not so easy. Probability theory allows you to see that it is true immediately, without any hard work. I find it amazing.
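For concreteness, here is a minimal numerical sanity check of this identity (a Python sketch; drawing each ##x_k## uniformly from ##[0.1, 1]## is just an assumed example of values bounded away from zero):

```python
import random

# Sanity check: x1 + (1-x1)x2 + (1-x1)(1-x2)x3 + ... should approach 1
# when the x_k stay bounded away from zero (here x_k is uniform on [0.1, 1]).
random.seed(0)
total = 0.0     # running partial sum of the series
survive = 1.0   # running product (1-x1)(1-x2)...(1-x_{k-1})
for k in range(1000):
    x = random.uniform(0.1, 1.0)
    total += survive * x
    survive *= 1.0 - x
print(total)    # prints 1.0 to machine precision
```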
 

PeroK

Science Advisor
Homework Helper
Insights Author
Gold Member
2018 Award
@Math_QED your objection makes sense from the point of view of mathematical rigor. But mathematical rigor is not the point of my "derivation". The point is a practical trick that allows you to take a shortcut in finding mathematical identities. This becomes particularly interesting if one generalizes my trick to something more complicated. For instance, suppose that ##p## is not constant but has a different value ##x_k## in the ##k##th play of the game. Then my trick gives the immediate identity
$$1=x_1+(1-x_1)x_2+(1-x_1)(1-x_2)x_3+\cdots$$
Try to prove this identity by more conventional methods that do not depend on probability theory. It's not so easy. Probability theory allows you to see that it is true immediately, without any hard work. I find it amazing.
It's true that for fixed ##p## you must eventually win, but if the probability of winning on the ##k##th play decreases quickly enough, then you may not eventually win and the sum may be less than ##1##. For example:

If we let ##x_1 = \frac14## and choose each subsequent ##x_k## so that the ##k##th term of the series equals ##\frac{1}{4^k}##, i.e. ##x_k = \frac{1}{4^k\,(1-x_1)(1-x_2)\dots(1-x_{k-1})}##, then:

##x_1 + (1-x_1)x_2 + (1-x_1)(1-x_2)x_3 + \dots = \frac14 + \frac{1}{16} + \frac{1}{64} + \dots = \frac13##
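A quick numerical check of this example (a Python sketch; truncating the sum at 40 terms is an arbitrary cutoff):

```python
# The k-th term of the series is 1/4**k by construction, so the
# partial sums approach 1/4 + 1/16 + 1/64 + ... = 1/3, not 1.
total = 0.0     # partial sum of the series
survive = 1.0   # running product (1-x_1)(1-x_2)...(1-x_{k-1})
for k in range(1, 40):
    x_k = 1.0 / (4**k * survive)   # chosen so that survive * x_k = 1/4**k
    total += survive * x_k
    survive *= 1.0 - x_k
print(total)    # 0.33333..., while survive is left at about 2/3
```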
 

Demystifier

Science Advisor
Insights Author
2018 Award
It's true that for fixed ##p## you must eventually win, but if the probability of winning on the ##k##th play decreases quickly enough
Of course, but the point is that the formula is valid if ##x_k## does not go to ##0## in the limit ##k\rightarrow\infty##. And an even more important point is that it is not easy to see that without using probability theory.

If one wants to put it more precisely, the formula is valid if there is ##\epsilon>0## such that ##\epsilon<x_k\leq 1## for all ##k##.
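To spell out why this condition suffices (a short sketch): after ##n## terms the remainder telescopes to a product, which is then crushed geometrically,
$$1-\Big(x_1+(1-x_1)x_2+\cdots+(1-x_1)\cdots(1-x_{n-1})x_n\Big)=\prod_{k=1}^{n}(1-x_k)\leq(1-\epsilon)^n\to 0.$$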
 

PeroK

Science Advisor
Homework Helper
Insights Author
Gold Member
2018 Award
Of course, but the point is that the formula is valid if ##x_k## does not go to ##0## in the limit ##k\rightarrow\infty##. And an even more important point is that it is not easy to see that without using probability theory.

If one wants to put it more precisely, the formula is valid if there is ##\epsilon>0## such that ##\epsilon<x_k\leq 1## for all ##k##.
When I first saw the previous post, I assumed you meant ##x_k \rightarrow 0##. Certainly, if ##x_k## is bounded away from zero, then it's a really clever result.
 

StoneTemplePython

Science Advisor
Gold Member
To put a finer point on this, you have a sequence of independent events (coin tosses) that keep occurring until the first success. They are inhomogeneous. In slightly more general settings you can get in trouble with this line of thinking, because stochastic processes can be defective (i.e., the CDF doesn't tend to one, a.k.a. the complementary CDF doesn't tend to zero).
- - - -
For this particular problem with probability of success on trial ##i## given by ##p_i##, the complementary CDF after trial ##n## is given by

##\prod_{i=1}^n (1-p_i) = \prod_{i=1}^n q_i##

Passing to the limit as ##n\to \infty##, we have the standard analytic fact that for ##p_i \in (0,1)##,
$$0 < \prod_{i=1}^\infty (1-p_i) \quad\text{iff}\quad \sum_{i=1}^\infty p_i < \infty.$$
- - - -
Edit: here is a sketch of a proof of this result.
The upper bound is easy: just use ##1-p_i \leq e^{-p_i}##, so
$$0 \leq \prod_{i=1}^{n} (1-p_i) \leq e^{-\sum_{i=1}^{n} p_i},$$
and if the series diverges, the product is squeezed to zero.

The lower bound takes a little more care, but one way to prove it is to take logs of the product and show that
$$-\infty < \sum_{i=1}^\infty \log(1-p_i).$$

Then, exploiting the geometric series and the power series for the logarithm, note the pointwise bound that holds for sufficiently small ##p_i## (in this case ##p_i \leq \frac{1}{2}##):

$$-2 p_i \leq \frac{-p_i}{1-p_i} = \sum_{k=1}^\infty \left(-p_i^k\right) \leq \sum_{k=1}^\infty \left(-\frac{p_i^k}{k}\right) = \log(1-p_i).$$

Then sum this bound over ##i##: since convergence of ##\sum_i p_i## forces ##p_i \to 0##, the bound applies to all but finitely many ##p_i##, and finitely many exceptions cannot make the sum ##-\infty##.
- - - -
If one wants to put it more precisely, the formula is valid if there is ##\epsilon>0## such that ##\epsilon<x_k\leq 1## for all ##k##.
In this case, since ##p_i## is bounded below by a positive constant, the series diverges, which implies that the complementary CDF tends to zero and thus the probability distribution sums to one.
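A minimal numerical sketch of this criterion (the two choices of ##p_i## below, one summable and one bounded below, are just assumed examples):

```python
# Complementary CDF after n trials: prod(1 - p_i). It stays positive
# iff sum(p_i) converges, per the criterion sketched above.
def ccdf(p, n):
    out = 1.0
    for i in range(1, n + 1):
        out *= 1.0 - p(i)
    return out

print(ccdf(lambda i: 0.5**i, 200))  # ~0.2888 > 0: summable case, defective distribution
print(ccdf(lambda i: 0.25, 200))    # ~1e-25: divergent sum, CCDF -> 0
```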
- - - -
For a slightly different look: in effect you're using the standard (countable-state) renewal matrix for the age of a stochastic process,

##
\left[\begin{matrix}
p_1 & q_1 & 0 & 0 & 0 & 0 & \dots\\
p_2 & 0 & q_2& 0 & 0 & 0 & \dots\\
p_3 & 0 & 0 & q_3 & 0 & 0 & \dots\\
p_4 & 0 & 0 & 0 & q_4 & 0 & \dots\\
p_5 & 0 & 0 & 0 & 0 & q_5 & \dots\\
p_6 & 0 & 0 & 0 & 0 & 0 & \dots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots\end{matrix}\right]##

and trying to verify that state zero is not transient.
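As a rough illustration (a Python/numpy sketch; the constant ##p_i = 1/4## and the truncation of the countable state space to 50 states are assumed simplifications, not part of the general setup):

```python
import numpy as np

N = 50                    # truncate the countable state space (crude, for illustration)
p = np.full(N, 0.25)      # assumed constant win probability
q = 1.0 - p
P = np.zeros((N, N))
P[:, 0] = p                                       # success resets the age to 0
P[np.arange(N - 1), np.arange(1, N)] = q[:-1]     # failure increments the age
P[-1, -1] = q[-1]                                 # boundary: age stops growing at N-1

# Probability of returning to state 0 within 200 steps, starting from state 0.
dist = np.zeros(N)
dist[0] = 1.0
returned = 0.0
for _ in range(200):
    dist = dist @ P
    returned += dist[0]   # mass hitting state 0 for the first time
    dist[0] = 0.0         # remove it so later visits are not double counted
print(returned)           # ~1.0: state 0 is recurrent here, not transient
```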
 

scottdave

Science Advisor
Homework Helper
Insights Author
Gold Member
That is a slick trick, which I think would be good for teaching someone how to derive the geometric series formula.

Should you state at the beginning that the probability ##p > 0##, rather than saying ##p \neq 0##? This goes along with the ##q < 1## constraint at the end.

If you say ##p \neq 0##, it could suggest that ##p## is less than zero, and with a negative probability you can no longer argue that you will eventually win.
 

Ray Vickson

Science Advisor
Homework Helper
Of course, but the point is that the formula is valid if ##x_k## does not go to ##0## in the limit ##k\rightarrow\infty##. And an even more important point is that it is not easy to see that without using probability theory.

If one wants to put it more precisely, the formula is valid if there is ##\epsilon>0## such that ##\epsilon<x_k\leq 1## for all ##k##.
One can do it with algebra instead of probability. For finite ##n## let
$$S = x_1 + (1-x_1)x_2 + (1-x_1)(1-x_2) x_3 + \cdots + (1-x_1)(1-x_2) \cdots (1-x_n) x_{n+1}.$$
We have
$$1-S = (1-x_1)(1-x_2) \cdots (1-x_{n+1}).$$
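A quick symbolic check of this telescoping identity (a sympy sketch for the small case ##n = 3##):

```python
import sympy as sp

# Verify 1 - S = (1-x1)(1-x2)(1-x3)(1-x4) for the n = 3 case.
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
S = x1 + (1 - x1)*x2 + (1 - x1)*(1 - x2)*x3 + (1 - x1)*(1 - x2)*(1 - x3)*x4
product = (1 - x1)*(1 - x2)*(1 - x3)*(1 - x4)
print(sp.simplify(1 - S - product))  # 0, so the identity holds
```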
 

rude man

Homework Helper
Insights Author
Gold Member
It's true that for fixed ##p## you must eventually win, but if the probability of winning on the ##k##th play decreases quickly enough, then you may not eventually win and the sum may be less than ##1##.
Say what?

If the probability of winning one play is ##1/4##, then the probability of not winning after ##k## plays is ##(1 - 1/4)^k##, and ##\lim_{k\to\infty} (3/4)^k = 0##. I.e., the probability of NOT winning after infinitely many plays is ##0##. You will eventually win regardless of ##p##, assuming ##0 < p < 1##.

The fact that ##p = 1/4##, or some other small but finite positive number less than ##1##, is immaterial.

Good paper!
 

PeroK

Science Advisor
Homework Helper
Insights Author
Gold Member
2018 Award
Say what?

If the probability of winning one play is ##1/4##, then the probability of not winning after ##k## plays is ##(1 - 1/4)^k##, and ##\lim_{k\to\infty} (3/4)^k = 0##. I.e., the probability of NOT winning after infinitely many plays is ##0##. You will eventually win regardless of ##p##, assuming ##0 < p < 1##.

The fact that ##p = 1/4##, or some other small but finite positive number less than ##1##, is immaterial.

Good paper!
Not all series sum to 1.
 

stevendaryl

Staff Emeritus
Science Advisor
Insights Author
Say what?

If the probability of winning one play is ##1/4##, then the probability of not winning after ##k## plays is ##(1 - 1/4)^k##, and ##\lim_{k\to\infty} (3/4)^k = 0##. I.e., the probability of NOT winning after infinitely many plays is ##0##. You will eventually win regardless of ##p##, assuming ##0 < p < 1##.

The fact that ##p = 1/4##, or some other small but finite positive number less than ##1##, is immaterial.

Good paper!
I think the point is that if the odds are CHANGING, then you have a nonzero chance of losing, even with infinitely many trials. For example, if the first time you play the probability of winning is ##1/4##, the next time it's ##1/8##, the next time ##1/16##, etc., then your probability of winning over infinitely many trials is:

$$\frac14 + \frac34 \cdot \frac18 + \frac34 \cdot \frac78 \cdot \frac{1}{16} + \cdots$$

which is... I don't know the exact value offhand, but it adds up to something less than ##1/2##.
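A short numerical sketch (assuming, as described, ##x_k = 1/2^{k+1}##, i.e. ##1/4, 1/8, 1/16, \dots##):

```python
# Sum the win probabilities: x1 + (1-x1)x2 + (1-x1)(1-x2)x3 + ...
total = 0.0
survive = 1.0   # running product (1-x1)(1-x2)...(1-x_{k-1})
for k in range(1, 60):
    x = 0.5**(k + 1)        # 1/4, 1/8, 1/16, ...
    total += survive * x
    survive *= 1.0 - x
print(total)    # ~0.4224, indeed less than 1/2
```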
 

Ray Vickson

Science Advisor
Homework Helper
Say what?

If the probability of winning one play is ##1/4##, then the probability of not winning after ##k## plays is ##(1 - 1/4)^k##, and ##\lim_{k\to\infty} (3/4)^k = 0##. I.e., the probability of NOT winning after infinitely many plays is ##0##. You will eventually win regardless of ##p##, assuming ##0 < p < 1##.

The fact that ##p = 1/4##, or some other small but finite positive number less than ##1##, is immaterial.

Good paper!
If the win probabilities ##p_n## change with the round number ##n##, you may not be guaranteed an ultimate win. For instance, if ##p_n = 1/2^n,## the probability of losing the first ##N## games is
$$P(N) = \prod_{n=1}^N \left( 1 - \frac{1}{2^n} \right),$$ and the probability of never winning is ##P^* = \lim_{N \to \infty} P(N).## This limit is between 0.2887880492 and 0.2887881410, so the probability of a win is ##1-P^* \doteq 0.71121##.
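A quick numerical reproduction of this limit (a Python sketch; truncating the product at 100 factors is an arbitrary cutoff, more than enough here):

```python
# No-win probability for p_n = 1/2**n: P* = prod(1 - 1/2**n).
P = 1.0
for n in range(1, 100):
    P *= 1.0 - 0.5**n
print(P, 1.0 - P)   # ~0.2887881 and ~0.7112119, consistent with the bounds above
```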
 
