Average Number of Coin Flips to Reach a Certain Head-Tail Difference?

In summary: for the two-sided stopping rule (stop as soon as either heads or tails lead by ##6##) the expected number of flips is exactly ##36##; in general, a required lead of ##n## gives ##E(n) = n^2##, and stopping at a lead of ##H## heads or ##T## tails gives ##HT##. For the one-sided rule the expected value is not defined: too few games end quickly, and ##np(n)## carries roughly the same weight for every ##n##.
  • #1
greswd
Let's say you keep flipping a coin until the number of heads exceeds the number of tails by 6, or vice-versa. It is very important to consider the "vice-versa".

I did some digital simulations and found that the average number of flips required is about 35.4.

How can we derive this number using theory?

The probability distribution of the number of flips appears to follow a Pareto-like distribution.
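For readers who want to reproduce the simulation, here is a minimal Monte Carlo sketch (plain Python, no dependencies; the trial count is an arbitrary choice, not the OP's actual setup):

```python
import random

def flips_until_lead(lead=6):
    """Flip a fair coin until heads and tails differ by `lead`; return the flip count."""
    diff = 0   # heads minus tails
    flips = 0
    while abs(diff) < lead:
        diff += 1 if random.random() < 0.5 else -1
        flips += 1
    return flips

trials = 100_000
print(sum(flips_until_lead() for _ in range(trials)) / trials)
# hovers around 36, the exact value derived later in the thread
```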
 
Last edited:
  • #2
greswd said:
How can we derive this number using theory?
I'm going to abuse notation here for brevity. It is easy to write down the probability ##P(n)## for ##n=6##. You should also be able to write an expression for ##P(n) \mid \neg P(n-1)## (this can be true iff the trial for ##P(n-1)## ended with a lead of exactly 5). Then sum the series.
greswd said:
I did some digital simulations
We usually call this "Monte Carlo" simulation so we would say "Using a Monte Carlo method I found that the average number of flips required is about 35.4".
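A small dynamic-programming sketch of that plan — compute the exact probability that the game ends on toss ##n##, then sum ##\sum n\,P(n)## (truncated; the surviving probability decays geometrically, so the tail is negligible):

```python
from collections import defaultdict

lead = 6
alive = {0: 1.0}                 # head-tail difference -> probability the game is still running
expected = 0.0
for n in range(1, 2001):         # truncation point is arbitrary; the tail is geometrically small
    nxt = defaultdict(float)
    p_end = 0.0
    for d, p in alive.items():
        for step in (-1, 1):
            if abs(d + step) == lead:
                p_end += 0.5 * p     # game ends on this toss
            else:
                nxt[d + step] += 0.5 * p
    expected += n * p_end            # accumulate n * P(game ends on toss n)
    alive = nxt
print(expected)                  # 36.000000... (cf. the Monte Carlo estimate of 35.4)
```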
 
Last edited:
  • #3
greswd said:
Let's say you keep flipping a coin until the number of heads exceeds the number of tails by 6, or vice-versa. It is very important to consider the "vice-versa".

I did some digital simulations and found that the average number of flips required is about 35.4.

How can we derive this number using theory?
You are talking about a fair game of gambler's ruin that terminates when one player is up ##6##. There's an enormous number of ways to solve for the expected time until stopping. If it were an unfair game, I'd suggest using Wald's Equality.

As for your problem, with a fair game, probably the simplest approach is to write out an absorbing Markov chain with states ##\{-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6\}##, with ##-6## and ##6## absorbing. Then write out the Fundamental Matrix and solve directly. (With some cleverness you can find a telescoping situation here if you'd like.) The Markov chain chapter of Grinstead and Snell's free book is worth a read.

https://math.dartmouth.edu/~prob/prob/prob.pdf

Given the homogeneity in the fundamental matrix, you could also recognize that this is equivalent to trying to solve a linear recurrence where for ##x \in \{-5,-4,-3,-2,-1,0,1,2,3,4,5\}##, you have
##E[T_{x}] = 1 + \frac{1}{2}E[T_{x+1}] + \frac{1}{2}E[T_{x-1}]##
This recurrence may be derived directly from "first step analysis" and of course has boundary conditions of ##E[T_{6}] = E[T_{-6}] = 0##.
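For concreteness, a minimal sketch of the fundamental-matrix computation described above (NumPy assumed; state labels follow the post):

```python
import numpy as np

lead = 6
transient = list(range(-lead + 1, lead))        # transient states -5 .. 5
m = len(transient)

Q = np.zeros((m, m))                            # transitions among transient states
for i, s in enumerate(transient):
    for nxt in (s - 1, s + 1):
        if abs(nxt) < lead:                     # steps to -6 or +6 are absorbed
            Q[i, transient.index(nxt)] = 0.5

t = np.linalg.solve(np.eye(m) - Q, np.ones(m))  # expected steps to absorption, t = (I-Q)^{-1} 1
print(t[transient.index(0)])                    # 36.0 from the starting state 0
```

Either route — inverting the fundamental matrix or solving the recurrence by first-step analysis — gives the same answer.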
 
  • #4
greswd said:
Let's say you keep flipping a coin until the number of heads exceeds the number of tails by 6, or vice-versa. It is very important to consider the "vice-versa".

I did some digital simulations and found that the average number of flips required is about 35.4.

How can we derive this number using theory?

The probability distribution of the number of flips appears to follow a Pareto-like distribution.
Here's what I did.

First, I worked out the probability of the game finishing after ##n = 6, 8, 10, 12, 14## tosses. The numbers I got out were ##1/32, 6/128, 27/512, 110/2048, 429/8192##.

These are the probabilities that the game finishes on exactly the ##n##th toss.

I put the above sequence of numerators into the OEIS and got:

https://oeis.org/search?q=1,+6,+27,+110&sort=&language=&go=Search

Edit: there are actually two sequences that start with the same terms. This is the wrong one, so the formula is not right. I suspect this is the formula for the one-sided rule.

This gives a formula for the numerator:
$$\frac{6}{k+5}\binom{2k+3}{k-1}$$
For ##k = 1, 2, 3 \dots##, where ##n = 2k + 4##.

The expected value of ##n## should be:
$$E(n) = \sum_{k = 1}^{\infty}(2k+4)(\frac{1}{2^{2k+3}})\frac{6}{k+5}\binom{2k+3}{k-1}$$
The problem is that the series converges very slowly, since ##a(n+1) \approx 4a(n)## for the sequence of numerators while the denominators also grow by a factor of ##4##, so the terms barely shrink.

PS In fact, it doesn't converge.
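A quick numerical check of the partial sums (plain Python) is consistent with the PS above — the terms decay only like ##1/\sqrt{k}##, so the partial sums grow without bound:

```python
from math import comb

# Partial sums of E(n) = sum over k of (2k+4) * [6/(k+5)] * C(2k+3, k-1) / 2^(2k+3)
partial = 0
for k in range(1, 5001):
    partial += (2 * k + 4) * 6 * comb(2 * k + 3, k - 1) / ((k + 5) * 2 ** (2 * k + 3))
    if k in (10, 100, 1000, 5000):
        print(k, partial)  # grows roughly like sqrt(k); no finite limit
```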
 
Last edited:
  • #5
PeroK said:
First, I worked out the probability of the game finishing after ##n = 6, 8, 10, 12, 14## tosses. The numbers I got out were ##1/32, 6/128, 27/512, 110/2048, 429/8192##.
What about sequences like HTTTTTT (and THHHHHH) where the game finishes after n=7 tosses?
 
  • #6
pbuk said:
What about sequences like HTTTTTT (and THHHHHH) where the game finishes after n=7 tosses?
That's a different problem. That's looking for six heads or tails in a row. This problem is about having six more in total, not getting six in a row.
 
  • #7
PeroK said:
That's a different problem. That's looking for six heads or tails in a row. This problem is about having six more in total, not getting six in a row.
Ah yes, RTFQ - always was a weakness of mine.
 
  • #9
PeroK said:
PS In fact, it doesn't converge.

oh no, so how do we solve it?
 
  • #10
greswd said:
oh no, so how do we solve it
The expected number of flips is not defined for every distribution, and this is an example: too few games end quickly. In this case there is roughly an equal weighting for every value of ##np(n)## (*).

The expected value is well-defined where the criterion is to be two heads or tails in front - that's just a binomial with ##E(n) = 4##. But, even requiring a lead of four heads or tails, the series diverges. You can check that if you want a good exercise along similar lines.

(*)$$\lim_{n \rightarrow \infty} \frac{(n+2)p(n+2)}{np(n)} = 1$$

PS This of course means the test is indeterminate, not that the series diverges.
 
Last edited:
  • #11
PeroK said:
The expected number is not defined for all distributions, of which this is one example. The problem is that too few games end quickly.

If this game had a one-sided stopping rule, this would be true.
greswd said:
Let's say you keep flipping a coin until the number of heads exceeds the number of tails by 6, or vice-versa. It is very important to consider the "vice-versa".

But I read this as a two-sided stopping rule, which creates a finite-state Markov chain with 2 absorbing states and a single connected transient class -- this means the random variable ##T## associated with the stopping time is stochastically smaller than ##X+c##, where ##X## is some geometric random variable and ##c## is a constant; put differently, the games end quickly, and geometrically so.

All that said, saying "until the number of heads exceeds the number of tails by 6, or vice-versa" is pretty far from math-ese, and maybe the OP meant for this to be interpreted as a one-sided stopping rule.

@greswd
please re-write your opening line. What exactly is the stopping rule here?
 
  • #12
StoneTemplePython said:
If this game had a one-sided stopping rule, this would be true. But I read this as a two-sided stopping rule, which creates a finite-state Markov chain with 2 absorbing states and a single connected transient class -- this means the random variable ##T## associated with the stopping time is stochastically smaller than ##X+c##, where ##X## is some geometric random variable and ##c## is a constant; put differently, the games end quickly, and geometrically so.

All that said, saying "until the number of heads exceeds the number of tails by 6, or vice-versa" is pretty far from math-ese, and maybe the OP meant for this to be interpreted as a one-sided stopping rule.

@greswd
please re-write your opening line. What exactly is the stopping rule here?

My calculation was for a two-sided stopping rule. It was getting late last night and I was tired. I used the ratio test and got ##1## and forgot that was indeterminate, rather than divergent!

The formula in post #4 is still valid, but the series must converge after all.

@greswd you should have checked my ratio test in post #10! Apologies.
 
  • #13
greswd said:
Let's say you keep flipping a coin until the number of heads exceeds the number of tails by 6, or vice-versa. It is very important to consider the "vice-versa".

I did some digital simulations and found that the average number of flips required is about 35.4.

How can we derive this number using theory?

The probability distribution of the number of flips appears to follow a Pareto-like distribution.

I've got an answer. It's pretty obvious really. The answer is ##36##.

First, the previous formula I gave was wrong. That one must be for the one-sided stopping rule. There is another sequence that starts with the same values. The correct one here:

https://oeis.org/A216263

It has a recursive formula. I put this into a spreadsheet and the partial sums clearly converge to ##36##.

The formula is:

##a(0) = 1, \ a(1) = 6, \ a(2) = 27##

And:

##a(k) = 6a(k-1) - 9a(k-2) + 2a(k-3)##

The number of tosses is ##n = 6 + 2k##.

The probability of finishing after ##n## tosses is:

##p(n) = \frac{a(k)}{2^{n-1}} = \frac{a(k)}{2^{5+2k}}##

The answer is:
$$E(n) = \sum_{k=0}^{\infty} np(n) = \sum_{k=0}^{\infty} (6+2k)\frac{a(k)}{2^{5+2k}}$$
It would be interesting to prove that the series sums to ##36##.
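A few lines of Python reproduce the spreadsheet calculation (recurrence and seed values as above; the convergence is geometric, since ##a(k+1)/a(k) \to 2+\sqrt 3 < 4##):

```python
a = [1, 6, 27]                                   # a(0), a(1), a(2)
total = 0.0
for k in range(1000):
    if k >= len(a):                              # extend via a(k) = 6a(k-1) - 9a(k-2) + 2a(k-3)
        a.append(6 * a[k-1] - 9 * a[k-2] + 2 * a[k-3])
    total += (6 + 2 * k) * a[k] / 2 ** (5 + 2 * k)
print(total)                                     # 36.000000... to machine precision
```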
 
Last edited:
  • #14
A bit more on this. I worked it out for the case where you must get ##8## heads or ##8## tails ahead. The answer, not surprisingly, was ##64##. It looks like the expected value is the square of the required lead.

@StoneTemplePython this must be a known result?

PS In the process I generated the sequence of terms for this case: ##1, 8, 44, 208, 910, 3808, 15504, \dots##. Again, this coincided with a binomial sequence up to a point, but then varied. I guess the binomial sequence is valid for the one-sided stopping rule, and the sequences diverge once there has been time for absorption on the other side to remove games from the count.

Anyway, I worked out the recursive rule for this sequence (without proof!), which is:
$$a(n) = 8a(n-1) -20a(n-2)+16a(n-3) -2a(n-4)$$
This one isn't in the OEIS.
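The recurrence is stated without proof, but it is easy to test numerically; a sketch (seed terms from above) whose partial sums do appear to converge to ##64##:

```python
a = [1, 8, 44, 208]                              # seed terms for the 8-lead game
total = 0.0
for k in range(2000):
    if k >= len(a):                              # a(k) = 8a(k-1) - 20a(k-2) + 16a(k-3) - 2a(k-4)
        a.append(8 * a[k-1] - 20 * a[k-2] + 16 * a[k-3] - 2 * a[k-4])
    n = 8 + 2 * k                                # possible game lengths 8, 10, 12, ...
    total += n * a[k] / 2 ** (n - 1)             # n * p(n) with p(n) = a(k) / 2^(n-1)
print(total)                                     # 64.000000..., consistent with E(n) = 8^2
```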
 
  • #15
StoneTemplePython said:
If this game had a one-sided stopping rule, this would be true. But I read this as a two-sided stopping rule, which creates a finite-state Markov chain with 2 absorbing states and a single connected transient class -- this means the random variable ##T## associated with the stopping time is stochastically smaller than ##X+c##, where ##X## is some geometric random variable and ##c## is a constant; put differently, the games end quickly, and geometrically so.

All that said, saying "until the number of heads exceeds the number of tails by 6, or vice-versa" is pretty far from math-ese, and maybe the OP meant for this to be interpreted as a one-sided stopping rule.

@greswd
please re-write your opening line. What exactly is the stopping rule here?
yup, it's two-sided
 
  • #16
PeroK said:
I've got an answer. It's pretty obvious really. The answer is ##36##.

First, the previous formula I gave was wrong. That one must be for the one-sided stopping rule. There is another sequence that starts with the same values. The correct one here:

https://oeis.org/A216263

It has a recursive formula. I put this into a spreadsheet and the partial sums clearly converge to ##36##.

The formula is:

##a(0) = 1, \ a(1) = 6, \ a(2) = 27##

And:

##a(k) = 6a(k-1) - 9a(k-2) + 2a(k-3)##

The number of tosses is ##n = 6 + 2k##.

The probability of finishing after ##n## tosses is:

##p(n) = \frac{a(k)}{2^{n-1}} = \frac{a(k)}{2^{5+2k}}##

The answer is:
$$E(n) = \sum_{k=0}^{\infty} np(n) = \sum_{k=0}^{\infty} (6+2k)\frac{a(k)}{2^{5+2k}}$$
It would be interesting to prove that the series sums to ##36##.
wow, that is very interesting, a whole number

I thought my Monte Carlo trials were pretty exhaustive and near the limit, but now I need to repeat and expand them
 
  • #17
greswd said:
wow, that is very interesting, a whole number

i thought my Monte-Carlo trials were pretty exhaustive and near the limit, but now i need to repeat and expand them

Yes, the exact answer is ##6^2##. You can get it from the theory of random walks. I believe there is a nice inductive proof.

In my original answer I unwittingly got the divergent series for the one-sided problem.
 
  • #18
@greswd here's a simple proof. First, let's transform the game to start at ##6## and take one away if there is a head and add one if there is a tail. The game ends when we get to ##0## or ##12##.

Now we let ##E_k## be the expected length of the game starting from ##k##. Note that ##E_0 = E_{12} = 0##. We want to find ##E_6## of course.

We can relate ##E_k## to ##E_{k+1}## and ##E_{k-1}## by:
$$E_k = 1 + \frac 1 2 E_{k-1} + \frac 1 2 E_{k+1}$$
This is because starting at ##k## we toss once and end up with ##k+1## or ##k-1## with equal probability.

This is a difference equation for ##E_k, \ k = 0, 1, \dots 12##.

You can work through and solve for ##E_6##. E.g. in the smaller game that ends at ##0## or ##4##, starting from ##2##, we have:
$$E_1 = 1 + \frac 1 2 E_2, \ E_2 = 1 + \frac 1 2 E_1 + \frac 1 2 E_3, \ E_3 = 1 + \frac 1 2 E_2$$
And we can easily solve for ##E_2 = 4##.

However, we also have the solution ##E_k = k(12-k)##, which you can check satisfies the difference equation. This gives us ##E_6 = 36##.

The method generalises to any target number for heads and tails, including cases where they are not equal and where the probabilities are not ##1/2##.

E.g. if the game terminates when you get ##H## heads in front or ##T## tails in front, then the expected length of the game is ##HT##.
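One can verify the closed form directly: substituting ##E_k = k(12-k)## into the difference equation gives
$$1 + \frac 1 2 (k-1)\bigl(12-(k-1)\bigr) + \frac 1 2 (k+1)\bigl(12-(k+1)\bigr) = 1 + \frac 1 2 \left(24k - 2k^2 - 2\right) = k(12-k),$$
as required, and the boundary conditions ##E_0 = E_{12} = 0## hold as well.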
 
  • #19
letting ##T## be the integer-valued r.v. given by the time until absorption, with ##X_i## being our Rademacher random variables (+/-1 with even odds) and using partial-sum notation

##S_k= X_1+...+ X_k##
##S_T= X_1+...+ X_T##

the 'easy' solution here uses more machinery:
apply the Wald identity for zero-mean random variables like our ##X_i## (or equivalently a common second-moment martingale), and compute ##E\big[S_T^2\big]## two different ways

##E\big[S_T^2\big] = \sigma_{X_i}^2 \cdot E\big[T\big] = E\big[T\big]##
##E\big[S_T^2\big] = p_\text{heads 'wins'}\cdot (n^2) + p_\text{tails 'wins'}\cdot (n^2) = n^2##
so for this problem ##E\big[T\big] = n^2 = 6^2 = 36##

the more direct solution
write this out as a finite state markov chain with states 0 through 12, and 0 and 12 are absorbing. (If you look at the Grinstead and Snell book in post 3, I think they have absorbing states always be bottom right in the matrix... some other texts have them in top right, and yet some others e.g. Feller vol 1 will have them, in special cases like this, split so one is in top left and one is in bottom right. The idea is to pick a 'nice' labeling of states -- there isn't exactly a canonical choice. When all absorbing states are in bottom right, it may be computationally easier to handle, and the blocked multiplication argument may be more clear, though it destroys nice symmetry for our problem, so I use the Feller setup. ) Consider the row stochastic transition matrix

##P =
\left[\begin{matrix}
1 & 0& 0 & 0 &0 &\dots & 0 \\
\frac{1}{2} &0 & \frac{1}{2}& 0 & 0 & \dots & 0 \\
0 & \frac{1}{2} & 0 & \frac{1}{2} &0 & \dots & 0\\
0 & 0& \frac{1}{2} & 0 & \frac{1}{2}&\dots & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & 0 & \dots & \frac{1}{2} & 0&\frac{1}{2}\\
0 & 0 & 0 & \dots & 0& 0&1\end{matrix}\right]##

now consider ##P'##, which is ##P## with the rows and columns for states 0 and 12 deleted. ##P'## is the transition matrix associated with the transient states.

so fix some starting state ##i## (##i=6## in this problem). The probability of not being absorbed after ##k## iterations is given by
##\mathbf e_i^T (P')^k\mathbf 1##

Now for any non-negative random variable, summing / integrating over the complementary cdf gives the first moment. So

##E\big[T_i\big] = \sum_{k=0}^\infty Pr\{T_i\gt k\} = \sum_{k=0}^\infty \mathbf e_i^T (P')^k\mathbf 1##

but since the choice of ##i## is arbitrary, we may as well do this computation for all transient states at once. I.e. consider
##Y_m = \sum_{k=0}^{m-1} (P')^k\mathbf 1= \big(\sum_{k=0}^{m-1} (P')^k\big)\mathbf 1\longrightarrow \big(I-P'\big)Y_m = \big(I - (P')^m\big) \mathbf 1##
##Y_m = \big(I-P'\big)^{-1}\mathbf 1 - \big(I-P'\big)^{-1} (P')^m \mathbf 1##
taking a limit
##\sum_{k=0}^\infty (P')^k\mathbf 1 = \lim_{m\to \infty} Y_m = \big(I-P'\big)^{-1}\mathbf 1##
(note: the reason the inverse and the limit exist is because ##P'## necessarily has spectral radius ##\lt 1##)

letting ##\mathbf t## be the expected-time-till-absorption vector, this reads
##t_i = \mathbf e_i^T \big(I-P'\big)^{-1}\mathbf 1## or
##\mathbf t = \big(I-P'\big)^{-1}\mathbf 1## or
##\big(I-P'\big)\mathbf t = \mathbf 1##
and we want to solve for ##\mathbf t## . note that
##\big(I-P'\big) = \left[\begin{matrix}
1 & -\frac{1}{2}& 0 & 0 & \dots & 0 \\
-\frac{1}{2} & 1 & -\frac{1}{2} & 0 & \dots & 0\\
0 & -\frac{1}{2} & 1 & -\frac{1}{2} & \dots & 0\\
0 & 0& -\frac{1}{2} & 1 & \dots & 0\\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & 0 & \dots & -\frac{1}{2} & 1\end{matrix}\right]##

which should remind people of Laplacians, Green's functions and many other things. The very basic and interesting point, that I mentioned in post 3 is, given the homogeneity / nice structure, this is exactly the 'extensive form' of solving the recurrence

##E[T_{i}] = 1 + \frac{1}{2}E[T_{i+1}] + \frac{1}{2}E[T_{i-1}]##
with boundary conditions at states 0 and 12 (or -6 and +6, depending on choice of labeling)
i.e. to make this explicit rearrange terms and we can see for each ##i##
## \mathbf e_i^T(I-P')\mathbf t = \frac{1}{2}E[T_{i-1}] + E[T_{i}] - \frac{1}{2}E[T_{i+1}] =1 = \mathbf e_i^T\mathbf 1 ##

we can directly solve
##\big(I-P'\big)\mathbf t = \mathbf 1##

The clever finish here, which generalizes to arbitrary ##n##, is to see another telescope, via left multiplication by ##\mathbf 1^T##:
##t_{1} = \frac{1}{2}\big(t_{1} + t_{2n-1}\big) = \mathbf 1^T\big(I-P'\big)\mathbf t = \mathbf 1^T\mathbf 1= 2n-1##
where for reasons of symmetry ##t_{2n-1} = t_{1}## (the column sums of ##I-P'## all vanish except the first and last, which are ##\frac{1}{2}##). We can now use these plus the original boundary conditions to have 2 'consecutive' or easy boundary conditions and solve, or notice an overlapping sub-problem
##E[T_i] = \text{expected time until reaching a state one step from absorption} + t_1##
now recurse, and repeat until the transient chain is trivially sized, and add all the results up -- the increments ##(2n-1) + (2n-3) + \dots + 3 + 1## form a triangular sum equal to ##n^2##.

- - - -
as I recall, the expected time until absorption of this particular random walk is used in analyzing Papadimitriou's randomized 2-SAT algorithm. The typical CS presentation also gets the final result via a telescope, though by a different route.
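For completeness, a short numerical version of the direct solve (NumPy assumed; ##n = 6##), which also confirms the telescoping value ##t_1 = t_{2n-1} = 2n-1##:

```python
import numpy as np

n = 6                                            # target lead; states 0..2n, absorbing at 0 and 2n
m = 2 * n - 1                                    # transient states 1 .. 2n-1

# (I - P') is tridiagonal: 1 on the diagonal, -1/2 on the off-diagonals
A = np.eye(m) - 0.5 * (np.eye(m, k=1) + np.eye(m, k=-1))
t = np.linalg.solve(A, np.ones(m))               # expected times until absorption

print(t[n - 1])                                  # starting state 6 (index n-1): 36.0
print(t[0], t[m - 1])                            # both equal 2n-1 = 11.0, as claimed
```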
 
Last edited:

1. How is the average number of coin flips to reach a certain head-tail difference calculated?

The cleanest route is first-step analysis: let ##E_k## be the expected number of remaining flips when the current difference between heads and tails is ##k##. A single flip moves ##k## up or down by one with equal probability, so ##E_k = 1 + \frac{1}{2}E_{k+1} + \frac{1}{2}E_{k-1}##, with ##E = 0## at the stopping values. Solving this difference equation gives the expected number of flips.

2. Is there a formula to calculate the average number of coin flips?

Yes. For a fair coin and the two-sided rule discussed in this thread (stop when either heads or tails lead by ##n##), the expected number of flips is ##n^2##. More generally, if the game stops at a lead of ##H## heads or ##T## tails, the expected length is ##HT##. For the one-sided rule the expectation diverges.

3. Does the initial head-tail difference affect the average number of coin flips?

Yes. If the count starts at a head-tail difference of ##x## and stops at ##\pm 6##, the expected number of remaining flips is ##36 - x^2##, which follows from the solution ##E_k = k(12-k)## derived in the thread.

4. Can the average number of coin flips be used to predict the outcome of future coin flips?

No, the average number of coin flips is a statistical measure and cannot be used to predict the outcome of future coin flips. Each coin flip is independent and has an equal chance of resulting in heads or tails regardless of the previous outcomes.

5. Are there any real-world applications for the average number of coin flips to reach a certain head-tail difference?

Yes, the average number of coin flips can be used in various fields such as statistics, gambling, and computer science. It can also be used to analyze the fairness of a coin or to determine the expected time for a certain event to occur in a probabilistic system.
