Coin tossing -extended version-

  • Context: Graduate 
  • Thread starter: Stellaferox

Discussion Overview

The discussion revolves around the probabilities and expected number of coin flips required to obtain a string of K consecutive heads. Participants explore the relationship between the average number of flips needed and the probability of achieving such a string within a given number of flips, particularly focusing on the case where the number of flips is defined as N = 2^(K+1)-2.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • Some participants note that the average number of flips to get a string of K heads is given by N = 2^(K+1)-2, while the probability of obtaining at least one string of K heads in N flips is approximately 0.647949219.
  • Others argue that the two problems being discussed are fundamentally different: one is about the average number of flips needed, and the other is about the probability of obtaining a string of K heads in a fixed number of flips.
  • A participant mentions that as K increases, the probability of getting a run of K heads seems to converge towards 0.6323.
  • Some participants discuss the emergence of a limiting value for the probability when calculating the chances of obtaining K heads in a series of 2^(K+1)-2 tosses.
  • A later reply suggests that for large values of N and K, the number of runs of K heads in N tosses approximates a Poisson distribution, leading to a probability of at least one run of K heads approaching 1 - e^(-1).
  • Another participant expresses difficulty in deriving the probability formula and acknowledges the elegance of the Poisson distribution approach presented by another participant.

Areas of Agreement / Disagreement

Participants express differing views on the relationship between the average number of flips and the probability of obtaining K heads. There is no consensus on the underlying reasons for the observed probabilities, and the discussion remains unresolved regarding the exact nature of the relationship.

Contextual Notes

Some participants note challenges in calculating probabilities for large values of K and N, indicating limitations in computational tools for handling large powers of 2.

Stellaferox

We learned in previous posts that to obtain a string of at least K heads, one has to flip a coin N = 2^(K+1)-2 times on average (via Markov chains); e.g. for K = 3 heads, N would be 14.
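That average follows from the standard restart recurrence E_k = 2·E_(k-1) + 2 with E_0 = 0 (after k-1 heads, one more flip either finishes or restarts the count). A minimal sketch, with a function name of my own choosing:

```python
def expected_flips(k):
    """Average flips to see k consecutive heads, via E_j = 2*E_{j-1} + 2."""
    e = 0                  # E_0 = 0: a streak of zero heads needs no flips
    for _ in range(k):
        e = 2 * e + 2      # one more head: flip again, restart on tails
    return e               # closed form: 2**(k+1) - 2
```

For k = 3 this returns 14, matching the example above.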

On the other hand: flipping a coin 14 times, the probability that a string of at least 3 heads occurs is 0.647949219.

The strange thing is that if you let the string length K run from 1 to (let's say) 10, compute the matching average number of flips N = 2^(K+1)-2, and then reverse the exercise by calculating the probability of such a string occurring within N flips, the result is always around the above-mentioned probability of 0.647949219.
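The pattern can be reproduced with a short dynamic program over the current streak of heads (a sketch of my own, not code from the thread):

```python
def prob_run(k, n):
    """P(at least one run of k heads in n fair-coin tosses)."""
    state = [0.0] * k      # state[s]: P(current head streak = s, no run of k yet)
    state[0] = 1.0
    done = 0.0             # P(a run of k has already occurred)
    for _ in range(n):
        new = [0.0] * k
        new[0] = 0.5 * sum(state)          # tails resets the streak
        for s in range(k - 1):
            new[s + 1] = 0.5 * state[s]    # heads extends the streak
        done += 0.5 * state[k - 1]         # streak reaches k: absorbed
        state = new
    return done

for k in range(1, 11):
    n = 2 ** (k + 1) - 2
    print(k, n, round(prob_run(k, n), 6))
```

For K = 3, N = 14 this gives 0.64794921875, which rounds to the 0.647949219 quoted above.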

Can someone please explain?

Thanks in advance

Marc
 


These are not the same problem.
The first problem is the average number of flips needed to get a string of k heads. The second is the probability of a string of length k within a given number of flips.

Let P(n,k) = prob(the first string of k heads is completed on flip n)

The answer to the first is ∑(n=0,∞) nP(n,k)

The answer to the second is ∑(j=0,n)P(j,k)
 


Hi Mathman,

Thank you for your answer and the compact notation in logic of what it all boils down to.

Yes, I know of course. What I wanted to know is how these compared probabilities all give a result close to 0.64. I find it hard to believe that this is coin-cidental... (forgive the pun)

Marc
 


And, moreover, the probability seems to converge towards 0.6323 as K increases (with N = 2^(K+1)-2 as the average number of coin flips to obtain a string of K heads)
 


mathman said:
These are not the same problem.
The first problem is the average number of flips needed to get a string of k heads. The second is the probability of a string of length k within a given number of flips.


Stellaferox is talking about a pattern that emerges when you calculate the probability of getting a run of k heads in a series of 2^(k+1)-2 tosses. It appears to tend toward some limiting value.

2 6 0.671875
3 14 0.647949
4 30 0.639057
5 62 0.635327
6 126 0.633651
7 254 0.632865
8 510 0.632487

The first column is k, the second is the number of tosses n, and the third is the probability of getting a run of k heads in n tosses.

This is neat. I had not noticed this before, and I don't know why it happens.
 


@ techmologist,

Yes, the question is really bugging me. How should we try to solve this? It is a bit hard doing this by number crunching, as the larger powers of 2 can no longer be represented exactly in Excel or in most programming languages.

Marc
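One note on the number crunching: Python integers and fractions have arbitrary precision, so large powers of 2 are not an obstacle there. A sketch of an exact computation using rationals (the dynamic program tracks the current streak of heads; the function name is mine):

```python
from fractions import Fraction

def prob_run_exact(k, n):
    """Exact P(at least one run of k heads in n fair tosses), as a fraction."""
    half = Fraction(1, 2)
    state = [Fraction(0)] * k      # state[s]: current head streak is s, no run yet
    state[0] = Fraction(1)
    done = Fraction(0)
    for _ in range(n):
        new = [Fraction(0)] * k
        new[0] = half * sum(state)             # tails resets the streak
        for s in range(k - 1):
            new[s + 1] = half * state[s]       # heads extends the streak
        done += half * state[k - 1]            # run of k completed
        state = new
    return done

print(prob_run_exact(3, 14))   # 1327/2048
```

1327/2048 = 0.64794921875, the exact value behind the 0.647949219 quoted earlier in the thread.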
 


Stellaferox said:
@ techmologist,

Yes, the question is really bugging me. How should we try to solve this? It is a bit hard doing this by number crunching, as the larger powers of 2 can no longer be represented exactly in Excel or in most programming languages.

Marc

Hi, Marc. I think I have guessed the answer (but not proved it). For large values of N and K, the number of runs of K heads in N tosses has approximately a Poisson distribution with mean N/2^(K+1). I got this approximate mean from considering the exact expected number of runs of K heads in N tosses, which is

[1 + (N-K)/2] / 2^K

In the case we are considering, the mean number of runs of K heads approaches 1 as K grows large. Therefore, the probability that there is at least one run of K heads in N = 2^(K+1)-2 tosses is 1 minus the probability that there are none:

P{at least one run of K heads in 2^(K+1)-2 tosses} = 1 - P{0 runs of K heads in 2^(K+1)-2 tosses} ~ 1 - e^(-1) = 0.6321205588
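Substituting N = 2^(K+1)-2 into that exact mean makes the limit explicit (a step filled in here, not spelled out in the post):

[tex]\mu = \frac{1+\frac{N-K}{2}}{2^K} = \frac{2^K - \frac{K}{2}}{2^K} = 1 - \frac{K}{2^{K+1}} \rightarrow 1[/tex]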
 


Hi Techmologist,

I think you are right! I tried to substitute 2^(K+1)-2 for N in the probability formula and take the limit, but didn't succeed. Your approach is much more elegant and intuitive, I might say.
I didn't think of the Poisson distribution either. Rather stupid, considering I was playing with that function for the last month!
Thank you for thinking along.

Marc
 


Stellaferox said:
Hi Techmologist,

I think you are right! I tried to substitute 2^(K+1)-2 for N in the probability formula and take the limit, but didn't succeed. Your approach is much more elegant and intuitive, I might say.
I didn't think of the Poisson distribution either. Rather stupid, considering I was playing with that function for the last month!
Thank you for thinking along.

Marc

You're welcome. I'm glad you brought this up because otherwise I wouldn't have thought to apply the Poisson approximation to the problem of runs. I still need to check it to see for what range of numbers it gives reliable answers.

I've thought about it some more, and it is possible to see the tendency toward 1-e-1 in the exact formula when you substitute in N = 2K+1-2. Actually, since we are letting K grow large (at a much slower rate than N), the probability doesn't change much if you simply use N = 2K+1. Also, we can ignore those series of N tosses that begin with a run of K heads, since their contribution to the probability is negligible as K increases. That way, we can focus on only those series in which a tail is followed immediately by a run of K heads. So if P is the probability of a run of K heads in N = 2K+1 tosses, then

[tex]P \approx \sum_{j=1}^{\lfloor\frac{N}{K+1}\rfloor}(-1)^{j+1}\binom{N-jK}{j}\frac{1}{2^{j(K+1)}}[/tex]

substituting N for 2^(K+1) gives

[tex]P \approx \sum_{j=1}^{\lfloor\frac{N}{K+1}\rfloor}(-1)^{j+1}\binom{N-jK}{j}\frac{1}{N^j}[/tex]

Since N/(K+1) --> infinity and (N-jK)_j/N^j --> 1 as K grows large*, the above sum simplifies to

[tex]P \approx \sum_{j=1}^{\infty}(-1)^{j+1}\frac{1}{j!} = 1 - e^{-1}[/tex]
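As a quick numerical check of this limit (a sketch; the truncation point of the series is arbitrary):

```python
import math

# Partial sum of the alternating series sum_{j>=1} (-1)^(j+1)/j!,
# which should settle on 1 - 1/e almost immediately
s = sum((-1) ** (j + 1) / math.factorial(j) for j in range(1, 20))
print(s, 1 - math.exp(-1))   # both ~0.6321205588
```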

This reasoning can be extended to get the full Poisson distribution for the number of runs of K "successes" in a series of N Bernoulli trials, with success probability p and failure probability q = 1-p. If we let [tex]N = \mu/(qp^K)[/tex] as K grows large, so that N >> K >> 1, then for small j, the probability that there are exactly j runs of K successes in N trials tends to

[tex]e^{-\mu}\frac{\mu^j}{j!}[/tex]


*(n)_i is shorthand for n(n-1)...(n-i+1)
 
