Probability that 1000 coin flips result in >600 tails

  • Context: High School 
  • Thread starter: Jonathan212
  • Tags: Probability

Discussion Overview

The discussion revolves around calculating the probability of obtaining more than 600 tails in 1000 coin flips. Participants explore various mathematical approaches, including the binomial distribution and its normal approximation, while addressing computational challenges and statistical significance.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • Some participants inquire about the formula for calculating the probability of getting more than a certain number of tails in multiple coin flips.
  • Others suggest using the binomial distribution and normal approximation for this calculation.
  • A participant mentions the formula for the binomial distribution and notes the need to sum probabilities from 601 to 1000 tails to answer the original question.
  • Concerns are raised about computational limitations in Excel when dealing with large numbers, particularly with factorial calculations.
  • Some participants express confusion over discrepancies between results from the binomial and normal distributions, particularly regarding the expected probabilities.
  • There are discussions about the statistical significance of achieving a high number of tails, with one participant suggesting a probability of 1 in 21 million for more than 600 tails in 1000 flips.
  • Participants also discuss the cumulative distribution function in Excel and how to correctly apply it for calculating probabilities of getting at least a certain number of tails.
  • One participant mentions the potential for using coding to sum probabilities computationally, while others discuss the implications of using normal approximations versus exact binomial calculations.
  • There are references to the Chernoff Bound and its relevance to coin tossing.
  • Some participants express skepticism about the likelihood of obtaining such extreme results in practical scenarios.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the best method for calculating the probabilities, and there are multiple competing views regarding the reliability of the normal approximation versus the binomial distribution. The discussion remains unresolved with ongoing questions about the accuracy of different approaches.

Contextual Notes

Participants note limitations in computational tools like Excel when handling large values, and there are unresolved questions about the accuracy of normal approximations compared to binomial calculations, particularly for high values of N.

Who May Find This Useful

This discussion may be useful for those interested in probability theory, statistical analysis, and computational methods for assessing random processes, particularly in the context of coin flipping experiments.

  • #31
Alright, but I wasn't expecting such a combinatorial explosion.
 
  • #32
Jonathan212 said:
The way to assess the statistical significance of a true random number generator's observed bias. Got the answer in Excel

'True random number generator' is a bit of a loaded term, and Excel isn't known for good pseudo-random number generation.

Jonathan212 said:
Got the answer in Excel, it is as BvU says. Only thing is, the result is a bit suspicious for high N: probability of 600 OR MORE heads in 1000 throws is 1 in 19 billion.

You could run into numeric stability issues when calculating and summing extremely small probabilities, so extra care is needed. That said, the result sounds about right assuming a fair coin. I know the correct probability cannot be bigger than ##2.07 \cdot 10^{-9}##. Here's why:

Every probability ##p## satisfies ##0\leq p \leq 1##. Now, using Chernoff bounds, I can say, with

##S_n = X_1 + X_2 + ... + X_n##
where each ##X_k## is an independent Bernoulli trial (coin toss) with value 1 for tails and 0 for heads,
i.e. ##S_n## is a random variable that counts the number of tails.

So the expected number of tails ##= \mu = E[S_n] = n \cdot E[X_k] = 1000 \cdot 0.5 = 500##

what the Chernoff bound tells us is

## 0 \lt Pr\Big\{\text{600 or More Tails}\Big\} = Pr\Big\{\big(S_n - \mu\big) \geq 100\Big\} \leq \exp\big(\frac{-2\cdot 100^2}{1000}\big) \lt 2.07 \cdot 10^{-9}##

(note the thread title says ##\gt 600## tails whereas a more recent posting says ##\geq 600##; I went with the latter)
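
As a quick numerical sanity check of that bound (a minimal sketch of my own, not a computation from the thread; it assumes scipy is available):

Python:
from math import exp
from scipy.stats import binom

n, t = 1000, 100                  # flips, and the excess over the mean of 500

# Hoeffding/Chernoff bound: P(S_n - mu >= t) <= exp(-2 * t^2 / n)
bound = exp(-2 * t**2 / n)        # exp(-20), about 2.061e-09

# exact binomial tail for comparison: P(S_n >= 600) = P(S_n > 599)
exact = binom.sf(599, n, 0.5)     # comfortably below the bound

print(bound, exact)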
- - - - -

edit:
for a basic look at Chernoff Bounds for coin tossing, consider looking at pages 20-30, here:
https://ocw.mit.edu/courses/electri...ce-fall-2010/readings/MIT6_042JF10_chap19.pdf

part of the math for CS course, here:
https://ocw.mit.edu/courses/electri...tics-for-computer-science-fall-2010/readings/
 
  • #33
Indeed, Excel's cumulative binomial runs out of steam at N = 1029; it fails above that value. But so does Excel's normal distribution at N = 1555 (the cumulative option). Btw, the source of my true random data is not Excel but /dev/random or "haveged".
 
  • #34
The plot looks alright though.

X-axis = N
Y-axis = 1 / probability that N throws result in >=N*0.6 heads.

[Attachment: Image4.jpg]
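
(The series behind such a plot can be regenerated exactly with a binomial survival function -- a minimal sketch, assuming a fair coin and scipy rather than the Excel setup used in the thread:)

Python:
from math import ceil
from scipy.stats import binom

# 1 / P(at least 0.6*N heads in N fair flips), for a range of N
for N in range(50, 1001, 50):
    k = ceil(0.6 * N)
    p = binom.sf(k - 1, N, 0.5)   # survival function: P(X > k-1) = P(X >= k)
    print(N, 1 / p)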
  • #35
It's 100% if you do 1000 coin flips again and again until you get greater than 600 tails.

Well, unless you're cursed. Then it's 0%, even with a double-tailed coin. You get killed before you flip the coin the 1000th time.
 
  • #36
Jonathan212 said:
Look at some more numbers. All at 60% heads or more:

Probability of 15 or more heads in 25 throws = 1 in 4.7
Probability of 60 or more heads in 100 throws = 1 in 35
Probability of 150 or more heads in 250 throws = 1 in 1,062
Probability of 240 or more heads in 400 throws = 1 in 27,000
Probability of 300 or more heads in 500 throws = 1 in 220,000
Probability of 360 or more heads in 600 throws = 1 in 1,800,000
Probability of 480 or more heads in 800 throws = 1 in 1,200,000,000

In some fundamental sense, the problem is that you are looking at this the wrong way -- in terms of a constant percentage, i.e. 60% of n.

A much better way to look at this is to consider your threshold (e.g. at least 15 heads) versus its 'distance' from the mean, measured in standard deviations. If you revisit these numbers, you'll see that the distance between the threshold and the mean grows as you go down the table. Look at a picture of the bell curve and you can see how rapidly it drops off the farther you go from the mean; the result here should not surprise you if you think about it this way (and know/assume that a normal approximation is reasonable).

So, here's quick look:

## \left[\begin{matrix}
\text{n} & \geq\text{threshold} & \text{raw excess over mean} & \text{ std deviations from the mean} &\text{normal approximation} \\
25 & 15 & 2.5 & 1 & 0.24 \\
100 & 60 & 10 & 2 & 0.027 \\
250 & 150 & 25 & 3.16 & 0.00085 \\
\vdots& \vdots& \vdots & \vdots & \vdots \\
500 & 300 & 50 & 4.47 & 4.0 \cdot 10^{-6} \\
1000 & 600 & 100 & 6.32 & 1.3 \cdot 10^{-10} \\\end{matrix}\right]##

for a binomial, you have
##\text{mean}= \mu = n \cdot p## and
##\text{variance}= n \cdot p(1-p) \to \text{standard deviation}= \sqrt{n \cdot p(1-p)}##

where the normal approximation used was:
##P(X\geq c) = P(X\gt c) \sim \frac{1}{c\sqrt{2 \pi}} e^{-\frac{c^2}{2}}##
(reference post #26 for more information)

and ##c## is precisely the number of standard deviations from the mean that your threshold is.

(Note: we can fine-tune this with the 1/2 continuity correction, as is common when approximating binomials with standard normals, among other techniques, though I skipped them for convenience. A more meticulous approach would also bound the error made when approximating a binomial by a normal.)
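
(For reference, a minimal Python sketch of my own that reproduces the last two columns of the table from the same asymptotic tail formula:)

Python:
from math import sqrt, exp, pi

def tail_approx(n, threshold, p=0.5):
    # standardize the threshold, then use P(X >= c) ~ exp(-c^2/2) / (c sqrt(2 pi))
    mu = n * p
    sd = sqrt(n * p * (1 - p))
    c = (threshold - mu) / sd
    return c, exp(-c**2 / 2) / (c * sqrt(2 * pi))

for n, k in [(25, 15), (100, 60), (250, 150), (500, 300), (1000, 600)]:
    c, prob = tail_approx(n, k)
    print(f"n={n}: threshold={k}, std devs={c:.2f}, approx={prob:.2g}")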
 
  • #37
Does the integral of the standard normal distribution have an analytical solution, by any chance? If not, how do you know what integration interval to use to get the 2 significant figures in the probability?
 
  • #38
It has a name, the 'error function', and there is an expression for it, but no closed-form analytical solution.

There are plenty of detailed tables.

Can't say I understand your second question, though.
 
  • #39
The second question was about generating the look-up table by numerical integration. If 2 s.f. are wanted for the probabilities, 2 s.f. or more are required for the look-up table, but an arbitrary integration step of, say, ##0.1\sigma## does not guarantee 2 s.f. -- or does it?
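
(One way to sidestep the step-size guesswork, as a minimal sketch assuming scipy is available: adaptive quadrature returns an error estimate alongside the integral, so the accuracy is known rather than hoped for.)

Python:
from math import sqrt, exp, pi
from scipy.integrate import quad

def phi(x):
    # standard normal density
    return exp(-x**2 / 2) / sqrt(2 * pi)

# tail probability P(X > c), with an absolute error estimate from the quadrature
c = 6.32
value, err_estimate = quad(phi, c, 40)   # 40 is effectively infinity here
print(value, err_estimate)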
 
  • #40
Jonathan212 said:
Does the integral of the standard normal distribution have an analytical solution, by any chance? If not, how do you know what integration interval to use to get the 2 significant figures in the probability?

It's a B-level thread so I won't push this too hard: once you get away from really simple problems you won't get nice analytical solutions. It's better to approximate and bound.

I can tell you that for a standard normal ##X## and any ##c\gt 0##
##\big(\frac{1}{{\sqrt{2\pi}}}e^{-\frac{c^2}{2}}\big)\big(\frac{c}{c^2 + 2}\big) \leq P(X\geq c) \leq \big(\frac{1}{\sqrt{2 \pi}} e^{-\frac{c^2}{2}}\big) \big(\frac{1}{c}\big)##

note that the ratio of the upper and lower bounds tends to one. Thus you can say that asymptotically
##P(X\geq c) \sim \big(\frac{1}{\sqrt{2 \pi}} e^{-\frac{c^2}{2}}\big) \big(\frac{1}{c}\big)##

which is what I used in that table in post #36.

I proved the upper bound in post #26. Unfortunately all of the various asymptotically tight lower bounds that I'm aware of are a bit more involved.

edit:
here's a classical way of getting a pretty good bound, courtesy of Feller

where ##\phi(x) := \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}##
(i.e. the density for the standard normal random variable)

consider that for any ##x \gt 0## we have the pointwise bound

##\big(1- 3x^{-4}\big)\phi(x) \lt \phi(x) \lt \big(1+x^{-2}\big)\phi(x)##

now integrating this bound from ##c## to ##\infty##, for any ##c \gt 0##:
##\big(c^{-1} - c^{-3}\big)\phi(c) = \int_{c}^\infty \big(1- 3x^{-4}\big)\phi(x)dx \lt \int_{c}^\infty\phi(x)dx = P\big(X\gt c\big) \lt \int_{c}^\infty\big(1+x^{-2}\big)\phi(x)dx = \big(c^{-1}\big)\phi(c)##

or if you prefer:
##\frac{1}{\sqrt{2\pi}}e^{-\frac{c^2}{2}}\big(\frac{1}{c} -\frac{1}{c^3}\big) \lt P\big(X\gt c\big) \lt \frac{1}{\sqrt{2\pi}}e^{-\frac{c^2}{2}}\big(\frac{1}{c}\big)##

where the upper bound is the same as before
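
(A quick numerical check of these bounds -- my own sketch, assuming scipy -- using the survival function as ground truth:)

Python:
from math import sqrt, exp, pi
from scipy.stats import norm

def phi(x):
    # standard normal density
    return exp(-x**2 / 2) / sqrt(2 * pi)

for c in [1.0, 2.0, 4.47, 6.32]:
    lower = phi(c) * (1 / c - 1 / c**3)   # Feller lower bound
    upper = phi(c) / c                    # upper bound
    exact = norm.sf(c)                    # P(X > c)
    print(c, lower, exact, upper)         # lower < exact < upper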

- - - -
edit:
If you want the numerical values of the Gaussian integral, can't you just use a built-in Excel function?

Jonathan212 said:
Indeed, Excel's cumulative binomial runs out of steam at N = 1029; it fails above that value. But so does Excel's normal distribution at N = 1555 (the cumulative option).

Or maybe not... though this does seem a bit odd to me. There really should not be an "N" input for the normal distribution; I think you are doing something wrong.

also
Jonathan212 said:
Btw the source of my true random data is not Excel but /dev/random or "haveged".

Whatever this means, it sounds like you know how to program. I've certainly looked up CDF (or, in this case, complementary CDF) values for standard normal random variables using SciPy on many occasions, which would be my suggestion. I guess I'm scratching my head a bit as to why you're using Excel, but I digress.

for example:
Python:
from scipy.stats import norm

# complementary CDF: P(X > c) for a standard normal
print(1 - norm.cdf(4.47))
print(1 - norm.cdf(6.32))
prints out "3.9109798603e-06" and "1.30781607766e-10" which are basically the same thing as the bottom right two entries of the table post #36. Notice that there are no "N" values here.

This result should either give you confidence in scipy or the asymptotic approximations of the Gaussian integral that I used (or maybe increase confidence in both).
 
  • #41
StoneTemplePython said:
If you want the numerical values of the Gaussian integral, can't you just use a built-in Excel function?

Indeed. Looks like the thing to do is generate the look-up table with Excel for maximum accuracy, save it as a file, and load that file from a C or PHP program that handles the TRNG source, simulates lots of coin flips per second, and looks up the resulting count of tails in the table, in order to display the statistical significance of the result every second. Guess what comes next.

I need some help with Excel's function. I've got the following formula for the integral, but I think it is wrong; it does not match the binomial sum at all.

= 1 - NORMDIST( M - 1, N * 0.5, SQRT( N * 0.5 * (1-0.5) ), 1 )

Correct binomial sum is:

= 1 - BINOMDIST( M - 1, N, 0.5, 1 )

These are the formulas with which Excel fails from N = 1555 upwards for the normal, and from N = 1030 upwards for the binomial.

EDIT: I could also write the normal as follows and still get the same numbers, while it now fails only from N = 1718 upwards:

= NORMDIST( N - M, N * 0.5, SQRT( N * 0.5 * (1-0.5) ), 1 )

At N = 1000 it still has a 6.5% error relative to the binomial.
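
(For a cross-check outside Excel -- a minimal sketch of my own using scipy's survival functions for the same two quantities, with example values for N and M:)

Python:
from math import sqrt
from scipy.stats import binom, norm

N, M = 1000, 600
mu, sd = N * 0.5, sqrt(N * 0.5 * (1 - 0.5))

# counterpart of  = 1 - NORMDIST( M - 1, N * 0.5, SQRT( N * 0.5 * (1-0.5) ), 1 )
normal_tail = norm.sf(M - 1, loc=mu, scale=sd)

# counterpart of  = 1 - BINOMDIST( M - 1, N, 0.5, 1 )
binom_tail = binom.sf(M - 1, N, 0.5)

print(normal_tail, binom_tail)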
 
  • #42
Jonathan212 said:
At N = 1000 it still has a 6.5% error relative to the binomial.

To be clear I believe you mean ##\frac{\big \vert\text{binomial probability} - \text{normal approx}\big \vert}{\text{binomial probability}} = 6.5\%##

This too should not surprise you. For the N = 1000 case you are talking about a vanishingly small probability of your event of ##\geq 600## tails occurring (a probability less than ##2.07 \cdot 10^{-9}##), so even if the normal is almost a perfect match to the binomial over the entire distribution (total variation distance is the underlying concept I'm hinting at), any very small differences in calculated values will be magnified when talking about extremely rare events. Put differently, it is well understood that normal approximations are weakest deep in the tails / for rare events. Having a ##\approx 94\%## match for something this rare is surprisingly good.

This is also part of the reason I suggested using Chernoff Bounds in post 32 -- you can get an exact upper bound and not have to worry about approximation errors for rare events.
 
  • #43
The probability that you flip a coin 1000 times and it comes up tails 600 times? 50%...
That's the same question, just reworded as "if I flip a coin 10 times and it's heads 9 times, what are the odds it will land tails on the 10th flip?" Same answer: 50%.
 
  • #44
The question is about seeing (at least) 600 tails in total, not about the probability of a specific flip out of the 1000.
slappmunkey said:
That's the same question, just reworded as "if I flip a coin 10 times and it's heads 9 times, what are the odds it will land tails on the 10th flip?"
It isn't.
 
  • #45
mfb said:
The question is about seeing (at least) 600 tails in total, not about the probability of a specific flip out of the 1000. It isn't.

It is. Each individual flip is a 50% chance to land either way; that means it's a 50/50 chance no matter how many flips you do.
 
  • #46
slappmunkey said:
It is. Each individual flip is a 50% chance to land either way; that means it's a 50/50 chance no matter how many flips you do.

two ideas:
i) graphical: pick any row ##n## (##n\geq 2##) of Pascal's triangle. You can use row 1000 if you want. Are the numbers all the same? This has something to do with the binomial distribution.
ii) suppose for a contradiction that what you've said is true and we toss a coin 1000 times. Then the probability of 0 tails is ##\frac{1}{2}##, the probability of 1 tail is ##\frac{1}{2}##, the probability of 2 tails is ##\frac{1}{2}##, ..., the probability of 999 tails is ##\frac{1}{2}##, and the probability of 1000 tails is ##\frac{1}{2}##. Summing over all 1001 mutually exclusive outcomes gives a total probability of ##\frac{1001}{2} \gt 1##, which is a contradiction because probabilities sum to one.
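
(To make i) concrete, a minimal sketch -- my own addition, assuming scipy -- printing the actual binomial probabilities for a small case:)

Python:
from scipy.stats import binom

# P(exactly k tails in 10 fair flips): clearly not 1/2 for every k
for k in range(11):
    print(k, binom.pmf(k, 10, 0.5))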
 
  • #47
You are overthinking it.

Break it down flip by flip.
Flip 1: 50%
Flip 2: 50%
Flip... 1000: 50%

Now what's the average of all the flips... 50%...
 
  • #48
The OP's question has been answered.

Thread closed.
 
