Probability that 1000 coin flips results in >600 tails

  • Thread starter: Jonathan212
  • Tags: Probability
In summary, the conversation discusses how to calculate the probability of getting a certain number of tails when flipping a coin N times. The probability of exactly M tails is N! / (M! * (N - M)!) * 0.5^M * (1 - 0.5)^(N - M), where N is the number of flips and M is the number of tails. The conversation also covers the binomial distribution and the normal approximation. However, there is no simple closed-form expression for the probability of getting at least a given number of tails; the binomial probabilities must be summed (or approximated).
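For concreteness, the title question can be answered with a direct tail sum of that binomial formula; a minimal sketch in Python (the function name is just illustrative):

Python:
from math import comb

def prob_at_least(n, m, p=0.5):
    # exact binomial tail: P(X >= m) = sum over k = m..n of C(n, k) * p^k * (1-p)^(n-k)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

print(prob_at_least(1000, 600))  # about 1.4e-10: the chance of at least 600 tails in 1000 fair flips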
  • #36
Jonathan212 said:
Look at some more numbers. All at 60% heads or more:

Probability of 15 or more heads in 25 throws = 1 in 4.7
Probability of 60 or more heads in 100 throws = 1 in 35
Probability of 150 or more heads in 250 throws = 1 in 1,062
Probability of 240 or more heads in 400 throws = 1 in 27,000
Probability of 300 or more heads in 500 throws = 1 in 220,000
Probability of 360 or more heads in 600 throws = 1 in 1,800,000
Probability of 480 or more heads in 800 throws = 1 in 1,200,000,000

In some fundamental sense, the problem is that you are looking at this the wrong way -- you are looking at this in terms of some kind of constant percentage -- i.e. 60% of n.

A much better way to look at this is to consider your threshold (e.g. at least 15 heads) versus its 'distance' from the mean, measured in standard deviations. If you revisit these numbers, you'll see that the threshold moves progressively farther from the mean as you go down the table. Look at a picture of the bell curve and you can see how rapidly it drops off the farther you go from the mean; the result here should not surprise you if you think about it this way (and know/assume that a normal approximation is reasonable).

So, here's quick look:

## \left[\begin{matrix}
\text{n} & \geq\text{threshold} & \text{raw excess over mean} & \text{ std deviations from the mean} &\text{normal approximation} \\
25 & 15 & 2.5 & 1 & 0.24 \\
100 & 60 & 10 & 2 & 0.027 \\
250 & 150 & 25 & 3.16 & 0.00085 \\
\vdots& \vdots& \vdots & \vdots & \vdots \\
500 & 300 & 50 & 4.47 & 4.0 \cdot 10^{-6} \\
1000 & 600 & 100 & 6.32 & 1.3 \cdot 10^{-10} \\\end{matrix}\right]##

for a binomial, you have
##\text{mean}= \mu = n \cdot p## and
##\text{variance}= n \cdot p(1-p) \to \text{standard deviation}= \sqrt{n \cdot p(1-p)}##

where the normal approximation used was:
##P(X\geq c) = P(X\gt c) \sim \frac{1}{c\sqrt{2 \pi}} e^{-\frac{c^2}{2}}##
(reference post #26 for more information)

and ##c## is the number of standard deviations between your threshold and the mean.

(Note: we can fine-tune this with the 1/2 continuity correction, as is common when approximating binomials with standard normals, among other techniques, though I skipped them for convenience. A more meticulous approach would also bound the error made when approximating a binomial by a normal.)
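To make the table above easy to reproduce, here's a minimal sketch of the same calculation (my own helper names, using the stated approximation):

Python:
import math

def z_score(n, threshold, p=0.5):
    # number of standard deviations between the threshold and the mean n*p
    return (threshold - n * p) / math.sqrt(n * p * (1 - p))

def tail_approx(c):
    # asymptotic normal tail approximation P(X >= c) ~ phi(c)/c  (see post #26)
    return math.exp(-c * c / 2) / (c * math.sqrt(2 * math.pi))

for n, threshold in [(25, 15), (100, 60), (250, 150), (500, 300), (1000, 600)]:
    c = z_score(n, threshold)
    print(n, threshold, round(c, 2), tail_approx(c))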
 
  • #37
Does the integral of the standard normal distribution have an analytical solution, by any chance? If not, how do you know what integration interval to use to get the probability to 2 significant figures?
 
  • #38
It has a name, the 'error function', and there are standard series and asymptotic expressions for it, but no closed-form solution in terms of elementary functions.

There are plenty of detailed tables.

Can't say I understand your second question, though.
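For what it's worth, the standard normal tail can be evaluated through the (complementary) error function, so no hand-rolled integration is strictly needed; a one-line sketch:

Python:
import math

def normal_tail(c):
    # P(X >= c) for a standard normal X, via the complementary error function
    return 0.5 * math.erfc(c / math.sqrt(2))

print(normal_tail(2))  # about 0.0228, i.e. the tail beyond 2 standard deviations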
 
  • #39
The second question was about generating the look-up table by numerical integration. If 2 s.f. are wanted for the probabilities, 2 s.f. or more are required for the look-up table, but an arbitrary integration step of, say, sigma*0.1 does not guarantee 2 s.f. -- or does it?
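One way to test this is to compare a crude numerical integral at a given step against a reference value; a minimal sketch (the sigma*0.1 step and the truncation point are only illustrative choices):

Python:
import math

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def tail_trapezoid(c, step=0.1, cutoff=40.0):
    # composite trapezoid rule for P(X >= c), truncated at 'cutoff'
    xs = [c + i * step for i in range(int((cutoff - c) / step) + 1)]
    return sum((phi(a) + phi(b)) / 2 * step for a, b in zip(xs, xs[1:]))

def tail_reference(c):
    # reference value via the complementary error function
    return 0.5 * math.erfc(c / math.sqrt(2))

for c in (1, 2, 3.16, 4.47, 6.32):
    approx, exact = tail_trapezoid(c), tail_reference(c)
    print(c, approx, exact, abs(approx - exact) / exact)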
 
  • #40
Jonathan212 said:
Does the integral of the standard distribution have an analytical solution by any chance? If not, how do you know what interval to use to the 2 significant figures in the probability

it's a B thread so I won't push this too hard: once you get away from really simple problems you won't get nice analytical solutions. Better to approximate and bound.

I can tell you that for a standard normal ##X## and any ##c\gt 0##
##\big(\frac{1}{{\sqrt{2\pi}}}e^{-\frac{c^2}{2}}\big)\big(\frac{c}{c^2 + 2}\big) \leq P(X\geq c) \leq \big(\frac{1}{\sqrt{2 \pi}} e^{-\frac{c^2}{2}}\big) \big(\frac{1}{c}\big)##

note that the ratio of the upper and lower bounds tends to one. Thus you can say that asymptotically
##P(X\geq c) \sim \big(\frac{1}{\sqrt{2 \pi}} e^{-\frac{c^2}{2}}\big) \big(\frac{1}{c}\big)##

which is what I used in that table in post #36.

I proved the upper bound in post #26. Unfortunately all of the various asymptotically tight lower bounds that I'm aware of are a bit more involved.

edit:
here's a classical way of getting a pretty good bound, courtesy of Feller

where ##\phi(x) := \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}##
(i.e. the density for the standard normal random variable)

consider that for any ##x \gt 0## we have the pointwise bound

##\big(1- 3x^{-4}\big)\phi(x) \lt \phi(x) \lt \big(1+x^{-2}\big)\phi(x)##

now integrating over this bound, for any ##c \gt 0##
##\big(c^{-1} - c^{-3}\big)\phi(c) = \int_{c}^\infty \big(1- 3x^{-4}\big)\phi(x)dx \lt \int_{c}^\infty\phi(x)dx = P\big(X\gt c\big) \lt \int_{c}^\infty\big(1+x^{-2}\big)\phi(x)dx = \big(c^{-1}\big)\phi(c)##

or if you prefer:
##\frac{1}{\sqrt{2\pi}}e^{-\frac{c^2}{2}}\big(\frac{1}{c} -\frac{1}{c^3}\big) \lt P\big(X\gt c\big) \lt \frac{1}{\sqrt{2\pi}}e^{-\frac{c^2}{2}}\big(\frac{1}{c}\big)##

where the upper bound is the same as before
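If you want to see both pairs of bounds bracketing the true value numerically, here's a minimal sketch (using scipy's survival function as the reference):

Python:
import math
from scipy.stats import norm

def phi(c):
    # standard normal density
    return math.exp(-c * c / 2) / math.sqrt(2 * math.pi)

for c in (1.0, 2.0, 4.47, 6.32):
    lower_a = phi(c) * c / (c**2 + 2)        # lower bound from the first inequality above
    lower_b = phi(c) * (1 / c - 1 / c**3)    # Feller-style lower bound (vacuous at c = 1)
    upper = phi(c) / c                       # common upper bound
    print(c, lower_a, lower_b, norm.sf(c), upper)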

- - - -
edit:
if you want to get the numerical values of the Gaussian integral, can't you just use a built-in Excel function?

Jonathan212 said:
Indeed, Excel's cumulative binomial runs out of steam at N = 1029; it fails above that value. But so does Excel's normal distribution at N = 1555 (the cumulative option).

or maybe not... though this does seem a bit odd to me. There really should not be an "N" input for the normal distribution. I think you are doing something wrong.

also
Jonathan212 said:
Btw the source of my true random data is not Excel but /dev/random or "haveged".

whatever this means, it sounds like you know how to program. Certainly I've looked up CDF (or complementary CDF in this case) values for standard normal random variables using Scipy on many occasions, which would be my suggestion. I guess I'm scratching my head a bit as to why you're using Excel, but I digress.

for example:
Python:
from scipy.stats import norm
print(1-norm.cdf(4.47))
print(1-norm.cdf(6.32))
prints out "3.9109798603e-06" and "1.30781607766e-10", which are basically the same as the bottom-right two entries of the table in post #36. Notice that there are no "N" values here.

This result should either give you confidence in scipy or the asymptotic approximations of the Gaussian integral that I used (or maybe increase confidence in both).
 
  • #41
StoneTemplePython said:
if you want to get the numerical values of the Gaussian integral, can't you just use a built-in excel function?

Indeed. Looks like the thing to do is to generate the look-up table with Excel for maximum accuracy, save it as a file, and load the file from a C or PHP program that handles the TRNG source. That program would simulate lots of coin flips per second, look up the resulting count of tails in the table, and display the statistical significance of the result every second. Guess what comes next.

Need some help with Excel's function. Got the following formula for the integral, but I think it is wrong; it does not match the binomial sum at all.

= 1 - NORMDIST( M - 1, N * 0.5, SQRT( N * 0.5 * (1-0.5) ), 1 )

Correct binomial sum is:

= 1 - BINOMDIST( M - 1, N, 0.5, 1 )

These are the formulas where Excel fails from N = 1555 upwards for the normal, and from N = 1030 upwards for the binomial.

EDIT: could also write the normal as follows and still get the same numbers while it now fails at N = 1718:

= NORMDIST( N - M, N * 0.5, SQRT( N * 0.5 * (1-0.5) ), 1 )

At N = 1000 it still has a 6.5% error relative to the binomial.
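For comparison, here's a rough scipy equivalent of those two spreadsheet formulas (the function name is just illustrative); it does not hit the same overflow limits at large N:

Python:
from scipy.stats import binom, norm

def tail_prob(n, m, p=0.5):
    exact = binom.sf(m - 1, n, p)  # exact binomial: P(at least m tails in n flips)
    approx = norm.sf(m - 1, loc=n * p, scale=(n * p * (1 - p)) ** 0.5)  # normal approximation
    return exact, approx

print(tail_prob(1000, 600))
print(tail_prob(2000, 1200))  # well past the N where the spreadsheet formulas gave up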
 
  • #42
Jonathan212 said:
At N = 1000 it still has a 6.5% error relative to the binomial. :nb)

To be clear I believe you mean ##\frac{\big \vert\text{binomial probability} - \text{normal approx}\big \vert}{\text{binomial probability}} = 6.5\%##

This too should not surprise you. For the N=1000 case you are talking about a vanishingly small probability of having your event of ##\geq 600## tails occur (which has a probability less than ##\frac{2.07}{10^{9}}##), so even if the normal is almost a perfect match to the binomial over the entire distribution (total variation distance is the underlying concept I'm hinting at), any very small differences in calculated values will be magnified when talking about extremely rare events. Put differently, it is well understood that normal approximations are weakest when you go deep into the tails / rare events. Having a ##\approx 94\%## match for something this rare is surprisingly good.

This is also part of the reason I suggested using Chernoff Bounds in post 32 -- you can get an exact upper bound and not have to worry about approximation errors for rare events.
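The bound from post 32 isn't restated here, but as a sketch, one common Chernoff/Hoeffding form already gives a valid upper bound of the size quoted above:

Python:
import math

def hoeffding_bound(n, m, p=0.5):
    # Hoeffding-type upper bound for a fair coin: P(X >= n*p + t) <= exp(-2*t**2 / n)
    t = m - n * p
    return math.exp(-2 * t * t / n)

print(hoeffding_bound(1000, 600))  # about 2.06e-9, consistent with the figure quoted above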
 
  • #43
The probability that you flip a coin 1000 times and 600 of them are tails?
50%...
That's the same question, just reworded as "if I flip a coin 10 times and it's heads 9 times, what are the odds it will land tails on the 10th flip?" Same answer, 50%.
 
  • #44
The question is about seeing (at least) 600 tails in total, not about the probability of a specific flip out of the 1000.
slappmunkey said:
That's the same question just reworded as "if I flip a coin 10 times and 9 times it's heads, what's the odds it will land tails on the 10th flip?"
It isn't.
 
  • #45
mfb said:
The question is about seeing (at least) 600 tails in total, not about the probability of a specific flip out of the 1000. It isn't.

It is. Each individual flip is a 50% chance to land either way; that means it's a 50/50 chance no matter how many flips you do.
 
  • #46
slappmunkey said:
It is. Each individual flip is a 50% chance to land either way, that means it's a 50/50% chance no matter how many flips you do.

two ideas:
i) graphical: pick any row n (##n\geq 2##) of Pascal's triangle. You can use row 1000 if you want. Are the numbers all the same? This has something to do with the binomial distribution; a quick check is sketched below, after point ii.
ii) suppose for a contradiction that what you've said is true and we toss a coin 1000 times. Then the probability of 0 tails is ##\frac{1}{2}##, the probability of 1 tail is ##\frac{1}{2}##, the probability of 2 tails is ##\frac{1}{2}##, ..., the probability of 999 tails is ##\frac{1}{2}##, and the probability of 1000 tails is ##\frac{1}{2}##. These 1001 mutually exclusive outcomes would then have total probability ##\frac{1001}{2} \gt 1##, which is a contradiction because probabilities sum to one.
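A quick check of point (i), sketched in Python:

Python:
from math import comb

n = 10
probs = [comb(n, k) / 2**n for k in range(n + 1)]  # P(exactly k tails in n fair flips)
print(probs)       # the values are clearly not all 0.5
print(sum(probs))  # and they sum to 1, as probabilities must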
 
  • #47
You are overthinking it.

Break it down flip by flip.
Flip 1: 50%
Flip 2: 50%
Flip... 1000: 50%

Now what's the average of all the flips... 50%...
 
  • #48
The OP's question has been answered.

Thread closed.
 
