# Coin flip probabilities and relevance

by Verasace
Tags: coin, flip, probabilities, relevance
Mentor
P: 15,147
 Quote by FearsForLife Actually V, your first mistake was posting in a forum designed for physicists. I've met a few; they tend to be closed-minded.
Your biggest mistake is dredging up a thread that hasn't been active for *five* years.
 P: 3 Wow. Awesome, talk about the biggest bump ever. Well, I guess that guy gave up. I insulted somebody who hopefully is 5 years wiser about how to use the internet, and I'll shut up and go away now. Smooches FFL
P: 106
 Quote by FearsForLife ... but if you hit this using induction, maybe it'll clear some stuff up. If the odds of a coin flip is 50% for n, and the odds of a coin flip is 50% for n+1, that should (if I've got my induction done correctly) prove that, to infinity (and beyond!), each coin flip should uniquely have a probability of 50%.
Does it make any sense to use induction to prove that?
...infinity and beyond?

Where did you learn induction?
Where did you learn math?

Hmmm...I'm quite sure you and Verasace were classmates...
 P: 101 Bad form by a lot of people here...better off addressing misconceptions with facts than with belittling
 P: 240 I feel the urge to mention a thing or two, in addition to everything already said, to counter the OP's notion of so-called "probability pressure".

1. Say we get H on the 1st toss. If any such pressure exists, then after the 1st toss the pressure must be toward T, to bring the ratio back to 50-50. So we must get T on the 2nd toss (since there is pressure toward it and negative pressure toward H). Under the pressure theory, therefore, we must get alternating H and T. (Of course, a newer "pressure" theory would then have to be invented to explain real-life sequences.)

2. I want to know whether the OP really tossed a coin or used computer-generated random numbers for his graph. If computer-generated numbers were used, did he perform a statistical test of randomness? If so, how did he perform it? His concept of pressure would affect the distribution of any random variable, so he cannot rely on any existing statistical test. Unless he tested the numbers for randomness in a "logical" way, his graphs and findings are not valid.
 HW Helper P: 1,371 The simplest explanation (other than that the OP doesn't know what he's doing) is that he has confused empirical observation with theoretical probability (which goes back to not knowing what he's doing).
 P: 1 As others have not so politely stated: get a clue. Just because there is a statistical anomaly, there is no "probability force" that will make it correct itself. It may never correct itself; or, more precisely, it may take an infinite number of chances to correct itself. At every odd-numbered toss you are guaranteed a statistical anomaly: after one toss the tally will be either all heads or all tails. You are literally saying that if you toss a coin twice and the first result is heads, then the second toss is bound to be tails. Which, hopefully, you understand isn't the case...
HW Helper
P: 3,684
 Quote by roryjester Since infinity is something almost impossible to grasp, except maybe abstractly, like mathmatical singularities, it might not be such a great idea to use it to argue more mundane things like the flipping of a coin.
Actually, infinite numbers are pretty easy to pick up (depending on the type you pick); I'm not sure where this idea comes from, though it's common. But you're right that we don't need them; there are finitistic ways of thinking about it. Here's one:

For any positive tolerance (say, 0.1%) and any certainty less than 1 (say, 99%), there is an M such that for all N > M,
the chance that the fraction of heads in N fair coin flips falls within that tolerance of 50% (between 49.9% and 50.1% in this example) is at least the specified certainty.
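The statement above can be checked empirically. Below is a minimal sketch in Python (standard library only; the function name, seed, and sample sizes are arbitrary choices of mine, not from the thread) showing the observed fraction of heads concentrating near 50% as N grows:

```python
import random

def heads_fraction(n, seed=None):
    """Flip a fair coin n times; return the observed fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

# As n grows, the observed fraction settles into an ever-narrower band
# around 0.5 -- even though no individual flip "remembers" earlier ones.
for n in (100, 10_000, 1_000_000):
    print(n, heads_fraction(n, seed=42))
```

The typical deviation from 50% shrinks like 1/sqrt(n), which is why the M in the statement above always exists.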

 Quote by roryjester Now, the OP's question as I understand it, is: Is there pressure for the next million or so spins after the 'all-heads' sample to favour more tails than heads? I'd say for any individual flip after that, the probability would be 50/50 just as if the coin never knew it flipped heads a million times in a row before that (heads again, baby!?). But the OP's question is really bigger than that. He's saying in the infinite minus 1 millions spins after that, is there gonna be more tails than heads? This is just my gambler's intuition speaking, but I would say yes, although not by much (the ratio will still be approaching 50:50 for all 'practical' matter).
Those contradict each other! If each following coin flip is unbiased, then the collection of coin flips will also be unbiased.

 Quote by roryjester So if u got infinite amounts of money, time, and patience, it might not be a bad idea to bet on tails in the infinite time after u see the first million flips go all heads, although another question to be asked is: Were u there to see the previous million flips before the all-heads streak, cos u know, it might have been a 'million-all-tails' result set before that; then ur back where u started: 50/50 and no real or perceived 'pressure' to compensate for older statistical anomalies. Please inform me if my jerry-rigged gambler's intuition is wrong here somehow.
Yes, intuition has failed you this time. It happens.

 Quote by roryjester By the way, does anyone here know if slot machines are truly random, or do I just have to stay with a cold machine until it is 'pressured' into becoming hot again (so I can get all my money back)?
Slot machines are generally random, modulo concerns about their use of pseudorandom numbers rather than true RNGs (don't worry about it; it doesn't affect your question). But one slot machine need not be like another. It's possible to have one machine in a room that pays out more often than others in that same room -- and from what I hear, that's not uncommon. So within a machine, it's essentially random, but between machines I wouldn't expect similar long-term results.
P: 34
 Quote by Verasace Thanks for the unridiculed (almost) reply. If there is no pressure to return to 50/50, then why doesn't one just flip heads indefinitely?
The fallacy in your reasoning is assuming that because something is likely to happen, there must be a force pushing toward it. That is only indirectly true: as n (the number of tosses) approaches infinity, the heads-to-tails ratio approaches 1:1 simply because, as n increases, it becomes increasingly improbable to sustain a streak that is heavily weighted toward heads or toward tails.

Here's a mini-demonstration:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
...

This is Pascal's triangle, and each row gives the relative probabilities of getting a certain number of heads. So if you flipped a coin 5 times (the row 1 5 10 10 5 1), there is a 10/32 chance of getting exactly 3 heads, whereas there's only a 1/32 chance of getting 0 heads.
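The triangle's entries are binomial coefficients, so these probabilities can be computed directly. A small sketch in Python (using the standard library's `math.comb`; the function name is my own): the row 1 5 10 10 5 1 corresponds to 5 flips, with denominator 2^5 = 32.

```python
from math import comb

def head_count_probability(n_flips, k_heads):
    """P(exactly k heads in n fair flips) = C(n, k) / 2**n."""
    return comb(n_flips, k_heads) / 2 ** n_flips

print(head_count_probability(5, 3))  # 10/32 = 0.3125
print(head_count_probability(5, 0))  # 1/32  = 0.03125
```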

Anyway - a little off-topic, I guess. Big picture - your mistake is in attributing the relatively higher chance to some "law of averages" that has a will of its own.
 Sci Advisor P: 3,282 I think these two qualitative statements need not be contradictory:

1) An imbalance in the first N tosses of a fair coin gives no information that improves our prediction of the result of the next M tosses.
2) As a sequence of tosses of a fair coin progresses, it is likely there will be times when the total number of heads has a big lead over the total number of tails, or vice versa.

As I recall, one of Feller's books discusses 2) in a mathematically rigorous way: one can compute the probability of one side or the other taking a lead of a certain size. It's understandable that people who look at graphs of real or simulated coin-toss sequences get the impression that swings one way are balanced by swings the other way.

It would be interesting to see results from a model of coin tosses where the tosses are not independent and a specific formula describes the dependence. For example, suppose that for toss i < 7 the probability of a head is 1/2, and for i > 6 it is 1/2 + (0.4)(3 - K)/3, where K is the number of heads in the previous 6 tosses.
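That dependent-coin model is straightforward to simulate. Here is a sketch in Python (standard library only; the window size of 6 and the 0.4 coefficient come from the post above, while the function name and other details are my own choices):

```python
import random

def biased_toward_balance(n_tosses, seed=None):
    """Simulate tosses where P(heads) pushes against the recent imbalance.

    The first 6 tosses are fair; afterwards P(heads) = 1/2 + 0.4*(3 - K)/3,
    where K is the number of heads among the previous 6 tosses.
    """
    rng = random.Random(seed)
    tosses = []
    for i in range(n_tosses):
        if i < 6:
            p = 0.5
        else:
            k = sum(tosses[-6:])  # heads among the previous 6 tosses
            p = 0.5 + 0.4 * (3 - k) / 3
        tosses.append(1 if rng.random() < p else 0)
    return tosses

result = biased_toward_balance(100_000, seed=0)
print(sum(result) / len(result))  # overall heads fraction
```

Unlike a fair coin, this process really does have "pressure": the conditional probability of heads depends on the recent history, which is exactly the kind of dependence a fair coin lacks.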
 P: 2 Forget "waves", there are none. Try to look at it this way. In an infinite number of tosses the heads/tail ratio will come out to very, very close to 50%/50%. Agreed? And in that infinite number of tosses, there will have been, almost assuredly, a streak of 1000 straight heads. Also agreed? But the wave theorist says, "Woah, after those 1000 heads, assuming it was running close to 50/50 up to then, there would have to be a "tail wave" for it to end up 50%/50% at the end". But the fallacy is: in infinity, there is no "end". The intuitive force that makes the "wave" seem inevitable is tied up in the human brain's inability to conceive of or think in terms of infinity.
 P: 2 "The problem with probability is that there are some certainties about it ... The fallacy is the whole concept of infinity." If you need to have certainties, and you outright reject the concept of infinity, then the study of probabilities is going to lead you to a stone wall.
P: 4,573
 Quote by yudiski4 Forget "waves", there are none. Try to look at it this way. In an infinite number of tosses the heads/tail ratio will come out to very, very close to 50%/50%. Agreed? And in that infinite number of tosses, there will have been, almost assuredly, a streak of 1000 straight heads. Also agreed? But the wave theorist says, "Woah, after those 1000 heads, assuming it was running close to 50/50 up to then, there would have to be a "tail wave" for it to end up 50%/50% at the end". But the fallacy is: in infinity, there is no "end". The intuitive force that makes the "wave" seem inevitable is tied up in the human brain's inability to conceive of or think in terms of infinity.
You don't need to think of it necessarily in terms of infinity, but rather in terms of something "really large".

For many practical purposes, the strong law of large numbers tells us that a large enough sample size behaves much like an infinitely large one as far as limiting frequencies are concerned.

To understand this, it's best to think of the derivative of 1/x. If x is big enough, then any change thereafter is not going to have much of an effect, provided the observations up to that point reflect a mostly unbiased sample. If the sample is highly biased then we can't necessarily say this, but for most purposes a "large enough" sample will provide a distribution good enough to represent the true distribution for "infinite" sample sizes.
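The 1/x intuition can be made concrete: one new observation moves a running average of n samples by (new value - mean)/(n + 1), a shift that decays like 1/n. A tiny sketch in Python (the function name is my own):

```python
def mean_shift(current_mean, n, new_value):
    """How far one new observation moves a running average of n samples."""
    return (new_value - current_mean) / (n + 1)

# A single head after n perfectly balanced flips barely moves the average
# once n is large -- the shift decays like 1/n.
for n in (10, 1_000, 1_000_000):
    print(n, mean_shift(0.5, n, 1.0))
```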
P: 3
I know this thread is over 8 years old, but this reply is for the benefit of someone like me who stumbles across it. Plus, I think I can explain it in a simpler manner, especially for those with basic stats knowledge.

 Quote by Verasace Concerning coin flip probabilities..... For example, if out of 10,000 coin flips, I get 9000 heads, then for the next 10,000 flips, the distribution of heads vs. tails would not be 50/50, but would be weighed in favor of more tails in order to get back to the 50/50 mean. I call such a change in normal tendency as "probability pressure" (PP)on the "probability wave" (PW). I realize the term probability wave is already established in reference to light, but it seems to apply here. Any thoughts, suggestions, comments
Ok, say you did the first 10,000 coin flips, and got 9000 heads. This gives you a 90/10 distribution. Now you're thinking you're at the top of a heads wave, and should expect a tail wave to take you back to a 50/50 distribution.

Then you carry on and do another 1,000,000 coin flips, but this time you get exactly 500,000 heads and 500,000 tails. So no increase in tails from a pressure wave. But, even without the tail pressure wave your graph has now moved to a 50.4/49.6 distribution.

What's happened is that you've simply increased the sample size, and that has reduced the effect of the 9000 heads. Hopefully you can now see that the wave patterns tending towards the 50/50 distribution are caused by the increase in sample size, not by an increase in heads or tails through a pressure wave.
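The arithmetic of that dilution is worth seeing explicitly. A quick check in Python (the function name is my own):

```python
def combined_heads_fraction(heads1, total1, heads2, total2):
    """Overall heads fraction after pooling two blocks of flips."""
    return (heads1 + heads2) / (total1 + total2)

# 9000 heads in the first 10,000 flips, then exactly 500,000 heads
# in the next 1,000,000 flips:
f = combined_heads_fraction(9_000, 10_000, 500_000, 1_000_000)
print(f"{f:.4f}")  # prints 0.5040 -- the 90/10 start is diluted, not corrected
```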
