
How many tosses does it take for a mathematician to figure that a coin is biased?

  1. Oct 1, 2012 #1

    I am struggling with the following question:
    How many tosses does it take for a mathematician to figure that a coin is biased?

    I think that it is impossible. Here is my logic. Assume that there is a biased coin which always comes up heads, and that the mathematician is testing the coin. She has tossed it 1000 times consecutively and has gotten heads every time. She still can't conclude that the coin is biased, because this outcome is still possible (though very improbable) with a fair coin.

    Is my logic correct?
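To put a number on "very improbable": a minimal Python check (my own illustration) of the chance that a fair coin gives 1000 straight heads.

```python
from fractions import Fraction

# Exact probability that a fair coin gives 1000 consecutive heads.
p_all_heads = Fraction(1, 2) ** 1000
print(float(p_all_heads))  # about 9.3e-302: astronomically small, but not zero
```

The probability is nonzero, which is exactly why the outcome alone cannot constitute proof.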

  3. Oct 1, 2012 #2
    Depends. What degree of certainty do you want? What probability of a false positive would you be willing to accept? If you want absolute metaphysical certitude, then probability is not the branch of mathematics for you.
  4. Oct 1, 2012 #3

    Stephen Tashi

    User Avatar
    Science Advisor

    She could form an opinion, but she couldn't offer any mathematical proof that her opinion was correct.

    A statistician could do a "hypothesis test" and "reject" the hypothesis that the coin was fair, but this is simply a procedure, not a mathematical proof.

    If she knew or assumed more information about the coin, she might be able to make a statement about the probability that the coin was fair.
  5. Oct 1, 2012 #4
    Can you give an example of this?
  6. Oct 1, 2012 #5

    Stephen Tashi

    User Avatar
    Science Advisor

    For example, assume the coin is drawn from a population of coins and that the distribution of the parameter p that gives the probability of a coin landing heads is uniformly distributed over the interval [0,1].

    Or, as another example, assume there were two coins, one with a probability of 0.99 of landing heads and another with a probability of 0.5 of landing heads. Assume the coin used in the experiment was picked at random from these two coins.

    Those types of assumptions permit drawing conclusions about whether the coin used in the experiment was a fair coin. It is the nature of probability theory that if you wish to say something about the probability of the coin being fair given the experimental data, you must set up a scenario that specifies something probabilistic about the coin being fair or not.

    This general approach is called a Bayesian approach and the assumption (or knowledge) about how the coin is probabilistically selected is called a "prior distribution". People criticize this approach for being "subjective", but nothing can be said about the probability that the coin is fair given the experimental data unless "prior" information is given. Without prior information, asking the probability that the coin is fair is like asking to find the sides and angles of a triangle given one side and one angle. There simply isn't enough given information to do it.
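As a sketch of the two-coin example above (the function name and toss counts are my own illustration, not from the thread), Bayes' rule gives the posterior probability that the coin picked was the fair one:

```python
# Two-coin prior: one coin with p = 0.99 of heads, one fair coin,
# each picked with prior probability 0.5.
def posterior_fair(heads, tosses, p_biased=0.99, prior_fair=0.5):
    # Likelihood of the observed heads count under each hypothesis;
    # the binomial coefficient is common to both, so it cancels.
    like_fair = 0.5 ** tosses
    like_biased = p_biased ** heads * (1 - p_biased) ** (tosses - heads)
    num = prior_fair * like_fair
    return num / (num + (1 - prior_fair) * like_biased)

print(posterior_fair(10, 10))  # ten straight heads: the fair hypothesis collapses
print(posterior_fair(5, 10))   # five of ten heads: almost surely the fair coin
```

Note that the answer depends entirely on the assumed prior, which is the point being made about the Bayesian approach.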
    Last edited: Oct 1, 2012
  7. Oct 1, 2012 #6
    If by "figure" you mean "prove with absolute certainty" then you are right. You can never prove anything empirically with absolute certainty.
  8. Oct 1, 2012 #7
    It's quite easy to see this.

    Suppose you had a coin that came up heads every time. After 32 tosses, you could declare the coin biased with odds of being wrong at around 4 billion to 1 (since 2^32 ≈ 4.3 billion).

    The more difficult case is when the coin is only slightly biased. For any specified degree of bias there is a certain number of tosses that will reveal the bias with a specified probability of being correct.

    You want this formula. I don't know it but I know that a statistician can derive it for you.
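A rough version of that formula, under a normal approximation and conventional significance/power choices (an illustrative sketch, not a definitive derivation):

```python
from statistics import NormalDist

def tosses_needed(p_biased, alpha=0.05, power=0.8):
    """Approximate tosses needed by a one-sided test of H0: p = 0.5 against a
    true heads-probability p_biased > 0.5 (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha)     # critical value for significance alpha
    z_b = NormalDist().inv_cdf(power)         # quantile for the desired power
    sd0 = 0.5                                 # sd factor of the heads fraction under H0
    sd1 = (p_biased * (1 - p_biased)) ** 0.5  # sd factor under the biased alternative
    n = ((z_a * sd0 + z_b * sd1) / (p_biased - 0.5)) ** 2
    return int(n) + 1

print(tosses_needed(0.6))  # mild bias: on the order of 150 tosses
print(tosses_needed(0.9))  # strong bias: under a dozen
```

The closer the bias is to 0.5, the faster the required number of tosses grows, since the denominator (p_biased - 0.5) shrinks.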
  9. Oct 1, 2012 #8

    Stephen Tashi

    User Avatar
    Science Advisor

    That's not correct (with a probability of 1.0). I think you're making a common misinterpretation of a "confidence interval". However, if we assume some prior distribution on the fairness of the coin, we can probably get that conclusion by using a Bayesian "credible interval".
  10. Oct 1, 2012 #9
    I agree; 1 doesn't equal 1 - 1/4,294,967,296. The latter does round to 1.0, however, at the two significant figures you gave.

    I'm not referring to confidence intervals. But if I were, what would be the misunderstanding?

    Since you seem to know some statistics, why don't you derive the answer?
  11. Oct 2, 2012 #10

    Stephen Tashi

    User Avatar
    Science Advisor

    Were you using one of the prior distributions that I proposed? If so, I apologize. I didn't realize that.

    Without assuming any prior distribution on the probability of success, it is possible to calculate a result of this form:

    Let p(h) be the unknown probability that the coin lands heads.
    Let N be the number of tosses
    Let f be the observed fraction of tosses that are heads
    Let epsilon > 0 be a given number.

    From that and the assumption of independent tosses, it is possible to calculate:

    P =the probability that f is within plus or minus epsilon of p(h)

    A common misunderstanding of this result is to take the fraction of heads f0 observed in a particular group of N tosses (such as 32/32 = 1) and assert that P is the probability that f0 is within plus or minus epsilon of p(h). The value P refers to a statistical property of the distribution of the random variable f, not to a property of one particular value it may take.
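A small simulation can illustrate the distinction: P is the long-run frequency with which the random fraction f falls near p(h) over repeated experiments. The values of p(h), N, and epsilon below are arbitrary choices for illustration.

```python
import random

# P is a property of the distribution of f across repeated experiments,
# not a statement about any single observed fraction f0.
random.seed(0)
p_h, N, eps, trials = 0.7, 100, 0.05, 10_000

hits = 0
for _ in range(trials):
    f = sum(random.random() < p_h for _ in range(N)) / N  # one experiment's fraction
    hits += abs(f - p_h) <= eps
print(hits / trials)  # Monte Carlo estimate of P for this N and epsilon
```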
  12. Oct 2, 2012 #11

    Stephen Tashi

    User Avatar
    Science Advisor

    ...and another common mistake in statistics is to misinterpret a "p-value" computed on the basis of assuming a "null hypothesis".

    If we assume the following:
    The coin is fair.
    Let the total number of tosses be N.
    Let N0 be a given number of heads.

    On the assumption of independent tosses, it is possible to compute a result of the form:
    P = the probability that the observed number of heads is equal to or greater than N0

    A common misinterpretation of P is that (1.0 - P) is the probability that the coin is not fair by being biased toward heads. However, P is computed on the basis of assuming (with certainty) that the coin is fair, so no valid calculation based on that assumption can produce information about the coin not being fair.
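For concreteness, a sketch (function name is my own) of computing P exactly from the binomial distribution under the fair-coin assumption:

```python
from math import comb

def p_value_heads(heads, tosses):
    """One-sided p-value: probability of at least `heads` heads in `tosses`
    tosses, computed assuming (with certainty) that the coin is fair."""
    return sum(comb(tosses, k) for k in range(heads, tosses + 1)) / 2 ** tosses

print(p_value_heads(32, 32))  # 32 straight heads under the fair-coin null: 2**-32
```

A tiny value here says only that the data would be surprising *if* the coin were fair; turning it into a probability that the coin is unfair requires a prior, as discussed above.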
  13. Oct 2, 2012 #12

    I'm not sure that's completely accurate in this case. The null is that the coin is fair, so the p-value would be the probability of falsely rejecting that the coin is fair. I would consider that information about the coin not being fair.
  14. Oct 2, 2012 #13

    Stephen Tashi

    User Avatar
    Science Advisor

    I agree.

    It's information about the performance of the test when it tests a fair coin. It isn't information about a particular coin.

    You can't calculate a probability (different than 1.0) that the coin is fair by beginning with the assumption that the coin is fair. You need a prior distribution for the fairness of the coin in order to do that.
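A sketch of what such a prior buys you, using the uniform prior mentioned earlier in the thread: with a uniform (Beta(1, 1)) prior, the posterior for p(h) is Beta(heads+1, tails+1), and one can compute the posterior probability that p(h) is close to 0.5 (the interval [0.45, 0.55] is an arbitrary choice of "close").

```python
from math import comb

def post_prob_near_fair(heads, tails, lo=0.45, hi=0.55, steps=10_000):
    """Posterior probability that p(h) lies in [lo, hi], starting from the
    uniform prior on [0, 1], i.e. a Beta(1, 1) prior."""
    # Posterior is Beta(heads + 1, tails + 1); integrate its pdf numerically
    # by the midpoint rule. The normalizer is 1 / B(heads + 1, tails + 1).
    norm = (heads + tails + 1) * comb(heads + tails, heads)
    total = 0.0
    for i in range(steps):
        p = lo + (hi - lo) * (i + 0.5) / steps
        total += norm * p ** heads * (1 - p) ** tails
    return total * (hi - lo) / steps

print(post_prob_near_fair(1000, 0))   # 1000 straight heads: essentially zero
print(post_prob_near_fair(510, 490))  # near-even split: close to certainty
```

Only with the prior in place does "the probability that the coin is (nearly) fair, given the data" become a well-posed question.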
  15. Oct 5, 2012 #14
    You can't determine whether the coin is biased simply by examining the heads-to-tails ratio over M tosses. The ratio is determined by the experiment, but the validity of the experiment is not settled by how far the outcome falls from expectation, i.e. the margin of error between conduct and theory. To determine whether the coin really is biased, you need to think outside the box and run experiments that don't depend on the heads-to-tails ratio at all, but instead on physical properties such as the distribution of the coin's mass. If the coin is biased, you can usually tell by some means other than the ratio. For example, if one face were coated with a thick layer of metallic paint while the other got only a light dusting, the coin would obviously tend to favor the heavier side. A mathematician won't reach such a conclusion unless (s)he disposes of heads/tails ratios altogether; they won't help and are a distraction.
  16. Oct 5, 2012 #15


    User Avatar
    Science Advisor

    One thing that should be pointed out is that measuring one attribute of a process does not mean you measure the entire process and a lot of people make this mistake.

    The other point that has been made concerns Type I and Type II errors: as long as your test has a positive size and an appropriate power, then even if you disregard the above, you still have these errors to deal with statistically and probabilistically.
  17. Oct 6, 2012 #16


    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    A useful thought experiment is to imagine applying your fairness test to quadrillions of coins. You'd be sure to reject some wrongly. Besides, all real coins are unfair; it's just a question of how unfair.
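A back-of-the-envelope version of that thought experiment (the quadrillion figure is from the post above; the significance threshold is the 1-in-2^32 level from the 32-toss example earlier in the thread):

```python
# Test a quadrillion fair coins at a very strict significance level
# and you still expect many false rejections.
alpha = 1 / 4_294_967_296         # the 1-in-2^32 threshold (32 straight heads)
n_coins = 10 ** 15                # "quadrillions of coins"
expected_false_rejections = n_coins * alpha
print(expected_false_rejections)  # about 233,000 fair coins wrongly rejected
```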