I Rolling a fair die

  1. Feb 13, 2017 #1

    DaveC426913

    User Avatar
    Gold Member

    I don't know how to analyze data proper-like.

    What variance would be considered statistically significant, such that one might conclude a die is not fair?

    I just rolled a six-sided die 192 times.

    Here are my results: 32, 28, 35, 37, 29, 31.
    So, they vary from average by 0, -4, +3, +5, -3, -1.

    Should I just keep rolling?
     
  3. Feb 13, 2017 #2

    MarneMath

    User Avatar
    Education Advisor

    Are the numbers the number of occurrences? For example, is 32 the number of times 1 came up and 31 the number of times 6 came up? One common way to do this is a hypothesis test using the Pearson chi-square test. I can say with reasonably high confidence that the die is likely fair. I got a p-value of .96ish.
     
  4. Feb 13, 2017 #3

    DaveC426913

    User Avatar
    Gold Member

    Yes. No.

    I mean yes, the numbers are the number of occurrences. But no, they are not 1 through 6 (though that is not relevant to the fairness).

    I ... guess I'll be Googling 'Pearson chi-square test'.

    Or, I suppose, I can just Google 'how to determine if a die is fair'...
     
  5. Feb 13, 2017 #4

    jedishrfu

    Staff: Mentor

    I think the Numberphile channel on YouTube had a video on dice fairness, or on creating special dice with unique properties.

     
  6. Feb 13, 2017 #5

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    The Chi-squared goodness-of-fit test that @MarneMath recommends is standard and routinely used for your type of question. There are several calculators on the internet. Here is one: https://graphpad.com/quickcalcs/chisquared2/

    For your data, it gave a two-tailed P value of 0.8662, which means that the probability of getting sample results from a fair die that differ from the expected counts (32 per side) by that amount or more is 0.8662. So your results are very reasonable for a fair die.
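
    If you would rather check it in code than on a website, here is a minimal sketch of the same test in Python, assuming SciPy is available (scipy.stats.chisquare defaults to a uniform expected distribution):
    Code (Python):

    from scipy.stats import chisquare

    observed = [32, 28, 35, 37, 29, 31]   # counts from the 192 rolls
    stat, p = chisquare(observed)         # expected count is 32 per face
    print(stat, p)                        # chi-squared ≈ 1.875, p ≈ 0.866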

    Whether you should keep rolling the die is up to you. You can assume that your die is not perfectly fair since no die is perfect. But you may have to roll a huge number of times to detect it with any statistical certainty.
     
    Last edited: Feb 14, 2017
  7. Feb 14, 2017 #6

    Dale

    Staff: Mentor

    You could also use a Bayesian approach. You would probably model it with a Dirichlet distribution prior and posterior. Then you could either decide how far from fair you are willing to ignore, or you could compare directly to the fair-die hypothesis.
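
    As a rough illustration of that idea (just a sketch, not a prescribed method): with a flat Dirichlet(1, ..., 1) prior, the posterior after the 192 rolls is Dirichlet(1 + counts), and you can sample from it to see how plausible "close enough to fair" is for whatever tolerance you pick.
    Code (Python):

    import numpy as np

    counts = np.array([32, 28, 35, 37, 29, 31])
    rng = np.random.default_rng(0)
    posterior = rng.dirichlet(1 + counts, size=100_000)   # samples of the six face probabilities

    tolerance = 0.03                                       # arbitrary "close enough to fair" band
    near_fair = np.all(np.abs(posterior - 1/6) < tolerance, axis=1)
    print(near_fair.mean())                                # posterior probability all faces are within the band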
     
  8. Feb 14, 2017 #7

    DaveC426913

    User Avatar
    Gold Member

    Too bad it'll be something I can only revel in alone.

    [Attached image: dice.jpg]

    My gamer friends are all
    "Hey that die has a nine."
    "Hey that die has two threes."
    "Hey that die has a zero."


    stupid mathphobes...
     
  9. Mar 6, 2017 #8

    If it's a pitted die, as depicted, it is imperfect with respect to fairness by virtue of non-uniform density.

    [Attached image: dice-jpg.113225.jpg]

    These flush-faced casino dice are closer to fair:

    [Attached image: flush-spots-casino-dice.jpg]
     
  10. Mar 6, 2017 #9

    DaveC426913

    User Avatar
    Gold Member

    Indeed. And worse than normal dice because of the large discrepancy in the number of holes.
    Also, I did not distribute the numbers around the faces; you'll notice that 7 and 9 are adjacent.
    My plan is to fill the pits with epoxy, then retest to see if that fairs them up.


    (Also: Dude, wth! My post is all centre-justified 'cause of you! )
     
  11. Mar 6, 2017 #10

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    I would be surprised if the difference shows up before millions of tosses. I think that the tiny difference would be lost in other random influences.
     
  12. Mar 6, 2017 #11

    DaveC426913

    User Avatar
    Gold Member

    Well, that's why we do empirical observations: to surprise us (or, more accurately, to disabuse us of our preconceptions).
     
    Last edited: Mar 6, 2017
  13. Mar 6, 2017 #12

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    I agree. It would be interesting to me if a reasonable number of trials could detect that they are not fair.
     
  14. Mar 6, 2017 #13

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    Just for fun, I tested what the Chi-squared result would be if all your results were multiplied by 10, as though you had done 10 times as many trials and got the same proportions. Those results would be statistically very significant: if the die were fair, results that unfair or worse would only happen about 2 times in every 1,000.

    With 5 times as many trials giving the same proportions, the odds are 1 in 10 of getting those results or worse from a fair die.
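
    (For anyone who wants to reproduce the scaling, here is a quick sketch in Python, assuming SciPy: feed the same proportions at 10 and 5 times the sample size into the chi-square test.)
    Code (Python):

    from scipy.stats import chisquare

    observed = [32, 28, 35, 37, 29, 31]
    for scale in (10, 5):
        stat, p = chisquare([scale * o for o in observed])
        print(scale, round(stat, 3), round(p, 4))
    # 10x: chi-squared = 18.75, p ≈ 0.0021
    # 5x:  chi-squared = 9.375, p ≈ 0.095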
     
  15. Mar 6, 2017 #14

    DaveC426913

    User Avatar
    Gold Member

    Isn't that kind of counter-productive?

    I mean, if I rolled a perfectly fair die only 6 times in total, then multiplied the results by 333, I'd obviously get vastly biased results.
     
  16. Mar 6, 2017 #15

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    I wasn't interested in a perfectly fair die. Just to get some scope on the problem, I was trying to see how many rolls it would take if the biased results were due to an unfair die and the same proportions continued in a larger experiment. Five times as many rolls would not be enough, and 10 times would be more than enough.
     
  17. Mar 7, 2017 #16

    MarneMath

    User Avatar
    Education Advisor

    I wasn't sure if just multiplying the numbers by 10 would be a good way to calculate how many rolls the OP would need for the results to be significant, so I ran a quick power calculation. With an effect size of w = .1, a significance level of .05, and power of .95, the OP would need 1979 rolls. The practical problem here is obviously what you consider a practically significant difference. I can always find something to be statistically significant if I just increase my sample size; by their nature, most statistical tests become rather sensitive as the sample size increases.
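
    (A rough way to check that number in Python, assuming SciPy, rather than a dedicated power calculator: the noncentrality parameter of the chi-square goodness-of-fit test is n·w², so you can solve for the n that reaches the requested power.)
    Code (Python):

    from scipy.stats import chi2, ncx2
    from scipy.optimize import brentq

    def power(n, w=0.1, alpha=0.05, df=5):
        crit = chi2.ppf(1 - alpha, df)       # rejection threshold under the null
        return ncx2.sf(crit, df, n * w**2)   # P(reject) when the true effect size is w

    n_needed = brentq(lambda n: power(n) - 0.95, 10, 100_000)
    print(round(n_needed))                   # lands close to the ~1979 rolls quoted above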
     
  18. Mar 7, 2017 #17

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    The original data are not uniform, but the difference between them and a uniform, fair die is far from statistically significant. If the lack of uniformity is entirely due to an unfair die, then the same proportions would continue in a larger experiment. So multiplying both the observed results and the expected uniform counts gives an idea of how an unfair die like that would perform in a Chi-squared test. Using an expected bin size of 320 and data 320, 280, 350, 370, 290, 310 gave these results:
    Code (Text):

    P value and statistical significance:
      Chi squared equals 18.750 with 5 degrees of freedom.
      The two-tailed P value equals 0.0021
      By conventional criteria, this difference is considered to be very statistically significant.

    The P value answers this question: If the theory that generated the expected values were correct, what is the probability of observing such a large discrepancy (or larger) between observed and expected values? A small P value is evidence that the data are not sampled from the distribution you expected.
     
    Multiplying by 5 instead of 10 gives the expected bin size of 160 and data 160, 140, 175, 185, 145, 155. The Chi-squared results are:
    Code (Text):

    P value and statistical significance:
      Chi squared equals 9.375 with 5 degrees of freedom.
      The two-tailed P value equals 0.0950
      By conventional criteria, this difference is considered to be not quite statistically significant.

    The P value answers this question: If the theory that generated the expected values were correct, what is the probability of observing such a large discrepancy (or larger) between observed and expected values? A small P value is evidence that the data are not sampled from the distribution you expected.
     
     
  19. Mar 7, 2017 #18

    MarneMath

    User Avatar
    Education Advisor

    Well, I understand what you're saying. I'm just not sure that's the method I would use to calculate the required number of reps. We essentially came up with the same number of required reps, though. At your particular sample size, the effect you're measuring is basically a difference of w = .1. Perhaps in dice land that's a practically significant difference; I'm not sure. Either way, that's my only caution with regard to increasing the sample size until you have a decent p-value.

    *Disclosure: I'm assuming a power of .9
     
  20. Mar 7, 2017 #19

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    You're right. It would only work for an unfair die that gives exactly the biased results of that data sample. It could not be used for an unknown die. It's just an example that I thought was interesting.
    That is the result for 5 times as many trials. It is not significant enough. The result for 10 times as many is shown above. It is very significant (0.002).
     
    Last edited: Mar 7, 2017
  21. Mar 7, 2017 #20

    MarneMath

    User Avatar
    Education Advisor
