# Dice Help

1. Jun 20, 2007

### Guttersnipe

Hello,

I need some help in resolving a dispute. Recently, a friend of mine told me that the probability of rolling a 6 on the second roll of a die, after not rolling a 6 on the first, is not 1/6. He believes that the probability of rolling a 6 increases with each subsequent roll that fails to produce one.

However, this seems really silly to me.

Shouldn't the probability of getting a 6 on the second roll, after not getting a 6 on the first roll, be 6/6 x 1/6? By using the 6/6, I'm just signifying that there is no uncertainty about the first roll, since we've already rolled it and it WASN'T A SIX!

In our argument, I proposed this thought experiment. I roll one die behind a curtain and can't see which number is rolled. I then roll the second die in front of the curtain and can observe what number comes up. Now, the probability of the second die coming up with a 6 is 1/6, no? The outcome of the first die will have no effect on the second die. We argued about that one for an hour. . .
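If the dispute ever needs an empirical tiebreaker, the curtain experiment is easy to simulate. This is just a sketch (not anything from the thread): it estimates the conditional probability of a 6 on the second roll, given the first roll was not a 6.

```python
import random

random.seed(1)

trials = 0
sixes = 0
for _ in range(1_000_000):
    first = random.randint(1, 6)
    if first == 6:
        continue  # keep only the trials where the first roll was NOT a 6
    trials += 1
    second = random.randint(1, 6)
    if second == 6:
        sixes += 1

# With a fair die, the conditional probability is still 1/6 ~ 0.1667
print(sixes / trials)
```

The estimate lands right at 1/6, because with a fair die the rolls are independent.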

Thanks for the help.

2. Jun 20, 2007

### Staff: Mentor

Each roll of the die is independent. You are correct. Sounds like you can win some extra money from your friend if you devise the correct game.... (and prove your point at the same time).

3. Jun 20, 2007

### Hurkyl

Staff Emeritus
You are right that, no matter what you saw on the first roll, the probability of rolling a 6 on the second roll of the die is 1/6.

Your argument with the curtain is flawed, though. Suppose you had an unusual, brand-new die that was rigged so that it never rolls the same number twice in a row (but is otherwise fair).

Then, in your curtain experiment, the odds that your friend will see a "6" is exactly 1/6. However, the probability distribution for the second roll is clearly dependent on what you rolled the first time.
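Hurkyl's rigged die can be simulated to see both facts at once (a hypothetical sketch; the "never repeats" rule is modeled by choosing uniformly among the five other faces): the marginal chance of seeing a 6 on the second roll is still 1/6, yet the conditional distribution clearly depends on the first roll.

```python
import random

random.seed(2)

def rigged_second_roll(first):
    """Uniform over the five faces that differ from the previous roll."""
    faces = [f for f in range(1, 7) if f != first]
    return random.choice(faces)

n = 600_000
sixes_overall = 0
rolls_after_6 = 0
sixes_after_6 = 0
for _ in range(n):
    first = random.randint(1, 6)        # the first roll is fair
    second = rigged_second_roll(first)
    if second == 6:
        sixes_overall += 1
    if first == 6:
        rolls_after_6 += 1
        if second == 6:
            sixes_after_6 += 1

print(sixes_overall / n)   # ~ 1/6: the friend behind the curtain sees nothing odd
print(sixes_after_6)       # 0: a 6 never follows a 6 on this die
```

So "the second roll shows a 6 with probability 1/6" and "the second roll is independent of the first" are genuinely different claims; the curtain experiment only tests the first one.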

4. Jun 20, 2007

### Werg22

The probability of a whole chain of results containing no 6 does decrease. Since each roll has a 5/6 chance of failure, after n tries there is a $$\frac{5^{n}}{6^{n}}$$ chance of having failed every time. And we all know that this fraction goes to 0 as n goes to infinity. All in all, probability is a mathematical concept, and its application to real-life "luck" can be ambiguous. Probability depends on the set of events we choose to define it from. The Monty Hall problem is another example in which everything depends on how we define the set of events we're working with.
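The decay of that chain probability is easy to tabulate. A quick check of $(5/6)^n$ for a few values of n:

```python
# Probability of seeing no 6 at all in n consecutive fair rolls: (5/6)**n
for n in (1, 4, 10, 25, 50):
    print(n, (5/6) ** n)
```

By n = 4 the chance of having seen no 6 is already below one half, which is the unconditional statement people conflate with the (always 1/6) chance on the *next* roll.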

5. Jun 20, 2007

### jambaugh

This fallacy is called "The Gambler's fallacy" or "The Gambler's Ruin" and usually takes the form of assuming that having lost in the past one has a better chance of winning the next round of betting. Hence the "ruin" qualifier.

It is actually a misinterpretation of the law of large numbers. The person mistakenly assumes the future odds will change to adjust the outcome so the average will approach the expectation value as dictated by this law. Instead the earlier outcomes will simply get weighted less and less as the number of trials increases.

Thus, suppose you've lost $100 so far on a rolling-sixes game with 6:1 payoff. The law of large numbers doesn't say that if you keep playing your loss will approach zero, but rather that your loss *per turn* will approach zero, since $100 / (number of turns) -> 0. It still predicts you will end up, on average, $100 in the hole, given that's how far down you are at this point in the game.
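This distinction can be checked with a simulation sketch (hypothetical numbers: $1 stakes, a six paying 5:1 net so the game is fair, starting $100 down). The average final bankroll stays near -$100 no matter how long you keep playing, while the loss divided by the number of turns shrinks.

```python
import random

random.seed(3)

# Fair "roll a six" game: stake $1 per turn; a six nets +$5, anything else -$1.
# Expected value per turn: (1/6)*5 + (5/6)*(-1) = 0.
def play(turns, start=-100):
    bank = start
    for _ in range(turns):
        bank += 5 if random.randint(1, 6) == 6 else -1
    return bank

sessions = 20_000
turns = 500
avg = sum(play(turns) for _ in range(sessions)) / sessions
print(avg)          # stays near -100: past losses are never "paid back"
print(avg / turns)  # per-turn figure, already small and shrinking with more turns
```

Playing longer doesn't pull the expected bankroll back toward zero; it only dilutes the old loss over more turns.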

You should make him put his money where his mouth is (and make some cash as well).

Regards,
James Baugh

6. Jun 21, 2007

### Guttersnipe

Thanks

Thanks. That shut him up.