Is the Gambler's Fallacy Really a Fallacy?

  • Thread starter Volkl
In summary, the conversation discusses the concept of randomness and probability in gambling. The posters debate whether there is a tendency towards randomness and whether previous outcomes have any impact on future outcomes. They agree that shorter strings of like outcomes are more prevalent than longer strings, but disagree on the impact of previous outcomes on future ones. The conversation concludes with a suggestion to conduct practical experiments to settle the debate.
  • #1
Volkl
If I were a serious gambler, there is a small chance that I could physically place 50,000,000 bets at a casino. Pretending that the game we are playing offers fair odds, the chance of one particular outcome coming up 1000 times in a row within the set must be much less than the chance of the same outcome coming up 100 times in a row. If this is true, the probability of 20 particular outcomes in a row must be less than the probability of 10 of the same outcomes coming up. Doesn't this prove that there is a tendency towards randomness, meaning that there is a tendency to have fewer of the same particular value coming up in a row? To me, this logic proves that the gambler's fallacy is in itself a fallacy. Or do you believe that all 50,000,000 could be the same value for anyone living on Earth? I had a roulette wheel with no greens in mind.
 
  • #2
No, let B=black and R=red. Then the probabilities for

B R B R B R B R B B B R B B B R R R R B R B R B R B B B R B R

are exactly the same as

B B B B B B B B B B B B B B B B B B B B B B B B B B B B B B B

These two events have exactly the same probability because each spin of the roulette wheel is independent.
 
  • #3
So again, you're one of those who believes that a human could in reality experience 50,000,000 black outcomes?
 
  • #4
Volkl said:
So again, you're one of those who believes that a human could in reality experience 50,000,000 black outcomes?

The chance would be very small, but the chance is the same as any other outcome.

This is very easily checked with a computer. Just let a computer spew out random outcomes and see if there is any outcome that occurs more than another. Probability doesn't lie, my friend.
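A minimal sketch of such a check, with the spin count and variable names chosen purely for illustration:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Simulate a green-less roulette wheel: each spin is black or red
# with equal probability, independent of every other spin.
N = 1_000_000
spins = [random.choice("BR") for _ in range(N)]

black_fraction = spins.count("B") / N
print(f"fraction black: {black_fraction:.4f}")  # very close to 0.5
```

Neither colour ever pulls ahead of the other in any systematic way, no matter how many spins you add.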
 
  • #5
I agree that the chance is small; so much so that I'm betting on it. Please explain why you believe the chance is small. Do you believe the chance of ten in a row being black is much larger?
 
  • #6
Whatever betting strategy you have, run it on a computer first. You'll see quickly if there are any fallacies.
 
  • #7
The logic here does not require a computer. The point is that there is a higher probability that smaller sets of like numbers occur than larger sets of like numbers, so there is a tendency for the next value to oppose the previous string of like values.
 
  • #8
Volkl said:
The logic here does not require a computer. The point is that there is a higher probability that smaller sets of like numbers occur than larger sets of like numbers, so there is a tendency for the next value to oppose the previous string of like values.

I know that seems believable to you, but the logic is inherently flawed. A value does not depend on previous values.

There is only one way to settle this and it is by practical experimentation. You will see in practical experiments that what you are saying is false.
 
  • #9
I guarantee that 1000 black values in a row would be found less often than 100 black values in a row. Another way of saying it: there would be more sets of 100 blacks in a row than sets of 1000 blacks within the 50,000,000-spin sample, across all of the samples/trials any human could simulate in a lifetime.
 
  • #10
Volkl said:
I guarantee that 1000 black values in a row would be found less often than 100 black values in a row.

Yes, of course. But this does not mean that if you have 100 black values in a row, the chance of a red is somehow higher than the chance of a black...
 
  • #11
Micromass is right; the probabilities would only vary if outcomes were removed from the system.
 
  • #12
Since there are always more shorter sets of like numbers in a row than larger sets of like numbers in a row, I believe it does. If not, you would not have said "of course" above.
 
  • #13
This system is simply 50,000,000 spins of a wheel though, Kevin.
 
  • #14
This conversation is pointless. We can never convince each other in any possible way. So I suggest you go and try some practical experiments. Do it on a computer or in a casino. You'll see that what you say is false.
 
  • #15
How could the probability of 1000 blacks in a row be less than that of 100 in a row if there were no tendency towards randomness?
 
  • #16
Volkl said:
How could the probability of 1000 blacks in a row be less than that of 100 in a row if there were no tendency towards randomness?

I don't see what you're getting at here. The system is random.
 
  • #17
I'm trying to get agreement on the "tendency" towards randomness.
 
  • #18
Each time you have a 50% chance of guessing right. There is ZERO impact from the choice before it.
 
  • #19
Volkl said:
This system is simply 50,000,000 spins of a wheel though, Kevin.

Then there is no variance in probability. Each time you spin the wheel the probability of getting a number in your set [itex]\mathcal{S}[/itex] is one over the cardinality of your set: [itex]\frac{1}{\left|\mathcal{S}\right|}[/itex].
 
  • #20
I'm trying to get agreement that there is a "tendency" towards randomness when comparing strings of like outcomes to other strings of like outcomes. I.e., shorter strings of like outcomes are more prevalent than longer strings of like outcomes.
 
  • #21
Volkl said:
I'm trying to get agreement that there is a "tendency" towards randomness when comparing strings of like outcomes to other strings of like outcomes. I.e., shorter strings of like outcomes are more prevalent than longer strings of like outcomes.

Nobody here is debating that 1000 black outcomes in a row are less prevalent than 100 black outcomes in a row. That is common sense.

However, what we're saying is that if the first 100 are black, then you have equal probability that the next one is red or black.

That is: the chance that you have 100 black and then 1 red is equal to the chance that you have 101 black.
 
  • #22
Micromass has already agreed to this concept that shorter strings of like outcomes are more prevalent than longer strings of like outcomes.
 
  • #23
Volkl said:
Micromass has already agreed to this concept that shorter strings of like outcomes are more prevalent than longer strings of like outcomes.

Not only that: shorter strings of a certain outcome (not necessarily all alike) are more prevalent than longer strings of a certain outcome.

That is: you will see more of the string

BRBRBRBBB

than of the string

BRBRBRBBBRRBRBBRBBRRRBBRB

So whether the outcomes are all black is irrelevant.
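This can be checked empirically with the two patterns quoted above; the sequence length here is an illustrative assumption:

```python
import random

random.seed(1)
# One long random sequence of black/red outcomes.
seq = "".join(random.choice("BR") for _ in range(1_000_000))

short = "BRBRBRBBB"                  # 9 outcomes: expect about N / 2**9 hits
long_ = "BRBRBRBBBRRBRBBRBBRRRBBRB"  # 25 outcomes: expect about N / 2**25 hits

def count_overlapping(s, pat):
    # Count every position where the pattern starts, overlaps included.
    return sum(s.startswith(pat, i) for i in range(len(s) - len(pat) + 1))

n_short = count_overlapping(seq, short)
n_long = count_overlapping(seq, long_)
print(n_short, n_long)  # the short pattern appears far more often
```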
 
  • #24
If that is true, then counting all the way up to a thousand following your same logic would mean that the 1000 has the same probability as the 100. Something is not right here, and it has to do with the tendency for the string itself to be random as opposed to like-valued, i.e. all blacks.
 
  • #25
Volkl said:
If that is true, then counting all the way up to a thousand following your same logic would mean that the 1000 has the same probability as the 100. Something is not right here, and it has to do with the tendency for the string itself to be random as opposed to like-valued, i.e. all blacks.

I do not understand what you're saying here.

Anyway, I'm going to stop discussing this. It's pointless. Please try your theory out on a computer. You'll see that it's incorrect. And if it turns out you're right, do send the findings to me!
 
  • #26
I think the OP's fallacy is his misinterpretation of the Law of Large Numbers, which says the experimental probability will match the theoretical probability, but only as n → ∞.

Saying the next outcome will be red after a long string of 100 blacks is to make the approximation that 100 is almost infinity.

One does so at one's own peril.
 
  • #27
The outcomes, or the string of like values, matter only as a way of showing that randomness affects strings of like values in a way that ultimately makes the gambler's fallacy false, because we agree on this concept of the tendency towards different values as compared to like values (strings of the same colour, like all black for instance).
 
  • #28
Paulf, your comments came across as oxymoronic to me, because the comment about the law of large numbers is exactly why I set a limit of 50,000,000; however, you go on to believe that when we are not talking about infinity, the expected outcomes are somehow similar to the outcomes as if we were talking about infinity.
 
  • #29
Ignea_unda said:
Each time you have a 50% chance of guessing right. There is ZERO impact from the choice before it.

Why are smaller sets of blacks more prevalent than larger sets of blacks then?
 
  • #30
If the strings of blacks have different probabilities, yet the individual spin outcomes have the same probability, what accounts for the strings of blacks having different probabilities?
 
  • #31
A string of a certain length n, of ANY composition, will have the same probability, (1/|S|)^n, correct?

Then any of those strings can be extended, and the probability of a string of length n+1, with either a black OR a red at the end, would be (1/|S|)^(n+1).
 
  • #32
Hi Volkl. Let's define "Volkl's Ratio" as the (coin-tossing) ratio of [the marginal probability of a head immediately following another head] divided by [the overall probability of heads].

I've never seen this ratio published so you could make quite a name for yourself by experimentally measuring and documenting it. So get yourself a coin and get tossing. :tongue:
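A sketch of that measurement with a simulated coin rather than a real one; the toss count is an arbitrary assumption:

```python
import random

random.seed(2)
tosses = [random.choice("HT") for _ in range(1_000_000)]

p_head = tosses.count("H") / len(tosses)

# Look only at tosses that immediately follow a head.
after_head = [b for a, b in zip(tosses, tosses[1:]) if a == "H"]
p_head_after_head = after_head.count("H") / len(after_head)

ratio = p_head_after_head / p_head
print(f"Volkl's Ratio: {ratio:.3f}")  # about 1.0 for a fair, memoryless coin
```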
 
  • #33
Volkl said:
you go on to believe that when we are not talking about infinity, the expected outcomes are somehow similar to the outcomes as if we were talking about infinity.
I think it is YOU who is doing this, not me. Otherwise, where do you get this idea that the number of R's must equal the number of B's?
That is why you expect the next event to be R after a string of B's.
Unless you are postulating a new theorem of probability, your expectation comes from the Law of Large Numbers. But you violate its foundation by limiting the events to 50,000,000. Thus you cannot invoke it, and the next roll is determined by probabilities from physics, not those from statistics.

Also, you frequently talk about randomness. True, the discrete variable of the outcomes is random. But this only ensures/predicts that P(#B's) has an approximately Gaussian probability density. It says nothing about a single, given event, which is the focus of your assertions here.

Your idea would work for a hypergeometric distribution. That is the case, say, for drawing red and black colored balls of equal number from a bag but NOT replacing them after each draw. Then, if you saw 5 B's, you could conclude that P(R) on the next draw is > P(B), because the sample space is changing and you know how.

Look at this another way. Your thesis implies that if you see a string of 30 blacks, the probability that the next one will be a red is very high, at least higher than P(B). That is the same as saying that P(31 B's) is much different from P(30 B's). But we know this is not true, because the distribution of run lengths is smooth. You cannot know that you have just seen the end of the string. It could be a string of 60 that is coming, or a string of 59, or 58, or 57, etc. Add all those possibilities together and you can see P(B) is higher than it appears.
 
  • #34
Volkl said:
The logic here does not require a computer. The point is that there is a higher probability that smaller sets of like numbers occur than larger sets of like numbers, so there is a tendency for the next value to oppose the previous string of like values.

Bayes' theorem (conditional probability) solves this one.

The chances of red (R) and black (B) are the same, so

[itex]P(R) = P(B) = 0.5[/itex]

The same goes for any string of reds or blacks, with probability

[itex]P(nR) = P(nB) = \frac{1}{2^n}[/itex]

You want the probability of the (n+1)th result being black given that the previous n results were all black, which is

[itex]P(X_{n+1}=B \mid X_1 = \cdots = X_n = B) = \frac{P(X_1 = \cdots = X_n = B \mid X_{n+1}=B)\,P(B)}{P(X_1 = \cdots = X_n = B \mid X_{n+1}=B)\,P(B) + P(X_1 = \cdots = X_n = B \mid X_{n+1}=R)\,P(R)}[/itex]

or in words, "the probability that the next one is black, given that the previous n were black, is equal to

the probability that the previous were black given that the next is black,

multiplied by the probability of any being black

divided by

the probability that the previous were all black given the next is black, times the probability of black, plus the probability that the previous were all black given the next is red, times the probability of red."

This simplifies, since as per the first equation, the probability on any given try is the same for red and black. Also, the probability of n reds is the same as the probability for n blacks.

What you end up with is

[itex]P(X_{n+1}=B \mid X_1 = \cdots = X_n = B) = 0.5[/itex]

as everyone else in this thread has already told you.

Where you're getting confused is in the text I quoted above: it's true that you're less likely to find a string of (n+1) blacks than a string of (n) blacks within a larger sequence, but just because you've found the (n), it doesn't mean that *this time* the next one will complete the longer string. Equally, it doesn't mean that this time it will stay the shorter one.

Formally, your confusion is ascribing some forcing on the part of regression toward the mean (http://en.wikipedia.org/wiki/Regression_toward_the_mean). Regression is a result of the unlikeliness of a long string, not the cause.
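The same conclusion can be checked numerically. This sketch, in which the run length n and sample size are illustrative assumptions, estimates the chance of black immediately after a run of n blacks:

```python
import random

random.seed(3)
seq = [random.choice("BR") for _ in range(2_000_000)]

n = 10           # condition on at least n blacks in a row
after_run = []   # outcomes observed right after such a run
run = 0
for c in seq:
    if run >= n:
        after_run.append(c)
    run = run + 1 if c == "B" else 0

p = after_run.count("B") / len(after_run)
print(f"P(black | previous {n} were black) is about {p:.3f}")
```

Up to sampling noise, the estimate sits at 0.5, matching the Bayes calculation above.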
 
  • #35
Volkl, saying that the next 10 will tend away from being all black is not the same as saying the next 1 will tend away from black. The next 10 will tend away from all black, yes: the odds of all 10 being black are 1 in 1024 in general. But the next 1 won't tend away; it'll be black in half of cases, generally.
 
1. What is the Gambler's Fallacy?

The Gambler's Fallacy is the belief that previous outcomes in a game of chance can influence future outcomes, even though each event is statistically independent. For example, a gambler may believe that after a series of losses, they are "due" for a win.

2. Why is it considered a fallacy?

The Gambler's Fallacy is considered a fallacy because it goes against the laws of probability and statistics. Each event in a game of chance has the same probability of occurring, regardless of previous outcomes. The belief that previous outcomes can influence future outcomes is not supported by evidence.

3. What are some examples of the Gambler's Fallacy?

One example of the Gambler's Fallacy is in a game of roulette, where a player bets on black after a series of red outcomes, believing that black is "due" to come up. Another example is in a coin toss, where a person believes that after a series of heads, tails is more likely to occur.

4. Can the Gambler's Fallacy be applied to other areas besides gambling?

Yes, the Gambler's Fallacy can be applied to other areas besides gambling. It can also be seen in decision making, where people may believe that past failures will lead to future success, or in sports, where fans may believe that their team is "due" for a win after a series of losses.

5. How can the Gambler's Fallacy be avoided?

The Gambler's Fallacy can be avoided by understanding and accepting the laws of probability and statistics. Each event in a game of chance is independent and does not affect future outcomes. It is important to make decisions based on evidence and not on the belief that previous outcomes can influence future outcomes.
