Conceptual problem with fair games

  • Context: Graduate
  • Thread starter: techmologist
  • Tags: Conceptual, Games

Discussion Overview

The discussion revolves around the concept of fair games in probability, particularly focusing on scenarios involving infinite states and the implications of unlimited plays. Participants explore the expected value of a game where two players bet on coin tosses, questioning the fairness of the game and the calculations of expected values under different conditions.

Discussion Character

  • Exploratory
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants assert that Peter will "almost surely" end up 1 penny ahead in the coin toss game, while others emphasize the need to consider the expected value, which they claim is 1 cent, suggesting the game is unfair.
  • There is a discussion about the calculation of expected value, with one participant questioning how the potential for losing an unlimited amount of money affects the expected value, suggesting it could be zero or even minus infinity.
  • A participant introduces a different betting strategy where Peter doubles his bet after each loss, raising questions about the expected gain in this scenario and how it relates to the original game.
  • Another participant explains that in measure theory, an event with probability zero does not affect the expected value, which can lead to different results when considering limits versus expectations.
  • One participant expresses gratitude for the clarification regarding measure theory and its implications for betting systems, while also seeking to understand the necessity of finite bankrolls and maximum bet limits in proving that no betting system can turn an unfavorable game into a favorable one.
  • There is a mention of Ed Thorp's book on probability, with participants sharing links and expressing interest in its content.
  • A participant raises concerns about the independence of bets in the context of money management systems, questioning the assumptions made in the proof of the failure of classical gambling systems.

Areas of Agreement / Disagreement

Participants generally disagree on the expected value of the game and its implications for fairness. While some assert that the expected value is 1 cent and thus the game is unfair, others express uncertainty about how different conditions affect this value. The discussion remains unresolved regarding the implications of infinite bankrolls and the independence of bets.

Contextual Notes

Participants note the complexity of calculating expected values in scenarios involving infinite states and the potential discrepancies between the expectation of limits and the limit of expectations. There is also mention of the need for finite bankrolls and maximum bet limits in practical applications of betting systems.

techmologist
On p. 428 of Grinstead and Snell's Introduction to Probability, they say that fair* games with an infinite number of states (Markov states, if the game is a Markov process) need not remain fair if an unlimited number of plays is allowed. The example they give is two people, Peter and Paul, tossing a fair coin, betting $.01 per toss, until Peter is ahead one cent. If they both have unlimited funds, then Peter will surely end up 1 penny ahead.

I've got two questions. First, shouldn't we say that Peter will almost surely end up 1 penny ahead, since there is the possibility (probability 0) that he loses the first toss and never does better than get back to even? Second, does this game even have an expected value? If it does, and if it is zero, then why wouldn't the game still be fair?

I'm guessing the game does not have an expected value (of Peter's gain), since calculating it using different limiting processes gives different results. For example, if you first assume that Peter can only lose some finite amount B, then for a fair coin the expected gain is zero, no matter how large B is. If, however, you allow the coin to be biased and let B tend to infinity, then Peter's expected gain tends toward +1 or minus infinity, depending on whether the coin is biased in his favor or against him. Finally, if you let B increase to infinity and the probability of "Heads" approach 1/2 simultaneously in some way, then the answer depends on whether you approach P(Heads) = 1/2 from the left or the right.
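The finite-B, fair-coin case in the paragraph above can be checked directly with the standard gambler's-ruin result: starting even, Peter reaches +1 before -B with probability B/(B+1), so his expected gain is exactly zero for every finite B. A minimal sketch in Python (the function name is mine):

```python
from fractions import Fraction

def expected_gain(B):
    """Peter bets 1 cent per toss of a fair coin until he is 1 cent ahead
    or has lost B cents.  By the gambler's-ruin formula for a fair game,
    P(reach +1 before -B) = B / (B + 1)."""
    p_win = Fraction(B, B + 1)
    return p_win * 1 + (1 - p_win) * (-B)

for B in (1, 10, 1000):
    print(B, expected_gain(B))   # expected gain is 0 for every finite B
```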

* A game is said to be fair if your expected fortune after one round of the game is equal to your starting fortune.
 
"Almost surely", yes.
This game does have expected value, namely 1 cent. So the game is unfair.
Your third paragraph seems irrelevant.
 
g_edgar said:
"Almost surely", yes.
This game does have expected value, namely 1 cent. So the game is unfair.
Your third paragraph seems irrelevant.


Okay. How do you calculate the expected value? E = 1*1 cent + 0*everything else = 1 cent? Since Peter is exposing himself to the possibility of losing an unlimited amount of money, even if that happens with probability zero, how do you know that doesn't make the expected value come out to zero or even minus infinity?


Edit:

To make it clearer why I'm having a hard time being sure that the expected value is 1 in the above game, consider a different game. Peter and Paul are still flipping a fair coin, but this time Peter doubles the bet after every loss, and quits when he finally wins one, which puts him ahead by 1 cent. This happens with probability 1. So is Peter's expected gain 1 cent?

If Peter starts with B = 2^N - 1 pennies, then he can lose at most N times in a row before he runs out of money. The probability that he ends up one penny ahead is:

P = (1/2)[1 + (1/2) + (1/2)^2 + ... + (1/2)^{N-1}] = 1 - (1/2)^N

Whereas the probability that he ends up 2^N - 1 pennies behind is 1 - P = (1/2)^N

So the expected value is

E = [1 - (1/2)^N] - (1/2)^N * (2^N - 1) = 0 for any value of N

What is a game in which Peter has infinite starting bankroll if it is not the limit of games in which he has an increasingly large but finite starting bankroll?
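For what it's worth, the computation above is easy to verify mechanically; a sketch of the same bookkeeping (the function name is mine):

```python
from fractions import Fraction

def martingale_expectation(N):
    """Expected gain when Peter doubles after each loss with bankroll
    2^N - 1: he wins 1 cent with probability 1 - (1/2)^N, and loses
    his whole bankroll of 2^N - 1 cents with probability (1/2)^N."""
    p_ruin = Fraction(1, 2**N)
    return (1 - p_ruin) * 1 + p_ruin * -(2**N - 1)

print([martingale_expectation(N) for N in range(1, 8)])   # all zero
```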
 
That's how probability zero works. In measure theory (which is the model used for probability), the contribution to the expected value from an event of probability zero is zero for any random variable, even one that takes the value minus infinity on that event.

Regarding your edit: yes, the expectation of the limit can differ from the limit of the expectations. This can happen in measure theory, and it does happen in this example.
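A concrete illustration of that last point, under the original game's assumptions (the function name is mine): if the fair-coin game is truncated after n tosses, each truncated game has expectation exactly 0, while the probability of having stopped at +1 tends to 1, so the limiting gain is 1 almost surely.

```python
from fractions import Fraction

def truncated_expectation(n_tosses):
    """Exact distribution of Peter's net gain in the fair-coin game,
    stopping the first time he is 1 cent ahead, truncated after
    n_tosses tosses.  Returns (expected gain, P(stopped at +1))."""
    dist = {0: Fraction(1)}               # net gain -> probability
    half = Fraction(1, 2)
    for _ in range(n_tosses):
        new = {}
        for gain, p in dist.items():
            if gain == 1:                 # already stopped at +1
                new[1] = new.get(1, 0) + p
            else:
                new[gain + 1] = new.get(gain + 1, 0) + p * half
                new[gain - 1] = new.get(gain - 1, 0) + p * half
        dist = new
    expectation = sum(g * p for g, p in dist.items())
    return expectation, dist.get(1, Fraction(0))

for n in (1, 5, 25):
    e, p_ahead = truncated_expectation(n)
    print(n, e, float(p_ahead))   # expectation is 0 for every n; P(ahead) -> 1
```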
 
Thanks, g_edgar. I didn't know that about measure theory, that you can handle infinite games and game sequences directly without having to consider them as limits of finite games. That clears up a lot of confusion.

So, in the measure-theoretic approach to probability, the "Martingale" doubling-up betting system works in principle! The roulette players were right...except for that small technicality about finite bankrolls and maximum bet limits. :smile:

I have been trying to prove to myself the intuitive result that no betting system can turn a game in which each individual bet is unfavorable into a game that is favorable in the long run. But I couldn't see why a max bet limit or finite bankroll was really necessary to ensure this. Now I can focus on proving the result for the realistic case in which both those conditions hold. Ed Thorp's book Elementary Probability is supposed to contain a proof of this, but I haven't been able to find a copy of it at nearby libraries.
 
You can find the book in full here:

http://www.edwardothorp.com/sitebuildercontent/sitebuilderfiles/ElementaryProbability.pdf

Have a look at the other stuff on the site too, the articles are very interesting.
Ed Thorp has really been an inspiration for me.
 
The Investor said:
You can find the book in full here:

http://www.edwardothorp.com/sitebuildercontent/sitebuilderfiles/ElementaryProbability.pdf

Have a look at the other stuff on the site too, the articles are very interesting.
Ed Thorp has really been an inspiration for me.

Thank you! Exactly what I needed. :smile:
 
Hmm. I am trying to follow the outline of the proof of the above result about betting systems given in problems 13 and 14 on page 85, and right from the start I get lost:

Elementary Probability said:
5.13 Failure of the classical gambling systems. A bet in a gambling game is a random variable. Most (but not all) of the standard gambling games consist of repeated independent trials, which means that the bets B_i are independent. Further, there is a constant K such that |B_i| <= K for all i.

I know that the outcomes, win or lose, are assumed to be independent for the game in question (betting on red at roulette, for example). But the bet size for a given trial generally depends on the outcomes of the earlier trials: that is the whole idea of a money management system. So if \epsilon_i, which takes on values of +1 or -1, is the random variable representing the outcome on the ith trial, and W_i(\epsilon_1, ...,\epsilon_{i-1}) is the amount wagered on that trial, then in Thorp's notation the random variable for the bet is

B_i = \epsilon_i W_i

So B_i and B_j are not usually independent. In the special case that the probability of success for each trial is 1/2, the covariance of B_i and B_j would be zero, but we are interested only in games where the expected value is negative for each trial.
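That dependence is easy to exhibit with exact arithmetic. As a hypothetical illustration (my own toy system, not Thorp's setup), suppose the wager doubles to 2 after a loss and stays at 1 after a win; then Cov(B_1, B_2) is nonzero whenever the coin is biased:

```python
from fractions import Fraction

def bet_covariance(p):
    """Cov(B1, B2) for i.i.d. outcomes eps_i = +1 w.p. p, -1 w.p. 1-p,
    with wager W2 = 2 after a loss on trial 1 and W2 = 1 after a win.
    B_i = eps_i * W_i, so W2 depends on eps_1 even though the trial
    outcomes themselves are independent."""
    q = 1 - p
    e_b1 = p - q                                   # E[eps1 * 1]
    e_w2 = p * 1 + q * 2                           # E[W2]
    e_b2 = e_w2 * (p - q)                          # eps2 independent of W2
    e_b1b2 = (p * 1 * 1 + q * (-1) * 2) * (p - q)  # E[eps1*W2] * E[eps2]
    return e_b1b2 - e_b1 * e_b2

print(bet_covariance(Fraction(1, 2)))    # 0: fair coin, bets uncorrelated
print(bet_covariance(Fraction(18, 38)))  # nonzero at roulette-red odds
```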

Did I misunderstand something?

EDIT:
I have started a new thread for this since I have deviated from the original topic of fair games:

https://www.physicsforums.com/showthread.php?p=2645936#post2645936
 
