# Conceptual problem with fair games

techmologist
On p. 428 of Grinstead and Snell's Introduction to Probability, they say that fair* games with an infinite number of states (Markov states, if the game is a Markov process) need not remain fair if an unlimited number of plays is allowed. The example they give is two people, Peter and Paul, tossing a fair coin, betting \$.01 per toss, until Peter is ahead one cent. If they both have unlimited funds, then Peter will surely end up 1 penny ahead.

I've got two questions. First, shouldn't we say that Peter will almost surely end up 1 penny ahead, since there is the possibility (probability 0) that he loses the first toss and never does better than get back to even? Second, does this game even have an expected value? If it does, and if it is zero, then why wouldn't the game still be fair?

I'm guessing the game does not have an expected value (of Peter's gain), since calculating it using different limiting processes gives different results. For example, if you first assume that Peter can only lose some finite amount B, then for a fair coin the expected gain is zero, no matter how large B is. If, however, you allow the coin to be biased and let B tend to infinity, then Peter's expected gain tends toward +1 or minus infinity, depending on whether the coin is biased in his favor or against him. Finally, if you let B increase to infinity and the probability of "Heads" approach 1/2 simultaneously in some way, then the answer depends on whether you approach p(Heads) = 1/2 from the left or the right.
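To see the finite-B behavior concretely, here is a small exact calculation (my own illustration, not from the book) using the standard gambler's-ruin hitting probability for a walk absorbed at +1 and -B:

```python
from fractions import Fraction

def expected_gain(p, B):
    """Expected gain for Peter, who quits when ahead 1 penny but can
    lose at most B pennies, with P(winning a toss) = p.
    Uses the standard gambler's-ruin probability of hitting +1 before -B."""
    p = Fraction(p)
    q = 1 - p
    if p == q:
        # Fair coin: P(reach +1 before -B) = B / (B + 1)
        win_prob = Fraction(B, B + 1)
    else:
        r = q / p
        # Biased coin: P(reach +1 before -B), starting from 0
        win_prob = (1 - r**B) / (1 - r**(B + 1))
    return win_prob * 1 + (1 - win_prob) * (-B)

# Fair coin: expected gain is exactly 0 for every finite cap B
print(expected_gain(Fraction(1, 2), 100))   # 0

# Coin slightly against Peter: gain goes to minus infinity as B grows
print(float(expected_gain(Fraction(49, 100), 100)))

# Coin slightly in Peter's favor: gain approaches +1 as B grows
print(float(expected_gain(Fraction(51, 100), 100)))
```

This is only a sketch of the limiting argument in the paragraph above; the function name and the choice of absorbing barriers are mine.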

* A game is said to be fair if your expected fortune after one round of the game is equal to your starting fortune.

g_edgar
"Almost surely", yes.
This game does have expected value, namely 1 cent. So the game is unfair.

techmologist
g_edgar said:
"Almost surely", yes.
This game does have expected value, namely 1 cent. So the game is unfair.

Okay. How do you calculate the expected value? E = 1*1 cent + 0*everything else = 1 cent? Since Peter is exposing himself to the possibility of losing an unlimited amount of money, even if that happens with probability zero, how do you know that doesn't make the expected value come out to zero or even minus infinity?

Edit:

To make it clearer why I'm having a hard time being sure that the expected value is 1 in the above game, consider a different game. Peter and Paul are still flipping a fair coin, but this time Peter doubles the bet after every loss, and quits when he finally wins one, which puts him ahead by 1 cent. This happens with probability 1. So is Peter's expected gain 1 cent?

If Peter starts with $$B = 2^N - 1$$ pennies, then he can lose at most N times in a row before he runs out of money. The probability that he ends up one penny ahead is

$$P = \frac{1}{2}\left[1 + \frac{1}{2} + \left(\frac{1}{2}\right)^2 + \cdots + \left(\frac{1}{2}\right)^{N-1}\right] = 1-\left(\frac{1}{2}\right)^N$$

whereas the probability that he ends up $$2^N - 1$$ pennies behind is $$1-P = \left(\frac{1}{2}\right)^N$$.

So the expected value is

$$E = \left[1-\left(\frac{1}{2}\right)^N\right]\cdot 1 - \left(\frac{1}{2}\right)^N\left(2^N-1\right) = 0$$

for any value of N.
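The cancellation above can be checked exactly by enumerating the outcomes (a sketch I've added; the function name is mine):

```python
from fractions import Fraction

def doubling_game_expectation(N):
    """Expected gain for the doubling system with a fair coin, when
    Peter's bankroll of 2^N - 1 pennies allows at most N consecutive
    losses. All arithmetic is exact rational arithmetic."""
    half = Fraction(1, 2)
    E = Fraction(0)
    # Peter wins his k-th bet after k-1 losses: probability (1/2)^k, net gain +1
    for k in range(1, N + 1):
        E += half**k * 1
    # Peter loses all N bets: probability (1/2)^N, loss of 2^N - 1 pennies
    E -= half**N * (2**N - 1)
    return E

for N in (1, 5, 20):
    print(N, doubling_game_expectation(N))   # always 0
```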

What is a game in which Peter has an infinite starting bankroll, if not the limit of games in which he has an increasingly large but finite starting bankroll?

g_edgar
That's how probability zero works. In measure theory (which is the model used for probability), the integral of any random variable over an event of probability zero is zero, even for a variable that takes the value minus infinity on that event.

As for your edit: the expectation of the limit can differ from the limit of the expectations. Yes, this can happen in measure theory, and it does happen in this example.
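To spell that out (my added summary, writing $$X_N$$ for Peter's gain in the game truncated at a bankroll of $$2^N - 1$$ pennies):

$$\lim_{N\to\infty} E[X_N] = 0, \qquad E\left[\lim_{N\to\infty} X_N\right] = E[1] = 1$$

The dominated convergence theorem does not apply here, because the possible loss $$2^N - 1$$ is unbounded in N, so nothing forces the limit and the expectation to commute.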

techmologist
Thanks, g_edgar. I didn't know that about measure theory, that you can handle infinite games and game sequences directly without having to consider them as limits of finite games. That clears up a lot of confusion.

So, in the measure-theoretic approach to probability, the "Martingale" doubling-up betting system works in principle! The roulette players were right...except for that small technicality about finite bankrolls and maximum bet limits. I have been trying to prove to myself the intuitive result that no betting system can turn a game in which each individual bet is unfavorable into a game that is favorable in the long run. But I couldn't see why a max bet limit or finite bankroll was really necessary to ensure this. Now I can focus on proving the result for the realistic case in which both those conditions hold. Ed Thorp's book Elementary Probability is supposed to contain a proof of this, but I haven't been able to find a copy of it at nearby libraries.

The Investor
You can find the book in full here:

http://www.edwardothorp.com/sitebuildercontent/sitebuilderfiles/ElementaryProbability.pdf [Broken]

Have a look at the other stuff on the site too, the articles are very interesting.
Ed Thorp has really been an inspiration for me.

techmologist
The Investor said:
You can find the book in full here:

http://www.edwardothorp.com/sitebuildercontent/sitebuilderfiles/ElementaryProbability.pdf [Broken]

Have a look at the other stuff on the site too, the articles are very interesting.
Ed Thorp has really been an inspiration for me.

Thank you!! Exactly what I needed.
techmologist
Hmm. I am trying to follow the outline of the proof of the above result about betting systems given in problems 13 and 14 on page 85, and right from the start I get lost:

Elementary Probability said:
5.13 Failure of the classical gambling systems. A bet in a gambling game is a random variable. Most (but not all) of the standard gambling games consist of repeated independent trials, which means that the bets Bi are independent. Further, there is a constant K such that |Bi| <= K for all i.

I know that the outcomes, win or lose, are assumed to be independent for the game in question (betting on red at roulette, for example). But the bet size for a given trial generally depends on the outcomes of the earlier trials: that is the whole idea of a money management system. So if $$\epsilon_i$$, which takes on values of +1 or -1, is the random variable representing the outcome on the ith trial, and $$W_i(\epsilon_1, ...,\epsilon_{i-1})$$ is the amount wagered on that trial, then in Thorp's notation the random variable for the bet is

$$B_i = \epsilon_i W_i$$

So $$B_i$$ and $$B_j$$ are not usually independent. In the special case that the probability of success on each trial is 1/2, the covariance of $$B_i$$ and $$B_j$$ would be zero; but we are interested only in games where the expected value of each individual bet is negative.
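A small exact check of that covariance claim (my own sketch; the rule "double the wager after a first-toss loss" is just one example of a money management system):

```python
from fractions import Fraction
from itertools import product

def bet_covariance(p):
    """Cov(B1, B2) for a two-toss game where eps_i = +/-1 with
    P(+1) = p, the first wager is 1, and the second wager doubles
    to 2 after a loss.  B_i = eps_i * W_i.
    Computed by exact enumeration of the four outcome sequences."""
    p = Fraction(p)
    probs = {1: p, -1: 1 - p}
    E_B1 = E_B2 = E_B1B2 = Fraction(0)
    for e1, e2 in product((1, -1), repeat=2):
        pr = probs[e1] * probs[e2]
        w2 = 2 if e1 == -1 else 1   # the system: double after a loss
        b1, b2 = e1 * 1, e2 * w2
        E_B1 += pr * b1
        E_B2 += pr * b2
        E_B1B2 += pr * b1 * b2
    return E_B1B2 - E_B1 * E_B2

print(bet_covariance(Fraction(1, 2)))   # 0: fair coin, bets uncorrelated
print(bet_covariance(Fraction(2, 5)))   # nonzero for a biased coin
```

Note that zero covariance is weaker than independence: the bets are uncorrelated at p = 1/2, yet $$B_2$$'s size still depends on the first outcome.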

Did I misunderstand something?

EDIT:
I have started a new thread for this since I have deviated from the original topic of fair games: