# Betting systems and independence

## Main Question or Discussion Point

This is a question about a problem (not homework) from Ed Thorp's book, http://www.edwardothorp.com/sitebuildercontent/sitebuilderfiles/ElementaryProbability.pdf. Problem 13 on page 85 outlines a proof that betting systems designed to make an unfavorable game favorable cannot work when there is a maximum bet limit, and Problem 14 outlines an even stronger version. Right from the start, I get lost when he talks about the independence of bets:

> **Elementary Probability said:**
> 5.13 Failure of the classical gambling systems. A bet in a gambling game is a random variable. Most (but not all) of the standard gambling games consist of repeated independent trials, which means that the bets $B_i$ are independent. Further, there is a constant $K$ such that $|B_i| \le K$ for all $i$.
I know that the outcomes, win or lose, are assumed to be independent for the game in question (betting on red at roulette, for example). But the bet size for a given trial generally depends on the outcomes of the earlier trials: that is the whole idea of a money management system. So if $$\epsilon_i$$, which takes on values of +1 or -1, is the random variable representing the outcome on the ith trial, and $$W_i(\epsilon_1, ...,\epsilon_{i-1})$$ is the amount wagered on that trial, then in Thorp's notation the random variable for the bet is

$$B_i = \epsilon_i W_i$$

So $B_i$ and $B_j$ are not usually independent. In the special case that the probability of success on each trial is 1/2, the covariance of $B_i$ and $B_j$ would be zero, but we are interested only in games where the expected value of each trial is negative.
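To illustrate the dependence concretely (a sketch of my own, not from the book): under a doubling rule the second wager is a function of the first outcome, so $Cov(B_1, B_2) \ne 0$ when $p \ne 1/2$. A quick Monte Carlo estimate with $p = 18/38$ (red at American roulette):

```python
import random

def cov_b1_b2(p=18/38, trials=200_000, seed=0):
    # Doubling (Martingale) rule: wager 1 on trial 1; if it loses,
    # wager 2 on trial 2.  So W_2 is a function of epsilon_1.
    rng = random.Random(seed)
    b1s, b2s = [], []
    for _ in range(trials):
        e1 = 1 if rng.random() < p else -1
        w2 = 1 if e1 == 1 else 2          # bet size depends on the past outcome
        e2 = 1 if rng.random() < p else -1
        b1s.append(e1 * 1)                # B_1 = eps_1 * W_1, with W_1 = 1
        b2s.append(e2 * w2)               # B_2 = eps_2 * W_2
    m1 = sum(b1s) / trials
    m2 = sum(b2s) / trials
    return sum((a - m1) * (b - m2) for a, b in zip(b1s, b2s)) / trials
```

If I did the algebra right, the exact covariance for this rule is $2p(p-1)(2p-1)$, which is positive for $p < 1/2$ and vanishes exactly at $p = 1/2$, matching the fair-game special case mentioned above.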

Did I simply misunderstand Thorp's notation, or am I making a conceptual error?


mathman
It looks like there is some confusion between the outcomes of the bets (independent random variables) and the sizes of the bets, which can be determined by the bettor any way he chooses.

> **mathman said:** It looks like there is some confusion between the outcomes of the bets (independent random variables) and the sizes of the bets, which can be determined by the bettor any way he chooses.
There is, but is it on my part or the book's part?

mathman
I believe the book is talking about outcomes, which are independent. The size limit may be due to the fact that the bettor has only a finite amount of money.

Yes, it is true that the bettor would start out with a finite amount. But I was interpreting $K$ as the maximum bet allowed, a limit which applies regardless of how much money the bettor has. That way, the law of large numbers result would apply even if the player had enough money to continue playing (and, with probability 1, losing) forever. It looks like the total winnings after n rounds of betting are

$$S_n = \sum_{i=1}^nB_i$$

So the $B_i$ must include the bet sizes as well as the outcomes, positive or negative. The law of large numbers usually applies to independent variables, so I think that's why Thorp says the $B_i$ are independent. But I don't see how they could be.
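As a sanity check on that law-of-large-numbers intuition (my own sketch with an arbitrary house maximum, not Thorp's argument): with the wager capped at $K$, the running average $S_n/n$ settles down to a negative number for a doubling system, just as the theorem predicts.

```python
import random

def martingale_average(p=18/38, cap=100, rounds=200_000, seed=1):
    # B_i = eps_i * W_i: the wager W_i doubles after each loss, resets
    # to 1 after a win, and is never allowed to exceed the house max K.
    rng = random.Random(seed)
    total, wager = 0, 1
    for _ in range(rounds):
        eps = 1 if rng.random() < p else -1
        total += eps * wager                        # S_n accumulates the B_i
        wager = 1 if eps == 1 else min(2 * wager, cap)
    return total / rounds                           # S_n / n
```

Despite occasional long recovery streaks, the average per round drifts toward $(2p-1)\,E[W]$, which is negative whenever $p < 1/2$.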

EDIT:

There is a link to the pdf for the book at the top of my first post, in case you need to read the rest of the problem for more context. It's on p. 85.

mathman
I looked at it. It is confusing in the way it is stated - the dependency of the bet size on past outcomes is unclear.

Okay, thanks for taking a look at it. I agree that the wording and notation are not very clear. In another book/article by Dr. Thorp called The Mathematics of Gambling, he talks about systems like the "Martingale" (doubling) and other systems where your wager on round i depends on the outcomes of the previous trials. He says that his book Elementary Probability, linked to above, gives a strong law of large numbers result showing these systems can't work if there is a maximum and minimum bet, and that you will go broke with probability 1 for any finite starting bankroll. So that's what leads me to believe that the $B_i$'s can depend on previous outcomes, even if it doesn't say so in the book.

Other examples of betting systems:

1) The Labouchere, in which you start with a list of numbers and bet the sum of the first and last; you cross those off if you win, but if you lose you add the amount you just lost to the list.
2) The D'Alembert, my personal favorite, in which you increase your bet by one unit after a loss and decrease it after a win, waiting until you have had a sequence of n wins and n losses. That one is cool because Catalan numbers arise in analysing it. I have no idea if the physicist had anything to do with inventing it.

Of course, these and similar systems are useless. But I want a proof of this intuitive fact.
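To make the dependence of the $W_i$ on earlier outcomes explicit, here is the Labouchere bookkeeping as a small helper (my own illustration, not from any of these sources); the wager sequence it returns changes as soon as any past outcome changes:

```python
def labouchere_bets(outcomes, start=(1, 2, 3, 4)):
    # Labouchere: wager the sum of the first and last numbers on the
    # list; cross both off after a win, append the lost wager after a
    # loss.  `outcomes` is a sequence of booleans (True = win).
    nums = list(start)
    wagers = []
    for won in outcomes:
        if not nums:
            break  # list exhausted: the target profit has been reached
        wager = nums[0] + nums[-1] if len(nums) > 1 else nums[0]
        wagers.append(wager)
        nums = nums[1:-1] if won else nums + [wager]
    return wagers
```

For example, starting from the list 1-2-3-4, a first-round win and a first-round loss already lead to different second wagers (5 versus 6), which is exactly the dependence of $W_2$ on $\epsilon_1$.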

There are a few more betting systems here that you might be interested in: http://www.lolblackjack.com/blackjack/betting-systems/

It is true that you could never lose at blackjack or roulette if you had an infinite bankroll and no betting limits (this would certainly be ideal). Each trial is independent of the others, but given these two conditions you would eventually always win with the Martingale system. It would be like flipping a coin 100 times and betting heads on every turn: you would eventually land on it even if you lost 20 times in a row.
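A back-of-envelope way to see why a finite bankroll ruins this (my own made-up numbers, not from the linked page): a bankroll of $2^k - 1$ units survives at most $k$ consecutive doubled losses, and on red at American roulette each spin loses with probability $q = 20/38$, so a bust eventually becomes near-certain:

```python
def ruin_probability(q=20/38, k=10, cycles=1000):
    # One Martingale cycle ends with a win or with k straight losses,
    # which busts a bankroll of 2**k - 1 units.  Treating cycles as
    # independent, this is the chance of at least one bust in `cycles`.
    return 1 - (1 - q ** k) ** cycles
```

With these parameters the bust probability is already roughly 0.8, and it tends to 1 as the number of cycles grows.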

Check out the Parlay and Paroli systems at that link above. The Paroli is a special case of the Parlay where you double, wagering all of your profits after each win (the exact opposite of the Martingale, but safer). I like the Parlay system because you only wager a fraction of your profits instead of risking it all in a double-or-nothing situation. The 1-3-2-6 is another one of my favorites.

Here is another article about Dr. Thorp: http://www.lolblackjack.com/blackjack/professionals/edward-thorp/ He was really into blackjack more than roulette, I think. Read his book about card counting if you really want to "beat" the games.