Gambler's Ruin - Markov Chain problem

In summary, this problem concerns a gambler who starts with a certain amount of money and makes a series of £1 bets against the house. The probability of winning each bet is p, and the probability of losing is q = 1 − p. If the gambler's capital ever reaches £0, they are ruined and must stop playing. The lecture claimed that, no matter how much the gambler starts with, they will eventually reach £0 with probability one for any p with 0 ≤ p < 1. This may seem plausible: given enough time, every possible run of results, including a ruinous losing streak, will eventually occur. The thread works through why that intuition only holds when p ≤ 1/2; for p > 1/2 there is a positive probability the gambler is never ruined.
  • #1
JesseC
This is probably a noob question, background in probability theory isn't great but I was shown this problem in a lecture:

"Suppose a gambler starts out with £n, and makes a series of £1 bets against the house.

Let the probability of winning each bet be p, and of losing be q = 1 − p. If the gambler's capital ever reaches £0, he is ruined and stops playing; he remains at zero."

Then this was stated:

"In fact, he will reach zero with probability one: eventually he is bound to lose all his money, no matter how much he started with given any value of p between 0 ≤ p < 1."

Apparently this is obvious, but I don't see why it is obvious. If p is large, say 0.99... why should he always, at some point reach the £0 boundary?
 
  • #2
I'm no expert, but logically, if he plays for long enough, he will encounter all possible combinations of results.
Eventually he will have a string of losses long enough to bring him to 0, which forces him to stop. Having a high p will simply make this result more improbable, i.e. he will most likely play for longer. The final result is the same.
 
  • #3
Well I remember reading somewhere that in a simple one dimensional random walk, which I guess this problem comes under, given enough time every point will be reached. Thus at some time the £0 point must be reached, so he will always lose.

Haven't seen a proof anywhere... for the moment, think I'll try running a few MATLAB scripts to test out the idea.
 
  • #4
JesseC said:
for the moment, think I'll try running a few MATLAB scripts to test out the idea.

Cool, I'll try to do the same in Python.
 
  • #5
Right now my guy starts off with $100, he bets $1 each time, and he has a 50% chance of winning. He got to 0 on the 21 millionth iteration.

EDIT: Now I'm trying to see how many iterations it takes if he has an 80% chance of winning. (see you all tomorrow haha).

EDIT2: I got impatient and I stopped my program. My guy has played almost 100 million times and he's banked 58 million dollars. :rofl:

EDIT3: I think this logic makes sense if you really accept the idea of infinity. Question: how many times would I have to flip a coin 8 times before getting 7 heads and 1 tail? If there is a finite answer (which of course there is), then there is no reason this logic can't be extended. The answer to this type of question cannot be infinity, therefore the probability of any fixed sequence appearing in an infinite number of tries is 1.
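For anyone who wants to reproduce this, here is a minimal sketch of that kind of simulation (Python; the function name, the $100/$1 defaults, and the max_rounds cutoff are my own illustrative choices, not from the posts above):

[code]
import random

def play_until_ruin(bank=100, p=0.5, max_rounds=10**8):
    """Simulate repeated $1 bets until the bank hits 0 or we give up."""
    rounds = 0
    while bank > 0 and rounds < max_rounds:
        bank += 1 if random.random() < p else -1  # win or lose $1
        rounds += 1
    return rounds if bank == 0 else None  # None: survived the cutoff

# A fair game (p = 0.5) starting from $100: ruin is certain in theory,
# but as the post above found, it can take tens of millions of rounds.
print(play_until_ruin(bank=100, p=0.5))
[/code]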
 
Last edited:
  • #6
Well I did the same except with a £50 initial fund, 50% chance of winning... ran it a few times and got values between 100 and 500,000 iterations.

Put it up to 60%, started it 5 minutes ago... it's still running :D
 
  • #7
JesseC said:
This is probably a noob question, background in probability theory isn't great but I was shown this problem in a lecture:

"Suppose a gambler starts out with £n, and makes a series of £1 bets against the house.

Let the probability of winning each bet be p, and of losing be q = 1 − p. If the gambler's capital ever reaches £0, he is ruined and stops playing; he remains at zero."

Then this was stated:

"In fact, he will reach zero with probability one: eventually he is bound to lose all his money, no matter how much he started with given any value of p between 0 ≤ p < 1."

Apparently this is obvious, but I don't see why it is obvious. If p is large, say 0.99... why should he always, at some point reach the £0 boundary?

Pick up an applied probability book for a more fleshed out exposition of this problem.

One book that I'm familiar with is Introduction to Probability Models by Sheldon Ross.
 
  • #8
JesseC said:
This is probably a noob question, background in probability theory isn't great but I was shown this problem in a lecture:

"Suppose a gambler starts out with £n, and makes a series of £1 bets against the house.

Let the probability of winning each bet be p, and of losing be q = 1 − p. If the gambler's capital ever reaches £0, he is ruined and stops playing; he remains at zero."

Then this was stated:

"In fact, he will reach zero with probability one: eventually he is bound to lose all his money, no matter how much he started with given any value of p between 0 ≤ p < 1."

Apparently this is obvious, but I don't see why it is obvious. If p is large, say 0.99... why should he always, at some point reach the £0 boundary?

You didn't say how much he gets from each win, but assuming it's £1 on top of his original bet, the statement is definitely NOT true - for any p > 1/2 there is (theoretically) a positive probability that he can continue betting indefinitely. In fact, the exact probability can be expressed as a function of p and n, e.g. using branching processes and generating functions.
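For reference, the standard closed form (a textbook result, e.g. in Ross's Introduction to Probability Models; it is not derived in this thread): against an infinitely rich house, the probability of eventual ruin starting from £n is [itex]1[/itex] if [itex]p \le 1/2[/itex], and [itex](q/p)^n[/itex] if [itex]p > 1/2[/itex].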
 
  • #9
bpet said:
You didn't say how much he gets from each win, but assuming it's £1 on top of his original bet, the statement is definitely NOT true - for any p > 1/2 there is (theoretically) a positive probability that he can continue betting indefinitely. In fact, the exact probability can be expressed as a function of p and n, e.g. using branching processes and generating functions.

It's been a VERY long time since I took Finite Math, but intuitively I'd say you HAVE to be right, and if p is close to 1 then the chance of never going broke should be quite high.
 
  • #10
The way you interpret the problem is probably different from the professor's.

If p = .99, then over time the success rate WILL converge to .99. If you have a million euros to start with, then the chance of you going back to 0 is virtually 0. Definitely not guaranteed.

I believe the professor only meant p ≤ .5. If that were the case then yes, you would almost surely eventually go down to 0. But for anything higher than .5, no.
 
Last edited:
  • #11
JesseC said:
This is probably a noob question, background in probability theory isn't great but I was shown this problem in a lecture:

"Suppose a gambler starts out with £n, and makes a series of £1 bets against the house.

Let the probability of winning each bet be p, and of losing be q = 1 − p. If the gambler's capital ever reaches £0, he is ruined and stops playing; he remains at zero."

Then this was stated:

"In fact, he will reach zero with probability one: eventually he is bound to lose all his money, no matter how much he started with given any value of p between 0 ≤ p < 1."

Apparently this is obvious, but I don't see why it is obvious. If p is large, say 0.99... why should he always, at some point reach the £0 boundary?

There are many ways to prove this... but why not just try counting the probabilities of reaching -n in a simple random walk?
 
  • #12
lavinia said:
There are many ways to prove this... but why not just try counting the probabilities of reaching -n in a simple random walk?

So, you disagree w/ all the rest of the answers in the post. Why is that? What is the math that says you are right?
 
  • #13
phinds said:
So, you disagree w/ all the rest of the answers in the post. Why is that? What is the math that says you are right?

Why not think the problem through for yourself? That's all I was saying. The simplest example is a fair coin. You should be able to generalize from that example. Good luck.
 
  • #14
lavinia said:
Why not think the problem through for yourself? That's all I was saying. The simplest example is a fair coin. You should be able to generalize from that example. Good luck.

But I HAVE thought it through. I am convinced that if the probability of winning is .99, then the probability of ever getting to zero is negligible (not zero, but infinitesimal).

I do not understand why you think otherwise, and I do not see that a fair coin is even remotely applicable to this situation, because it has a probability of .5, not of .99.

IF the probability of winning is .5 then I would agree w/ you, but it is not, so I do not.

I ask again: what math makes you think that if the probability of winning is .99 you would have any significant chance of getting to zero?
 
  • #15
After thinking a bit longer and doing some reading, I'm in agreement with phinds, bpet and Firefight.

I believe the Gambler's Ruin result only applies to problems where the expectation value of a bet is zero or less, i.e. the chance of winning is .5 or less. For p > .5, the expectation value of a bet is positive, and over a great number of bets the gambler tends to make more money.

The idea of betting against the house is that, given an opponent with limitless money, no matter what £n the gambler starts with, so long as his probability of winning is .5 or less he will always lose eventually. Apparently casinos make good use of this: they can make money off a fair game just by the fact that they always have more money. Although casino games are rarely fair.

Means I've probably misheard what the lecturer was saying, will ask him tomorrow exactly what he meant.
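To put numbers on the p > .5 case using the closed form noted under post #8 (my own arithmetic, not from the thread): starting with £10 against an infinitely rich house, ruin is certain at p = .5, has probability (0.4/0.6)^{10} ≈ 1.7% at p = .6, and has probability (1/99)^{10} ≈ 10^{-20} at p = .99.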
 
  • #16
JesseC said:
... Means I've probably misheard what the lecturer was saying, will ask him tomorrow exactly what he meant.

Be open to the possibility that he made a mistake and you didn't misunderstand him. Even profs make mistakes from time to time. ( Except Feynman. I really don't think he ever made mistakes. :smile: )
 
  • #17
phinds said:
Be open to the possibility that he made a mistake and you didn't misunderstand him. Even profs make mistakes from time to time. ( Except Feynman. I really don't think he ever made mistakes. :smile: )

If p(win)=0.99 and q(loss)=0.01, and each trial is independent for a fixed bet, then the probability of losing at least once in n trials is [itex] 1 - 0.99^{n}[/itex]. Clearly this probability becomes large for large n. However, the game is over if and only if the gambler loses all his money, and there is always a non-zero probability that the gambler will lose on any given trial.

We have only specified an end point for losing, but not for winning. That's why the gambler's ruin rule holds in theory. Obviously 99 to 1 is pretty good odds; in the real world, you're lucky to get 50-50.
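(A quick numerical check of that expression, using my own arithmetic: with p = 0.99, the probability of at least one loss is 1 − 0.99^69 ≈ 0.50 after 69 trials and about 0.99 after 459 trials. A single loss only costs £1, though, so this by itself says nothing about losing the whole bank.)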
 
Last edited:
  • #18
SW VandeCarr said:
If p(win)=0.99 and q(loss)=0.01, and each trial is independent for a fixed bet, then the probability of losing at least once in n trials is [itex] 1 - 0.99^{n}[/itex]. Clearly this probability becomes large for large n. However, the game is over if and only if the gambler loses all his money, and there is always a non-zero probability that the gambler will lose on any given trial.

We have only specified an end point for losing, but not for winning. That's why the gambler's ruin rule holds in theory. Obviously 99 to 1 is pretty good odds; in the real world, you're lucky to get 50-50.

I don't follow what this has to do with answering the original question. Of COURSE the odds of losing once are significant. My point is that if you start off with much of anything, your chances of ever getting down to zero are INsignificant.
 
  • #19
phinds said:
I don't follow what this has to do with answering the original question. Of COURSE the odds of losing once are significant. My point is that if you start off with much of anything, your chances of ever getting down to zero are INsignificant.

That depends on how much you bet on each trial as a percentage of your resources, and how long you play. The original question was a theoretical one.
 
Last edited:
  • #20
SW VandeCarr said:
That depends on how much you bet on each trial as a percentage of your resources, and how long you play. The original question was a theoretical one.

Fair enough. I was assuming you start with n dollars and bet one dollar per round. If n is at all significant, then my statement stands.
 
  • #21
phinds said:
I don't follow what this has to do with answering the original question. Of COURSE the odds of losing once are significant. My point is that if you start off with much of anything, your chances of ever getting down to zero are INsignificant.

Insignificant at any finite point in time, yes, but it tends to 1 as n → ∞.
 
  • #22
phinds said:
Fair enough. I was assuming you start with n dollars and bet one dollar per round. If n is at all significant, then my statement stands.

I think a popular version of the problem has the gambler doubling his bet in the next round whenever he loses, to recoup the losses from the round before (the classic martingale betting strategy).

But there are different variants of this problem.
 
  • #23
SW VandeCarr said:
If p(win)=0.99 and q(loss)=0.01, and each trial is independent for a fixed bet, then the probability of losing at least once in n trials is [itex] 1 - 0.99^{n}[/itex]. Clearly this probability becomes large for large n.

Does this mean, for example, that if p(win)=0.67 we should stop betting after 3 trials, since 1 − 0.67^3 ≈ 0.70, or that if p(win)=0.8 we should stop betting after 7 trials, since 1 − 0.8^7 ≈ 0.79? In general, stop betting when the probability of at least one loss in n trials reaches p(win)?
 
  • #24
scalpmaster said:
Does this mean, for example, that if p(win)=0.67 we should stop betting after 3 trials, since 1 − 0.67^3 ≈ 0.70, or that if p(win)=0.8 we should stop betting after 7 trials, since 1 − 0.8^7 ≈ 0.79? In general, stop betting when the probability of at least one loss in n trials reaches p(win)?

You can choose the probability at which you're no longer comfortable playing. My point is that if a Markov process has only one terminal state, and that state is "broke", the process must inevitably end in "broke". Mathematically, you don't care about how long it takes.
 
Last edited:
  • #25
SW VandeCarr said:
You can choose the probability at which you're no longer comfortable playing. My point is that if a Markov process has only one terminal state, and that state is "broke", the process must inevitably end in "broke". Mathematically, you don't care about how long it takes.

This reasoning only holds for a process with a finite number of states. If there is no limit to the reserve the gambler can build up, the probability of ruin is not 1 when p > 0.5.
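A small Monte Carlo sketch of this point (Python; the round cap is a practical stand-in for "plays forever", and all names and defaults here are my own illustrative choices):

[code]
import random

def estimate_ruin_probability(n=5, p=0.6, trials=20000, max_rounds=20000):
    """Estimate the chance of ruin starting from n units with win probability p.

    A run that survives max_rounds is counted as never ruined; for p > 0.5
    the walk drifts upward, so this cutoff rarely misclassifies a run.
    """
    ruined = 0
    for _ in range(trials):
        bank, rounds = n, 0
        while bank > 0 and rounds < max_rounds:
            bank += 1 if random.random() < p else -1
            rounds += 1
        ruined += (bank == 0)
    return ruined / trials

# Theory for p > 1/2 says P(ruin) = (q/p)^n; here (0.4/0.6)^5 ≈ 0.13,
# so the empirical estimate should land near that value, not near 1.
print(estimate_ruin_probability())
print((0.4 / 0.6) ** 5)
[/code]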
 

1. What is the Gambler's Ruin problem in the context of Markov Chains?

The Gambler's Ruin problem is a mathematical model of the probability that a gambler loses all their money over a series of bets. In the context of Markov Chains, it is a random walk in which the gambler's fortune (the state of the chain) moves up or down depending on the outcome of each bet, with £0 as an absorbing state.

2. How does the Gambler's Ruin problem relate to real-life gambling situations?

The Gambler's Ruin problem is a simplified version of real-life gambling situations where a player's fortune depends on a series of random events. It can be applied to games like roulette, where each spin is a random event, and the player's fortune changes accordingly. The concept can also be extended to situations like stock market investments, where the value of stocks is subject to unpredictable market fluctuations.

3. What is a Markov Chain and how is it used to model the Gambler's Ruin problem?

A Markov Chain is a mathematical model of a system that moves among a set of states, where the probability of transitioning to the next state depends only on the current state. In the context of the Gambler's Ruin problem, the states are the possible values of the gambler's fortune, and the transition probabilities are p (win £1) and q = 1 − p (lose £1), with £0 absorbing.
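As an illustration (a toy construction of my own, not from the thread), here is how the transition matrix of such a chain might be built for fortunes £0 to £N, with £0 absorbing:

[code]
import numpy as np

def ruin_transition_matrix(N=4, p=0.5):
    """Transition matrix for fortunes 0..N; state 0 is absorbing (ruin).

    N is also made absorbing here, which corresponds to the common
    variant where the gambler stops on reaching a target fortune.
    """
    P = np.zeros((N + 1, N + 1))
    P[0, 0] = 1.0  # ruined: stay at 0 forever
    P[N, N] = 1.0  # target reached: stop playing
    for i in range(1, N):
        P[i, i + 1] = p      # win a £1 bet
        P[i, i - 1] = 1 - p  # lose a £1 bet
    return P

print(ruin_transition_matrix(N=4, p=0.5))
[/code]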

4. What factors influence the outcome of the Gambler's Ruin problem?

The outcome of the Gambler's Ruin problem is influenced by several factors: the gambler's initial capital, the size of each bet, and the probability of winning each bet. These determine the transition probabilities of the Markov Chain and hence the likelihood that the gambler eventually loses all their money.
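For the variant with a finite target (a standard textbook formula, not derived above): if the gambler starts with £i and plays until reaching £0 or £N, the probability of reaching £N before ruin is [itex]\frac{1-(q/p)^i}{1-(q/p)^N}[/itex] when [itex]p \ne 1/2[/itex], and simply [itex]i/N[/itex] when [itex]p = 1/2[/itex]; the ruin probability is one minus this.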

5. Can the Gambler's Ruin problem be solved using other mathematical concepts?

Yes. Markov Chains provide a convenient framework for modeling the Gambler's Ruin problem, but it can also be analyzed directly with other tools from probability theory and stochastic processes, for example difference equations or martingale arguments.
