This is probably a noob question; my background in probability theory isn't great, but I was shown this problem in a lecture:

"Suppose a gambler starts out with £n, and makes a series of £1 bets against the house.

Let the probability of winning each bet be p, and of losing be q = 1 − p. If the gambler's capital ever reaches £0, he is ruined and stops playing; he remains at zero."

Then this was stated:

"In fact, he will reach zero with probability one: eventually he is bound to lose all his money, no matter how much he started with, for any value of p with 0 ≤ p < 1."

Apparently this is obvious, but I don't see why. If p is large, say 0.99, why must he eventually hit the £0 boundary?
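One way to build intuition is to simulate the random walk directly. Below is a minimal Monte Carlo sketch (my own illustration, not from the lecture; the function name and parameters are made up) that estimates how often the gambler is ruined within a capped number of steps:

```python
import random

def ruin_probability(n, p, trials=2000, max_steps=10_000, seed=0):
    """Estimate the chance the gambler hits £0 within max_steps,
    starting from £n, winning each £1 bet with probability p.

    This is only an empirical estimate over a finite horizon, not
    the exact infinite-horizon ruin probability."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        capital = n
        for _ in range(max_steps):
            capital += 1 if rng.random() < p else -1
            if capital == 0:  # absorbed at zero: ruin
                ruined += 1
                break
    return ruined / trials
```

Running this with p well below 1/2 gives an estimate near 1, while p = 0.99 gives an estimate near 0 over any horizon I can simulate, which matches the classical result (as I understand it) that against an infinitely rich house the exact ruin probability is min(1, (q/p)^n), which equals 1 only when p ≤ 1/2. So the quoted claim may implicitly assume p ≤ 1/2 or a different setup.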

**Physics Forums - The Fusion of Science and Community**


# Gambler's Ruin - Markov Chain problem


