- #1
JesseC
This is probably a noob question; my background in probability theory isn't great, but I was shown this problem in a lecture:
"Suppose a gambler starts out with £n, and makes a series of £1 bets against the house.
Let the probability of winning each bet be p, and of losing be q = 1 − p. If the gambler’s capital ever reaches £0, he is ruined and stops playing; he remains at zero."
Then this was stated:
"In fact, he will reach zero with probability one: eventually he is bound to lose all his money, no matter how much he started with given any value of p between 0 ≤ p < 1."
Apparently this is obvious, but I don't see why it is obvious. If p is large, say 0.99, why should he always, at some point, reach the £0 boundary?
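For intuition (this is not from the lecture, just a sketch), one way to probe the claim is a quick Monte Carlo simulation of the walk. The starting capital, number of trials, and the cap on the number of bets below are arbitrary illustrative choices; because the number of bets is capped, the estimates are only lower bounds on the true ruin probability.

```python
import random

def estimate_ruin_probability(n, p, trials=1000, max_bets=10_000):
    """Fraction of simulated gamblers who hit £0 within max_bets bets,
    starting with £n and winning each £1 bet with probability p."""
    ruined = 0
    for _ in range(trials):
        capital = n
        for _ in range(max_bets):
            capital += 1 if random.random() < p else -1
            if capital == 0:          # absorbed at the £0 boundary
                ruined += 1
                break
    return ruined / trials

# Illustrative parameters, not from the lecture; capping the bets means
# these are lower bounds on the probability of ever being ruined.
print(estimate_ruin_probability(10, 0.5))   # typically above 0.9
print(estimate_ruin_probability(10, 0.99))  # essentially 0 within the cap
```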
"Suppose a gambler starts out with £n, and makes a series of £1 bets against the house.
Let the probability of winning each bet be p, and of loosing be q = 1 − p. If the gambler’s capital ever reaches £0, he is ruined and stops playing; he remains at zero."
Then this was stated:
"In fact, he will reach zero with probability one: eventually he is bound to lose all his money, no matter how much he started with given any value of p between 0 ≤ p < 1."
Apparently this is obvious, but I don't see why it is obvious. If p is large, say 0.99... why should he always, at some point reach the £0 boundary?