
B How to determine when to take a bet?

  1. Apr 13, 2017 #1
    Is expected value all that matters? I have heard of the Kelly criterion but what should you do if you cannot allocate the optimal amount?

    For example, if you have a 0.01% chance of winning $100,000,000 but a 99.9% chance of losing $10,000 and you could only bet once, would you accept the bet?

From a mathematical perspective, if the expected return is positive, as in the case above, should you always take the bet? If not, why not? How do you determine whether or not to bet from the information above (from a rational, statistical approach)?
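As a quick sanity check, the expected value of the bet as posed can be computed directly. Note that the stated probabilities (0.01% win, 99.9% loss) don't sum to 100%, so both plausible readings are shown:

```python
def expected_value(p_win, win, loss):
    """Expected return of a bet that pays `win` with probability p_win
    and otherwise costs `loss`."""
    return p_win * win - (1 - p_win) * loss

WIN, LOSS = 100_000_000, 10_000

ev_small = expected_value(0.0001, WIN, LOSS)  # literal 0.01% reading
ev_large = expected_value(0.001, WIN, LOSS)   # 0.1% reading (complements 99.9%)

print(f"EV (0.01% win): ${ev_small:,.2f}")  # 10,000 - 9,999 = $1.00
print(f"EV (0.1% win):  ${ev_large:,.2f}")  # 100,000 - 9,990 = $90,010.00
```

Interestingly, under the literal 0.01% reading the expected value is barely positive, while under the 0.1% reading it is strongly positive.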
  3. Apr 13, 2017 #2


    Staff: Mentor

My feeling is that it depends on what $10,000 means to you.

If it's all you have, then no, it's not a good bet. If it's 1% of your money, then perhaps it's okay.

You can see this more easily if you scale it down to a $1 bet and $10,000 winnings.

There is a gambling strategy that says to bet no more than half your money. If you lose and want to try again, bet half of what remains. Basically, never go broke and never double your bet to win back what you lost on a previous bet.
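The fractional-staking idea described above can be sketched in a short simulation. The even-money 50/50 odds below are hypothetical, chosen only to illustrate that staking a fixed fraction, rather than doubling after losses, can never drive the bankroll below zero:

```python
import random

def simulate_half_stake(bankroll, rounds, p_win=0.5, seed=0):
    """Repeatedly stake half the current bankroll on a hypothetical
    even-money bet. The bankroll can shrink, but it can never go
    negative, because each stake is a fraction of what remains."""
    rng = random.Random(seed)
    for _ in range(rounds):
        stake = bankroll / 2          # never bet more than half
        if rng.random() < p_win:
            bankroll += stake
        else:
            bankroll -= stake
    return bankroll

final = simulate_half_stake(1000.0, 100)
print(f"final bankroll: {final:.6f}")
assert final > 0  # fractional staking cannot go broke outright
```

Note that on a fair even-money bet this scheme still tends to shrink the bankroll over time (the multiplicative factors 1.5 and 0.5 average below 1 geometrically); its virtue is avoiding outright ruin, not guaranteeing growth.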
  4. Apr 13, 2017 #3
    Thank you for your response. Is there a reason why you should use half your money? Why not a two thirds or a quarter? I would really appreciate it if you could provide a mathematical/statistical explanation why this is the case.
  5. Apr 13, 2017 #4
    Also, how much would you need to have in your bank in order for the case above to be a "good" bet. Where is the threshold? How do you determine that?

    Sorry if I seem a little inquisitive, I have limited understanding of statistics and I am really curious.
  9. Apr 13, 2017 #8


    Science Advisor
    Gold Member
    2017 Award

A single bet only requires knowing the expected value to you. (If losing $10,000 would be disastrous, then you shouldn't consider only the dollar value.) The more complicated problem is to determine a policy for a series of bets in which you might go broke and have to stop playing. That is a different problem, and the Kelly criterion addresses it: it attempts to determine how much to bet (or invest) at each step of a series of bets.
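For reference, for a simple win/lose bet the Kelly criterion has a closed form: with win probability p and payout odds b (dollars won per dollar staked), the optimal stake is the fraction f* = p − (1 − p)/b of your bankroll. A sketch applied to the bet in post #1, assuming the 0.1%-win reading:

```python
def kelly_fraction(p_win, odds):
    """Kelly-optimal fraction of bankroll to stake on a bet that
    pays `odds` per unit staked with probability p_win."""
    return p_win - (1 - p_win) / odds

# Bet from post #1 (0.1% reading): risk $10,000 to win $100,000,000.
p = 0.001
b = 100_000_000 / 10_000   # odds of 10,000-to-1

f = kelly_fraction(p, b)
print(f"Kelly fraction: {f:.6f}")          # ~0.0009, i.e. ~0.09% of bankroll

# Since the stake here is fixed at $10,000, Kelly would only endorse the
# bet if your bankroll were at least stake / f, roughly $11 million.
print(f"implied bankroll: ${10_000 / f:,.0f}")
```

This also gives a concrete (if stylized) answer to the earlier question about where the threshold lies, under Kelly's assumptions of repeated betting and logarithmic growth.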
  10. Apr 13, 2017 #9


    Science Advisor
    Gold Member

At a high level, the case for evaluating a one-off bet is driven by the arithmetic mean (i.e., the expected value)...

That said, there may be special handling needed for pathological cases where ##E[X] = \infty##. Also, there are some finite-case 'replications' of this (see Pascal's Mugging) that you should be careful about.
  11. Apr 14, 2017 #10


    Science Advisor

Maybe it's more psychology than mathematics, but I think the amount of happiness you feel increases only logarithmically (not linearly) with the amount of money you have. That should also be taken into account in deciding when to bet.

Another relevant psychological effect is the endowment effect.
  12. May 2, 2017 #11
Shouldn't that be a 0.1% chance of winning, rather than a 0.01% chance, given a 99.9% chance of losing? If not, how do you account for the other 0.09%?

    If it's 0.1%, that's 1 in 1,000. With $10,000 bet against $100,000,000, that's the same ratio as 1 against 10,000. If I were allowed to play 10,000 times instead of just once, I should expect to win ≈10 times and lose ≈9,990 times, gaining ≈$1,000,000,000 and losing ≈$99,900,000, for a net gain of ≈$900,100,000, having put $100,000,000 at risk. One thing that might stop me from making such a series of bets would be not having enough money to keep losing until I win.

    Given the one-time-only constraint and my own finances, I'd probably go with $1,000 against $10,000,000 and recruit a wealthier friend to provide the other $9,000 for me to bet against the other $90,000,000 for him.
    Last edited: May 2, 2017
  13. May 5, 2017 #12


    Science Advisor

Hey beamthegreat.

You should look at the martingale strategy (which is very similar to what you are describing), but when you do, keep in mind that you don't have infinite capital to gamble.

The martingale strategy is taught in graduate probability and rests on the same idea you are mentioning: given infinite capital, you can hit the roulette table and eventually come out ahead if you double down every time.

Because infinite capital doesn't really exist, you will eventually lose, which is why you shouldn't use expectation alone to make decisions.
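A minimal simulation of the martingale with finite capital illustrates the point. The 18/38 win probability below assumes an American-roulette even-money bet, and the bankroll figures are only illustrative:

```python
import random

def martingale(bankroll, base_bet, rounds, p_win=18/38, seed=1):
    """Double the stake after each loss, reset after a win.
    Returns ("ruined", round, bankroll) when the player can no
    longer cover the required stake, else ("survived", ...)."""
    rng = random.Random(seed)
    bet = base_bet
    for i in range(rounds):
        if bet > bankroll:
            return ("ruined", i, bankroll)
        if rng.random() < p_win:
            bankroll += bet
            bet = base_bet          # reset after a win
        else:
            bankroll -= bet
            bet *= 2                # double down to chase the loss
    return ("survived", rounds, bankroll)

# With a $1,000 bankroll and $10 base bet, one losing streak of about
# six or seven spins exhausts the capital, and over a long session such
# a streak is almost certain to occur.
ruined = sum(martingale(1_000, 10, 10_000, seed=s)[0] == "ruined"
             for s in range(100))
print(f"ruined in {ruined}/100 sessions")
```

The doubling schedule grows exponentially, so the bankroll needed to survive a streak of k losses is roughly base_bet × 2^(k+1), which is exactly the "infinite capital" assumption the strategy quietly relies on.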
  14. May 5, 2017 #13


    2017 Award

    Staff: Mentor

    The keyword here is Utility, more precisely expected utility.

    Assign a real number (the utility) to every possible outcome describing how preferable it is, then look at the expected result of a bet vs. not taking the bet, and choose whatever leads to a higher expected utility.
People usually have a utility function that is not linear in their money: most people prefer a guaranteed $1 million over a 10% chance of $10 million. Unless you have multiple billions, the bet offered in the first post leads to an expected utility loss, so don't take it.
    The Kelly criterion is a special case that maximizes the expected logarithm of the money - it is optimal if your utility function is logarithmic.

That rarely works as easily as it sounds. It can be shown that people do not have consistent utility functions.
    - You can lead test subjects into an "I prefer A over B, B over C, and C over A" situation, something that cannot happen with real numbers (a > b > c > a?). By letting them switch from C to B to A and back to C, each time for a small fee, you can get them to lose money voluntarily without gaining anything.
    - You can influence the preference of A over B by adding or not adding another option C (the "irrelevant alternative").

And even if you did have a consistent utility function, you usually do not have full knowledge of all the probabilities involved, and knowledge you gain in the future might change how favorable an outcome is for you. As an example, the person offering the bet in post 1 could reward you with $100,000 for taking a risk.
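The expected-utility test above can be made concrete for a (purely illustrative) logarithmic utility: take the bet only when p·ln(W + win) + (1 − p)·ln(W − loss) > ln(W), where W is your current wealth. A sketch that locates the break-even wealth by bisection, again assuming the 0.1%-win reading of the bet:

```python
import math

def log_utility_gain(wealth, p_win=0.001, win=100_000_000, loss=10_000):
    """Expected change in ln(wealth) from taking the bet once."""
    return (p_win * math.log(wealth + win)
            + (1 - p_win) * math.log(wealth - loss)
            - math.log(wealth))

# Bisect for the wealth at which the bet becomes log-utility neutral:
# the gain is negative just above the $10,000 loss and positive for
# very large wealth (where the positive dollar EV dominates).
lo, hi = 10_001.0, 1e9
for _ in range(100):
    mid = (lo + hi) / 2
    if log_utility_gain(mid) > 0:
        hi = mid
    else:
        lo = mid

print(f"break-even wealth: ${lo:,.0f}")   # on the order of a few million
```

Real risk preferences are usually more conservative than a pure logarithm, so the practical comfort threshold may be much higher than this stylized break-even point.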