How to determine when to take a bet?

  • Context: High School
  • Thread starter: beamthegreat
  • Tags: Statistics
SUMMARY

The discussion centers on the decision-making process for taking bets based on expected value and the Kelly criterion. Participants debate the implications of a hypothetical bet with a 0.01% chance of winning $100,000,000 against a 99.9% chance of losing $10,000. The consensus is that while expected value is crucial, personal financial circumstances and psychological factors, such as utility functions and the endowment effect, significantly influence the decision to accept or decline a bet. The Kelly criterion is highlighted as a method for determining optimal bet sizes over a series of bets, emphasizing the importance of managing risk and understanding personal utility.

PREREQUISITES
  • Understanding of expected value in probability theory
  • Familiarity with the Kelly criterion for bet sizing
  • Basic knowledge of utility functions and their impact on decision-making
  • Awareness of psychological effects like the endowment effect
NEXT STEPS
  • Research the mathematical foundations of the Kelly criterion and its applications in gambling
  • Explore the concept of expected utility and how it differs from expected value
  • Investigate the martingale betting strategy and its limitations in real-world scenarios
  • Study psychological biases in decision-making, particularly in financial contexts
USEFUL FOR

This discussion is beneficial for gamblers, financial analysts, and anyone interested in risk management and decision-making under uncertainty. It provides insights into how mathematical principles and psychological factors intersect in the context of betting and investment strategies.

beamthegreat
Is expected value all that matters? I have heard of the Kelly criterion but what should you do if you cannot allocate the optimal amount?

For example, if you have a 0.01% chance of winning $100,000,000 but a 99.9% chance of losing $10,000 and you could only bet once, would you accept the bet?

From a mathematical perspective, if the expected return is positive, as in the case above, should you always take the bet? If not why? How do you determine whether to bet or not from the information above (from a rational, statistical approach).
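
For reference, a minimal sketch of the expected-value arithmetic for the bet as stated (the probabilities in the post do not quite sum to 100%, a point raised later in the thread; the sketch assumes the 0.1% win reading):

```python
# Expected value of the one-off bet described above.
# Assumption: the win probability is 0.1% so the outcomes sum to 100%,
# a reading questioned later in the thread; swap in 0.0001 for a literal 0.01%.
p_win = 0.001            # probability of winning
win_amount = 100_000_000
lose_amount = 10_000

expected_value = p_win * win_amount - (1 - p_win) * lose_amount
print(f"Expected value per bet: ${expected_value:,.2f}")
# With p_win = 0.001 this is $90,010, so the bet has a large positive
# expected value even though you lose 99.9% of the time.
```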
 
My feeling is it depends on what $10,000 means to you.

If it's all you have, then no, it's not a good bet. If it's 1% of your money, then perhaps it's okay.

You can see this more easily if you scale it down to a $1 bet and $10,000 winnings.

There was a gambling strategy that said bet no more than half your money. If you lose and want to try again, bet half of what remains. Basically, never go negative, and never double your bet to win back what you lost in a previous bet.
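
A hedged toy simulation of this fractional-betting idea (the 60% win probability, even-money payoff, and stake sizes are made up for illustration, not taken from the thread):

```python
import random

def simulate(bankroll, fraction, p_win=0.6, rounds=200, seed=0):
    """Repeatedly stake `fraction` of the current bankroll on an even-money
    bet won with probability p_win; return the final bankroll."""
    rng = random.Random(seed)
    for _ in range(rounds):
        stake = bankroll * fraction
        bankroll += stake if rng.random() < p_win else -stake
        if bankroll <= 0:   # only possible when the full bankroll is staked
            return 0.0
    return bankroll

# Staking everything is wiped out by the first loss; any fixed fraction below 1
# never goes broke, and (as the Kelly discussion below makes precise) smaller
# fractions can actually grow faster than aggressive ones.
for f in (1.0, 0.5, 0.2):
    print(f"betting {f:.0%} of bankroll each round -> final bankroll "
          f"{simulate(1000, f):,.2f}")
```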
 
Thank you for your response. Is there a reason why you should use half your money? Why not two thirds or a quarter? I would really appreciate it if you could provide a mathematical/statistical explanation of why this is the case.
 
Also, how much would you need to have in your bank in order for the case above to be a "good" bet? Where is the threshold? How do you determine that?

Sorry if I seem a little inquisitive, I have limited understanding of statistics and I am really curious.
 
A single bet only requires knowing the expected value to you. (If losing 10,000 would be disastrous, then you shouldn't only consider the $ value.) The more complicated problem is to determine a policy for a series of bets where you might go broke and have to stop playing. That is a different problem. The Kelly criterion addresses those problems. It attempts to determine how much to bet (or invest) in a series of bets.
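
For the series-of-bets problem, the usual closed-form Kelly rule is ##f^* = p - (1-p)/b##, where ##p## is the win probability and ##b## the net odds paid per unit staked. A minimal sketch (the 60%/even-money example is illustrative and not from the thread):

```python
def kelly_fraction(p_win, net_odds):
    """Kelly criterion: fraction of bankroll to stake on a bet that pays
    `net_odds` per unit staked and is won with probability `p_win`.
    f* = p - (1 - p) / b. A non-positive result means: don't bet."""
    return p_win - (1 - p_win) / net_odds

# Illustrative numbers (not from the thread): a 60% chance to win even money.
print(kelly_fraction(0.6, 1.0))        # 0.2 -> stake 20% of bankroll

# The bet from post 1, read as a 0.1% win chance paying 10,000-to-1:
print(kelly_fraction(0.001, 10_000))   # ~0.0009 -> stake about 0.09% of bankroll
```

Under the 0.1% reading, the Kelly-optimal stake is about 0.09% of your bankroll, so risking a fixed $10,000 would correspond to a bankroll of roughly $11 million, which is one hedged answer to the "how much would you need in your bank" question above.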
 
FactChecker said:
A single bet only requires knowing the expected value to you. (If losing 10,000 would be disastrous, then you shouldn't only consider the $ value.) The more complicated problem is to determine a policy for a series of bets where you might go broke and have to stop playing. That is a different problem. The Kelly criterion addresses those problems. It attempts to determine how much to bet (or invest) in a series of bets.

At a high level, the case for evaluating a one-off bet is driven by the arithmetic mean (the expected value)...

That said, pathological cases where the expected value is infinite (##E[X] = \infty##) may need special handling. There are also finite-stakes analogues of this (see Pascal's Mugging) that you should be careful about.
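
A standard example of such a pathological case (not named in the post, so treat it as an added illustration) is the St. Petersburg game: flip a fair coin until it lands heads, with the payout doubling on every tail. A quick sketch of why the arithmetic mean alone is a poor guide there:

```python
import random

def st_petersburg_payout(rng):
    """Flip a fair coin until heads; the payout is 2**(number of flips)."""
    payout = 2
    while rng.random() < 0.5:   # tails: keep flipping, double the pot
        payout *= 2
    return payout

# The expected value is 2*(1/2) + 4*(1/4) + 8*(1/8) + ... = 1 + 1 + ... = infinity,
# yet the average of any finite number of plays stays modest.
rng = random.Random(42)
samples = [st_petersburg_payout(rng) for _ in range(100_000)]
print("sample mean of 100,000 plays:", sum(samples) / len(samples))
```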
 
  • #10
Maybe it's more psychology than mathematics, but I think the amount of happiness you feel increases only logarithmically (not linearly) with the amount of money you have. That should also be taken into account when deciding whether to bet.

Another relevant psychological effect is the endowment effect
https://en.wikipedia.org/wiki/Endowment_effect
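
One way to make the logarithmic-happiness idea concrete for the bet in post 1 is to compute the expected change in ##\log(\text{wealth})## from taking it at different bankroll levels. A minimal sketch, assuming a pure log utility and the 0.1% win reading; the bankroll values are illustrative:

```python
import math

def expected_log_utility_gain(bankroll, p_win=0.001, win=100_000_000, lose=10_000):
    """Expected change in log(wealth) from taking the one-off bet,
    assuming a logarithmic utility of total wealth."""
    if bankroll <= lose:
        return float("-inf")   # a loss would wipe you out entirely
    u_now = math.log(bankroll)
    u_after = (p_win * math.log(bankroll + win)
               + (1 - p_win) * math.log(bankroll - lose))
    return u_after - u_now

for bankroll in (15_000, 100_000, 1_000_000, 10_000_000, 100_000_000):
    print(f"bankroll ${bankroll:>11,}: expected log-utility gain = "
          f"{expected_log_utility_gain(bankroll):+.6f}")
```

Under these particular assumptions, the sign flips from negative to positive somewhere between a $1 million and a $10 million bankroll, which is one hedged way to read the "where is the threshold" question earlier in the thread; a different utility function would move that point.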
 
  • #11
beamthegreat said:
Is expected value all that matters? I have heard of the Kelly criterion but what should you do if you cannot allocate the optimal amount?

For example, if you have a 0.01% chance of winning $100,000,000 but a 99.9% chance of losing $10,000 and you could only bet once, would you accept the bet?

From a mathematical perspective, if the expected return is positive, as in the case above, should you always take the bet? If not why? How do you determine whether to bet or not from the information above (from a rational, statistical approach).
Shouldn't that be a .1% chance of winning, rather than a .01% chance, given a 99.9% chance of losing? If not, how do you account for the other .09%?

If it is .1%, that's 1 in 1,000. With $10,000 bet against $100,000,000, that's the same ratio as 1 against 10,000. If I were allowed to play 10,000 times instead of just once, I should expect to win ≈10 times and lose ≈9,990 times, gaining ≈$1,000,000,000 and losing ≈$99,900,000, for a net gain of ≈$900,100,000, having put $100,000,000 at risk. One thing that might stop me from making such a series of bets would be not having enough money to be able to afford to lose until I win.

Given the one-time-only constraint, and my own finances, I'd probably go with $1,000 against $10,000,000 and recruit a wealthier friend to provide the other $9,000 for me to bet against the other $90,000,000 for him.
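
The repeated-play arithmetic above can be checked directly; a quick sketch under the poster's 0.1% reading:

```python
# Check of the repeated-play arithmetic in the post above (0.1% win chance).
plays = 10_000
p_win = 0.001
expected_wins = plays * p_win                  # ≈ 10
expected_losses = plays - expected_wins        # ≈ 9,990

gross_winnings = expected_wins * 100_000_000   # ≈ $1,000,000,000
gross_losses = expected_losses * 10_000        # ≈ $99,900,000
print("expected net gain:  ", gross_winnings - gross_losses)   # ≈ $900,100,000
print("total amount staked:", plays * 10_000)                  # $100,000,000
```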
 
  • #12
Hey beamthegreat.

You should look at the martingale strategy [which is very similar to what you are talking about], but when you do, keep in mind that you don't have infinite capital to gamble with.

The martingale strategy is taught in graduate probability and has the same idea as what you are describing: given infinite capital, you can hit the roulette table and eventually come out ahead by doubling your bet after every loss.

Because infinite capital doesn't really exist, you will eventually lose, which is why you shouldn't use expectation alone to make decisions.
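
To see concretely why finite capital breaks the martingale, here is a hedged toy simulation (the $1,000 bankroll, $1 base stake, and single-zero roulette odds are assumptions for illustration):

```python
import random

def martingale_session(bankroll=1_000, base_stake=1, p_win=18/37,
                       rounds=10_000, seed=None):
    """Classic martingale: double the stake after every loss on a single-zero
    roulette even-money bet. Returns the final bankroll (0 means ruin)."""
    rng = random.Random(seed)
    stake = base_stake
    for _ in range(rounds):
        stake = min(stake, bankroll)   # can't bet more than you have
        if rng.random() < p_win:
            bankroll += stake
            stake = base_stake         # reset after a win
        else:
            bankroll -= stake
            stake *= 2                 # double down after a loss
        if bankroll <= 0:
            return 0
    return bankroll

trials = 1_000
ruined = sum(martingale_session(seed=i) == 0 for i in range(trials))
# Most sessions end in ruin, even though each individual bet is nearly fair.
print(f"went broke in {ruined}/{trials} sessions of 10,000 spins")
```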
 
  • #13
The keyword here is Utility, more precisely expected utility.

Assign a real number (the utility) to every possible outcome describing how preferable it is, then compare the expected utility of taking the bet with that of not taking it, and choose whichever is higher.
People usually have a utility function that is not linear in money: most people prefer a guaranteed $1 million over a 10% chance of $10 million. Unless you already have multiple billions, the bet offered in the first post leads to an expected utility loss, so don't take it.
The Kelly criterion is a special case that maximizes the expected logarithm of your money; it is optimal if your utility function is logarithmic.

That rarely works as easily as it sounds. It can be shown that people do not have consistent utility functions.
- You can lead test subjects into an "I prefer A over B, B over C, and C over A" situation, something that cannot happen with real numbers (a>b>c>a?). By letting them trade from C to B to A and back to C for a small amount of money each time, you can have them lose money voluntarily without gaining anything.
- You can influence the preference of A over B by adding or withholding another option C (the "irrelevant alternative").

And even if you had a consistent utility function: usually you do not have full knowledge of all the probabilities involved. Knowledge you gain in the future might change how favorable an outcome is for you. As an example, the person offering the bet in post 1 could later reward you with $100,000 for having taken the risk.
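
As a numerical cross-check of the statement above that the Kelly criterion maximizes the expected logarithm of your money, this hedged sketch grid-searches the stake fraction that maximizes expected log wealth for an illustrative repeated bet (numbers not from the thread) and compares it with the closed-form Kelly fraction ##p - (1-p)/b##:

```python
import math

def expected_log_growth(fraction, p_win, net_odds):
    """Expected one-round change in log(wealth) when staking `fraction`
    of wealth on a bet paying `net_odds` per unit with win probability p_win."""
    return (p_win * math.log(1 + fraction * net_odds)
            + (1 - p_win) * math.log(1 - fraction))

p_win, net_odds = 0.6, 1.0   # illustrative even-money bet won 60% of the time
fractions = [i / 1000 for i in range(0, 1000)]
best = max(fractions, key=lambda f: expected_log_growth(f, p_win, net_odds))

print("fraction maximizing expected log wealth:", best)                           # ~0.2
print("closed-form Kelly fraction p - (1-p)/b: ", p_win - (1 - p_win) / net_odds)  # 0.2
```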
 
