One 50% bet is worse than fifty 1% bets?

  • Thread starter: iDimension
  • #51
Taking the game of roulette as an example, should one bet a large amount on black, or a small amount on 23 many times? The argument goes that one will miss 23 60% of the time, while 40% of the time one will hit 23 one or more times.

I think one loses about 2.7% to the house in (single-zero) roulette, so you would lose something like 63% of the time with that strategy, almost two-thirds. But when you won, you would win a lot back, thanks to multiple hits on some occasions, and the average over time would be the -2.7%.

Betting on black, you'd win ~49% of the time (because it lands on 0 sometimes).
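The 60/40 figures above match fifty independent bets at a hypothetical 1-in-100 chance each (the thread's game), not an actual wheel; a quick check:

```python
# Probability of missing on every one of 50 independent 1-in-100 bets
p_miss_all = 0.99 ** 50
print(round(p_miss_all, 3))  # 0.605 -- miss ~60% of the time

# For comparison: 50 spins of an actual single-zero wheel (37 slots)
# miss a given number far less often
p_miss_wheel = (36 / 37) ** 50
print(round(p_miss_wheel, 3))  # 0.254
```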
 
  • #52
verty said:
Taking the game of roulette as an example, should one bet a large amount on black, or a small amount on 23 many times? The argument goes that one will miss 23 60% of the time, while 40% of the time one will hit 23 one or more times.

I think one loses about 2.7% to the house in (single-zero) roulette, so you would lose something like 63% of the time with that strategy, almost two-thirds. But when you won, you would win a lot back, thanks to multiple hits on some occasions, and the average over time would be the -2.7%.

Betting on black, you'd win ~49% of the time (because it lands on 0 sometimes).
Using your roulette example, say the numbers 1-50 are black and the numbers 51-100 are red. I was unsure whether the problem, as phrased, asked if it was better to bet black/red or to take 50 individual numbers on one spin. There is no difference there. But if there were 51 tables and you could bet the number 1 on 50 of them, or black/red on the 51st, I would take the 50 bets on #1.

My reasoning is that the odds of a single win SEEM the same, and then there is a slight chance of hitting twice, or even of #1 coming up on all 50 tables.

When the odds are slightly against you, the best strategy is to bet small amounts and to have a loss limit. Anyone can have a losing streak or a winning streak. Just don't let a losing streak take you to the poorhouse.
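The 50-table reasoning is a binomial count; a short sketch, assuming the thread's hypothetical 1-in-100 chance per table:

```python
from math import comb

# 50 independent tables, each with an assumed 1-in-100 chance of
# showing the chosen number (the thread's game, not real roulette odds)
n, p = 50, 1 / 100

def pmf(k):
    """Probability of exactly k hits across the n tables."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

print(round(pmf(0), 3))               # 0.605 -- no hits
print(round(pmf(1), 3))               # 0.306 -- exactly one hit
print(round(1 - pmf(0) - pmf(1), 3))  # 0.089 -- the "slight chance" of 2+
```

So the chance of at least one hit is about 39.5%, and most of that probability comes from a single hit.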
 
  • #53
votingmachine said:
When the odds are slightly against you, the best strategy is to bet small amounts and to have a loss limit. Anyone can have a losing streak or a winning streak. Just don't let a losing streak take you to the poorhouse.
If the odds are against you then a strategy of making a single minimum bet and then walking away is indeed near optimal. However, a strategy that reverses the order of those two steps is even better.
 
  • Like
Likes mfb
  • #54
jbriggs444 said:
If the odds are against you then a strategy of making a single minimum bet and then walking away is indeed near optimal. However, a strategy that reverses the order of those two steps is even better.
Again, that is an all-or-nothing strategy. If you play a game with odds near a coin flip, you are better off making many small bets: the odds of an outcome far removed from the middle get small.

This is diverging into secondary motivations. If you enjoy gambling, then gamble. If you don't, then don't. I agree that gambling is not all that fun. I've been in Las Vegas for a conference, and I played a small amount of craps. I would not go out of my way to gamble. If you are in Vegas, the best return is to play very small bets, and get a few free drinks (if you like what they have ... I got a free diet coke, and a free coffee). If you don't hit a losing streak, but have mixed results, you walk away about even. And if you like that, then there you go.

I was trying to separate the secondary motivations from the original question by phrasing it as $500 that goes into a single betting set-up, with the results taking the same amount of time. That removes the question of whether you should just walk away with the money in your wallet intact ... which is what I would do with $500. But if you are obliged to choose, then pick the best option. I think it is the 50 bets.

In a REAL casino, there is no betting strategy that assures you of a win. If your loss limit is $500, and you enjoy the process, then spread the bets out.

But in the synthetic question, there is no enjoyment difference to consider. Just the choice.

In a real casino, you can make a different choice also. Instead of 50 bets at 1%, you can make 50 bets at 49%. The expected outcome for that is a small loss. In a real casino, if you have a $500 loss limit, make many small bets (at the house minimum), and hope for a winning streak. But again, that was not the question, or the assumptions.
 
  • #55
votingmachine said:
Again, that is an all-or-nothing strategy. If you play a game with odds near a coin flip, you are better off making many small bets: the odds of an outcome far removed from the middle get small.
A single minimum bet nearly minimizes variance. Playing additional games increases variance. The variance of a sum is the sum of the variances. You minimize that by minimizing the number of [minimum] bets. Betting zero times is optimal if your goal is to minimize variance.
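A minimal check of the variance arithmetic, assuming a fair even-money bet with a $10 stake:

```python
# A fair even-money bet of stake b pays +b or -b with probability 1/2,
# so E[X] = 0 and Var(X) = E[X^2] = b**2.
b = 10.0
var_one = 0.5 * b ** 2 + 0.5 * (-b) ** 2  # variance of a single bet
var_fifty = 50 * var_one                  # independent variances add

print(var_one, var_fifty)  # 100.0 5000.0
```

Zero bets give zero variance, one minimum bet gives the smallest nonzero variance, and every additional independent bet adds its own variance on top.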
 
  • #56
There is a large amount of mathematics on this problem. Here is a nice reference which shows that the basic problem still remains open. There are some nice theorems saying that bold play is optimal but they don't cover all interesting cases. https://www.researchgate.net/publication/266704722_The_re-opening_of_Dubins_and_Savage_casino_in_the_era_of_diversification
 
Last edited by a moderator:
  • #57
With an expected value > 0, the Kelly criterion describes the optimal strategy: on each bet where the payoff is 1:1, the proportion of your capital that should be wagered is 2p - 1, where 0.5 < p < 1 is the probability of winning.

https://en.wikipedia.org/wiki/Kelly_criterion
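A small sketch of that rule in its general-odds form (which reduces to 2p - 1 for an even-money payoff):

```python
def kelly_fraction(p, b=1.0):
    """Optimal fraction of capital to wager on a bet won with
    probability p, where a win returns b units of profit per unit
    staked: f* = (b*p - (1 - p)) / b.  For b = 1 this is 2p - 1."""
    return (b * p - (1 - p)) / b

print(round(kelly_fraction(0.55), 3))  # 0.1 -- bet 10% of capital
print(round(kelly_fraction(0.50), 3))  # 0.0 -- no edge, bet nothing
```

A negative result (p < 0.5 at even money) means the Kelly-optimal stake is zero, which is the situation in any real casino game.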
 
  • #58
Why are we introducing more variables like the psychology of the bettor, the casino environment, roulette, etc.?

None of this exists in my game; it's purely numbers. The goal of the game is to maximise your chances of NOT walking away with $0 after your $500 has been bet, so the question is simple: which option gives the better chance?

1 bet of $500
50 games of $10 (played individually, stopping after you win once)
 
  • #59
iDimension said:
Why are we introducing more variables like the psychology of the bettor, the casino environment, roulette, etc.?

None of this exists in my game; it's purely numbers. The goal of the game is to maximise your chances of NOT walking away with $0 after your $500 has been bet, so the question is simple: which option gives the better chance?

1 bet of $500
50 games of $10 (played individually, stopping after you win once)

Suppose it were a claw game with a 1-in-2 or a 1-in-100 chance. The second strategy would be better because you would save some money if you won the prize early.
 
  • #60
verty said:
Suppose it were a claw game with a 1-in-2 or a 1-in-100 chance. The second strategy would be better because you would save some money if you won the prize early.
Note that this time around the goal has been clearly stated: maximize the probability of not losing everything. With that spelled out, the first strategy is clearly superior. The result is probability 0.5 of not losing everything versus a probability of 0.395 with the second method.
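Both numbers check out directly (using the thread's setup: one $500 bet at 50%, or fifty $10 bets at 1-in-100, stopping at the first win):

```python
# Probability of NOT walking away with $0
p_single = 0.5              # one 50-50 bet of $500
p_spread = 1 - 0.99 ** 50   # you keep something unless all 50 bets lose

print(p_single, round(p_spread, 3))  # 0.5 0.395
```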
 
  • #61
jbriggs444 said:
Note that this time around the goal has been clearly stated: maximize the probability of not losing everything. With that spelled out, the first strategy is clearly superior. The result is probability 0.5 of not losing everything versus a probability of 0.395 with the second method.
That is a difference. I wanted to eliminate the non-numerical factors by phrasing it as a single bet both times: a distributed bet versus a single bet. If the goal is to not lose, then make the one bet at 50%. Loss avoidance favors that.

(0.5)^1 < (0.99)^50, i.e. the probability of losing everything is lower with the single bet.

The scenario is now phrased as sequential single bets, stopping after one win or after losing the total. That rules out a scenario I had already excluded, where the bets are placed simultaneously on the same outcome: the roulette wheel with 100 numbers where you bet 50 of them, which gives the same odds as taking the red-black split. Those 50 bets are not independent events.

Which option has the most favorable winning outcome can still be argued (loss avoidance cannot), because the winning scenario with the sequential bets leaves $500 MINUS $10 per bet placed PLUS the $990 win, while the other leaves $1000.
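Under that reading (sequential $10 bets with a $990 profit on a win, stopping at the first win or when the $500 is gone), the expected final wallet can be checked; since each individual bet is fair, it comes out to exactly the starting $500:

```python
# Win on bet k: you have staked $10 on each of k bets and received
# $1000 back (the last $10 stake plus $990 profit), leaving
# 500 - 10*k + 1000.  Losing all 50 bets leaves $0 and contributes
# nothing to the sum.
expected = sum(
    0.99 ** (k - 1) * 0.01 * (500 - 10 * k + 1000)
    for k in range(1, 51)
)
print(round(expected, 6))  # 500.0
```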
 
  • #62
That's silly though. If the goal is to walk away with something, just bet nothing and walk away with $500 guaranteed.
 
  • #63
xman720 said:
That's silly though. If the goal is to walk away with something, just bet nothing and walk away with $500 guaranteed.
I agree. That is why the scenarios and goals need careful definition. It is better not to put large amounts of money into games of chance. Any selection among the options in this thread should carry the warning that it is a selection among artificially limited, bad choices.
 
  • #64
The best option is to not gamble, sure, but I don't think I need to point this out for the 8th time: this is just a thought question, not an actual real-life scenario, which is why I keep telling people to ignore human state of mind, roulette, casino odds, etc. It's PURELY about the numbers.
 
  • #65
iDimension said:
The best option is to not gamble, sure ...
What makes you say that? The expected outcome of not gambling is $50 in your wallet. The expected value of betting it all on a 50-50 $100 payoff is $50 in your wallet. The expected outcome of making $1 bets for a 1-99 $100 payoff until you either win once or your wallet is empty is $50 in your wallet. The expected outcome of making 50 of those $1 bets, win or lose, is $50 in your wallet.

All four scenarios have the same expected outcome. So what is it that now makes you say the best option is to not gamble? (This is a rhetorical question. The answer follows the next quote.)

... but I don't think I need to point this out for the 8th time, this is just a thought question and not actually a real life scenario which is why I keep telling people to ignore human state of mind, roulette, casino odds etc. It's PURELY about the numbers.
Numbers don't distinguish between the four scenarios in this hypothetical problem; each has the same expected outcome. Human state of mind is the key thing that distinguishes between these four scenarios. Someone who is extremely risk averse will just walk away and not bet at all. A moderate risk taker might instead go for the single 50-50 bet. An even more aggressive risk taker might well go for the 50 1-99 bets.
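Those four expected outcomes can be verified directly, using D H's numbers (a $50 wallet, $1 bets, $100 total payoffs):

```python
# 1. Don't bet at all
no_bet = 50.0

# 2. Bet the whole $50 on a 50-50 chance at a $100 payoff
all_in = 0.5 * 100 + 0.5 * 0

# 3. $1 bets at 1-in-100 for a $100 payoff, until a win or broke;
#    winning on bet k leaves 50 - k + 100 in the wallet
until_win = sum(0.99 ** (k - 1) * 0.01 * (50 - k + 100) for k in range(1, 51))

# 4. Fifty such $1 bets, win or lose; each bet has expected value 0
fifty_bets = 50.0 + 50 * (0.01 * 99 - 0.99 * 1)

print(no_bet, all_in, round(until_win, 6), round(fifty_bets, 6))  # all 50.0
```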

Your example is a zero sum game. There's nothing that makes the gamble worthwhile. Let's make the gamble worthwhile. Suppose you've saved up half of a million dollars. Unless you can live on $20,000 per year, you can't retire on that. Suppose you can set it aside for a decade. One investment advisor gives you a rock solid guarantee that that half of a million will double in that decade. Another advisor offers you a 50-50 gamble. Your half million has a 50% chance of becoming three million in a decade, but it also has a 50% chance of dwindling to nothing in that time. Note that the expected utility of this gamble is considerably more than the rock solid guarantee. Which will you choose?
 
  • Like
Likes pbuk
  • #66
D H said:
Numbers don't distinguish between the four scenarios in this hypothetical problem;
I disagree: you can only say that the Expected Value doesn't distinguish between the four scenarios in this hypothetical problem.

D H said:
Human state of mind is the key thing that distinguishes between these four scenarios. Someone who is extremely risk averse will just walk away and not bet at all. A moderate risk taker might instead go for the single 50-50 bet. An even more aggressive risk taker might well go for the 50 1-99 bets.
I disagree again: we can replace the human state of mind with a utility function that provides numbers to distinguish between these four scenarios.

D H said:
Your example is a zero sum game. There's nothing that makes the gamble worthwhile. Let's make the gamble worthwhile. Suppose you've saved up half of a million dollars. Unless you can live on $20,000 per year, you can't retire on that. Suppose you can set it aside for a decade. One investment advisor gives you a rock solid guarantee that that half of a million will double in that decade. Another advisor offers you a 50-50 gamble. Your half million has a 50% chance of becoming three million in a decade, but it also has a 50% chance of dwindling to nothing in that time. Note that the expected utility of this gamble is considerably more than the rock solid guarantee. Which will you choose?
No, the expected value of the gamble is considerably more than the rock solid guarantee. You can't say anything about the expected value of the utility function until you have defined that utility function: for many people the prospect of destitution in retirement is unacceptable and so this carries a big negative penalty, however if they have at least $20,000 in pension they could work part time for a few years to top this up to an acceptable level. This might be represented by a utility function something like this.
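One purely illustrative way to put numbers on such a function (the log shape, the 25-year drawdown, and the penalty value are all assumptions for the sketch, not a claim about anyone's actual preferences):

```python
import math

def utility(pot, floor=20_000.0):
    """Toy retirement utility: log of the annual income a pot can fund
    (here pot / 25 years), with a large penalty below a subsistence
    floor to represent unacceptable destitution."""
    income = pot / 25.0
    if income < floor:
        return -100.0
    return math.log(income)

guarantee = utility(1_000_000)                        # the pot doubles
gamble = 0.5 * utility(3_000_000) + 0.5 * utility(0)  # the 50-50 offer

print(guarantee > gamble)  # True: the sure thing wins under this utility
```

With a concave-plus-penalty utility like this, the guarantee beats the gamble despite the gamble's higher expected dollar value.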

The thread could be moved to the Statistics and Probability forum, and it could also be locked.
 
  • Like
Likes jbriggs444
  • #67
MrAnchovy said:
I disagree again: we can replace the human state of mind with a utility function that provides numbers to distinguish between these four scenarios.

But that would make the theory useless. I'll give an example.

Suppose you have a classroom and you want to evaluate teaching methods. So you develop a theory about two styles of teaching and wonder which is better. Now if your theory says that the more visual method appeals to more visual learners and the more auditory method appeals to more auditory learners, and the success of either method will depend on the share of visual and auditory learners, it hasn't given you a prediction. You now need to input how many students are visual learners and how many are auditory learners. And you will get an answer like this: "If 40% benefit more from visual learning and 60% benefit more from auditory learning, 40% will prefer the visual method and 60% will prefer the auditory method, and the expected value is 40% times the chosen method's effectiveness for visually preferred learners and 60% of the effectiveness of the chosen method for auditory-preferred learners. Do that calculation and you'll know which method is better." It's not a prediction. A prediction says "method X will be better". And if the theory must wait on psychological profiling of the agent, it isn't a useful theory.

It's like asking, what is the chance this person is lying, and getting the answer, if they have lied in the past, if they dislike you, if they are maniacal, if they distrust you, they are probably lying. But if they don't have those feelings or traits, they are probably telling the truth. This is not a prediction. And if you try to model all of that, you still have to feed in all that information, it's not predicting anything.

And it also makes a grave mistake. You may start to think that learners are either visual or auditory, or that people are either liars or truth tellers. But of course it may come down to the teacher which method is better. It may be that a good teacher can teach either method and a bad teacher can teach neither. It may be that pupils choose the auditory method on the survey because they think it'll mean easier homework, and pupils choose the visual method because they think it'll mean exams are easier to study for. Who knows, right? There is so much variation there, the theory becomes useless at predicting anything.

Yes, it gives you a number that looks good but that number is meaningless and useless, it means nothing.

And in this case, the answer that the gamble, if Poisson, would mean leaving empty-handed 60% of the time, is useless. One doesn't know if it is Poisson because that is a real concern and the whole idea was to not have real concerns be involved. One has to say, "what if the scenario is Roulette, what if the scenario is a claw game?" And then the answer is, take multibets in a claw game and single bets in Roulette. But to me that is not an answer when one has decided not to deal with real concerns. And anyway, it is only useful when one has a real concern in mind and then the question should have been, "How can I maximise my odds at Roulette?", "How can I beat this claw game?", etc. The question as posed is unanswerable and it is wrong to suppose that one can simply add a utility function and have a useful theory. If one has to add a utility function, that means the theory can't predict anything and is splintering into many pieces that each predict one tiny thing, but together they are simply a questionnaire awaiting the real concerns to be supplied.

And, the view that if one plugs in the real concerns, the theory will model the real world, is called Aristotelianism. It says the world is black and white, yes or no, there are no mixtures, no shades. But we know that's not true. Is an apple really red? Is a wave in the ocean really a sine wave or a Fourier composite of sinusoids? Isn't that just an excuse to say it is unpredictable? We still predict the height of the waves when there is an earthquake but is there any theory that predicts how many waves will arrive in an hour and the spacing of them? I think not. And for similar reasons, I reject the height prediction as well. The real world is never going to match the model, the world is not Aristotelian.

When it comes to gambling, there is nothing predictable about it. The whole point is to be unpredictable. If there is ever a game that becomes predictable, it'll be changed or updated so that it isn't anymore. There's a kind of uncertainty principle: as soon as you can predict, you can predict that you will lose.

There is market research that looks at number of houses, buyers in a vicinity, competition, etc, that can produce a number of dollars of potential spend on a product type. But I class that as something different, I don't think of it as the probability of making X dollars selling this product. And I would argue that any theory like that is useless.
 
  • #68
You're entitled to your opinion, but in the real world we have to make decisions based on incomplete information. If we don't decide on which teaching method to use, the class doesn't get taught. If we don't do market research we end up supplying the wrong quantity of product at the wrong price to a market and either we sell out early and have not maximised profit or we end up writing off overstocks.

If it is your argument that decision theory is useless, as in every case we would be no better off using it than if we just guessed what to do, I don't think that argument is supportable.
 
  • #69
verty said:
Suppose you have a classroom and you want to evaluate teaching methods. So you develop a theory about two styles of teaching and wonder which is better. Now if your theory says that the more visual method appeals to more visual learners and the more auditory method appeals to more auditory learners, and the success of either method will depend on the share of visual and auditory learners, it hasn't given you a prediction. You now need to input how many students are visual learners and how many are auditory learners. And you will get an answer like this: "If 40% benefit more from visual learning and 60% benefit more from auditory learning, 40% will prefer the visual method and 60% will prefer the auditory method, and the expected value is 40% times the chosen method's effectiveness for visually preferred learners and 60% of the effectiveness of the chosen method for auditory-preferred learners. Do that calculation and you'll know which method is better." It's not a prediction. A prediction says "method X will be better". And if the theory must wait on psychological profiling of the agent, it isn't a useful theory.
The utility function is the classroom composition.

The best teaching method will depend on the class composition, there is no way to avoid that. You can find an average class composition, in the same way you can find a typical utility function* of humans. Individual persons might still have different utility functions, in the same way individual classes can have different compositions.

* this has been tried, of course, and it has been shown that humans do not follow consistent utility functions. The way equivalent choices are phrased is important, and there are systems where most prefer option A over option B, option B over option C and ... option C over option A.
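That preference cycle has a purely mathematical analogue in nontransitive dice; a quick check of one classic set:

```python
from itertools import product

# One classic nontransitive trio: A beats B, B beats C, and C beats A,
# each with probability 5/9
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    """Probability that die x rolls strictly higher than die y."""
    return sum(a > b for a, b in product(x, y)) / 36

print(round(p_beats(A, B), 3),
      round(p_beats(B, C), 3),
      round(p_beats(C, A), 3))  # 0.556 0.556 0.556
```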
 
  • #70
In a game of blackjack I went with the single high-stakes bet and it paid off.

Most games are designed so that the house wins just a little more than the player.
With that logic, I decided that my best odds were to bet it all on one hand and not suffer the almost inevitable attrition of subsequent plays.
I did. I won.

And then I walked away.
 
  • #71
MrAnchovy said:
I disagree again: we can replace the human state of mind with a utility function that provides numbers to distinguish between these four scenarios.
And how do you obtain that utility function? The answer is of course by interviewing people to ascertain their state of mind. And making overly simplified assumptions. Expected utility theory is, at least per Rabin and Thaler (http://www.researchgate.net/profile/Richard_Thaler/publication/4902162_Anomalies_Risk_Aversion/links/09e4151030d3ce0fdf000000.pdf ), an ex-hypothesis. It is a Norwegian blue (they used exactly those words).

No, the expected value of the gamble is considerably more than the rock solid guarantee.
I agree, I used the wrong word. I should have said expected outcome rather than expected utility. A good portion of people will choose the rock solid guarantee even though the expected outcome of that option is considerably less than the expected outcome of the gamble. That's why I asked iDimension which option he/she would choose.
verty said:
When it comes to gambling, there is nothing predictable about it.
Sure there is. I worked lots of jobs to get myself through college 35-40 years ago: Factory worker, farm hand, cook, pot washer, waiter, college tutor. None of those paid well (most paid minimum wage). My most lucrative job of all was tutoring the naive rich kids at the hoity-toity school I attended about how to play poker. Never big (those online poker games are downright scary); the winnings formed the bulk of my eating, drinking, and womanizing budget. Every once in a while this forced me for a week or two to be a teetotalling monk who sustained himself with cereal, ramen & noodles, and mac&cheese. But most of the time, college was quite nice. If you know the odds, if you know how to read people, and if you know how to hide from your opponents that you have a full boat (or alternatively, jack high), poker is anything but uncertain.

That was small potatoes compared to the legalized gambling I've been doing for the last 35 years. I have a nice little nest egg thanks to being a bit of a cheapskate and also thanks to being a bit risk taking and not having stuffed my hard-earned, cheapskate dollars into my mattress. It's very hard to save up enough for a decent retirement by stuffing your hard-earned savings in your mattress (or even into a bank account). One has to take some risks in life in the face of uncertain and imperfect information.

On an even grander scale, decision making under uncertainty is what makes modern society tick. Modern society would not be what it is were it not for the legalized gambling that goes on in Wall Street, in venture capital firms, in companies big and small that decide whether to bid on some contract, and in the huge number of mom-and-pop companies where the founder said "Forget this working for someone else. I'm going to start the business I've always wanted to run."
 
Last edited by a moderator:
  • #72
D H said:
And how do you obtain that utility function? The answer is of course by interviewing people to ascertain their state of mind. And making overly simplified assumptions.
That's not how I obtain my utility function for any given scenario, although there are necessarily some simplifications.
D H said:
Expected utility theory is, at least per Rabin and Thaler (http://www.researchgate.net/profile/Richard_Thaler/publication/4902162_Anomalies_Risk_Aversion/links/09e4151030d3ce0fdf000000.pdf ), an ex-hypothesis.
I skimmed that paper. It seems to argue that because a particular naively constructed utility function leads to anomalies when applied over a large, possibly infinite, domain then there is no utility function that does not lead to anomalies over any domain. I reject this.
 
Last edited by a moderator: