The Advantage of Both: A Game of Wallets

  • Thread starter shaan_aragorn
In summary, the conversation discusses a game in which two players put their wallets on the table and whoever has more money must give it to the other. Each player reasons that, since the chance of winning is 50% and the potential gain is greater than the potential loss, the game is worth playing; but the other player reasons the same way. The discussion works through the probability distributions and expected winnings for both players, taking ties and the distribution of money in each wallet into account. Ultimately it is argued that the game cannot be advantageous to both players and that the apparent paradox rests on a faulty assumption about the odds.
  • #1
shaan_aragorn
You and your friend put your wallets on the table. The money is to be counted, and whoever has more has to give it to the other. You reason that if you lose, you'll lose the money that you have in your wallet, but if you win, then you know you'll win more than that amount. Meaning, what you stand to gain is greater than what you stand to lose, and since your chances of winning must be 50%, you figure you should play the game. But your friend thinks the same way. How can the game be to the advantage of both?
 
  • #2
Very interesting question. Though I can't think of an obvious solution, nor, in fact, one that I can work all the way through, I've decided after mature consideration to go over what I think so far.

I believe the answer lies in the assumption that the odds of winning are 50% regardless of the amount of money you possess. This depends heavily on the odds of the other person in fact having various amounts. If, for instance, you have $0 (your wallet is empty), your odds of winning are 100% (minus, possibly, whatever the odds are of the other person also having $0), and there is no possibility of ever losing. If you have an infinite amount, or whatever is set as the maximum amount possible to have, your odds are 0% (similarly, minus the odds of the other person having the same), and there is no way to win. Clearly, 50% does not hold true for all amounts.

If you are both carrying an "average" amount (regardless of distribution around it), then yes, you are 50% to win. But in that case, is there really a long-term gain? Both wins and losses will in the long term be a little more than the true "average". Here is where I'm out of my league, because I'm not quite sure how to show this generally with any distribution, even a simple one, right off the bat.
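A quick way to see the first point numerically: a minimal Monte Carlo sketch, assuming (purely for illustration) that both wallets hold a uniformly random amount between $0 and $100, showing how the chance of winning falls as your own amount grows.

[code]
# Minimal sketch (illustrative assumption: wallet amounts uniform on $0-$100).
# You "win" when the opponent has more, since the larger wallet is handed over.
import random

random.seed(0)
TRIALS = 100_000

for m in (0, 25, 50, 75, 100):
    wins = sum(1 for _ in range(TRIALS) if random.uniform(0, 100) > m)
    print(f"your wallet = ${m:3d}:  P(win) ~ {wins / TRIALS:.3f}")

# Prints roughly 1.00, 0.75, 0.50, 0.25, 0.00 -- the 50% figure only holds
# when you happen to be carrying the median amount.
[/code]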
 
  • #3
Suppose we have a probability density function for your opponent, P(x), and one for yourself, p(x), so that, for example, the probability that you have between a and b dollars in your wallet is:

[tex]\int _a ^b p(x)\,dx[/tex]

Suppose you have m dollars in your wallet and he has M. Then you expect to win:

[tex]\int _m ^{\infty} xP(x)\, dx - \int _0 ^m mP(x)\, dx[/tex]

He expects to win:

[tex]\int _M ^{\infty} xp(x)\, dx - \int _0 ^M Mp(x)\, dx[/tex]

If M = m and P = p, then you should have equal expected winnings, which means you must both expect to win 0. This means that:

[tex]\int _M ^{\infty} xp(x)\, dx = \int _0 ^M Mp(x)\, dx[/tex]

But this would have to hold for any M, since no matter what M is, as long as we choose m = M, the above must hold. But as M increases, the left side of the equation decreases while the right side increases, so this equation cannot hold for all M. What's wrong here? Maybe we can't assume p = P. Have I made some mistake?
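To make the contradiction concrete, here is a minimal numerical sketch under an assumption not made in the thread: the opponent's wallet follows an exponential density P(x) = e^(-x) (mean $1). It evaluates the expected-winnings expression above on a grid and shows it cannot be zero for every m.

[code]
# Minimal numerical sketch (illustrative assumption: P(x) = exp(-x), mean $1).
# Evaluates E(m) = int_m^inf x P(x) dx  -  m * int_0^m P(x) dx by a Riemann sum.
import numpy as np

def expected_winnings(m, upper=60.0, n=600_000):
    x = np.linspace(0.0, upper, n)
    dx = x[1] - x[0]
    p = np.exp(-x)                                     # assumed density P(x)
    gain = np.sum(np.where(x > m, x * p, 0.0)) * dx    # win his M when M > m
    loss = m * np.sum(np.where(x < m, p, 0.0)) * dx    # lose your m when M < m
    return gain - loss

for m in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"m = {m:.1f}:  E(m) = {expected_winnings(m):+.3f}")

# E(m) comes out positive for small m and negative for large m, so the
# requirement that E(m) = 0 for every m cannot be met.
[/code]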
 
  • #4
I've had probably a half dozen false starts on this. I see something wrong and then, after some thought, realize that it's right.

The only problems I see are ones that I can't quite formalize. In the OP there is really no solid definition of what is supposed to happen at m=M, i.e. a tie. One would assume that both sides keep their money, but that's not really specified. In the equations, that ought to be the case, since both sides count the border case, thus considering it both a win and a loss.

I'm wondering if p (and thus P) can really be applied this way for m=M. After all, if m=M, then isn't p(x)=1 for x=M and 0 otherwise, and vice versa for P and m, rather than distributed? Can one really assume the final equation holds true when, in essence, it's already been stated that all x!=m are not, in fact, happening (which, as I perhaps wrongly understand it, is the reason the former equation equals 0)?

This is, I realize, a rather convoluted case (despite my efforts, I couldn't even manage to do it in LaTeX so it would at least be pretty), but it just seems that m=M sort of introduces non-randomness - basically saying that p(x) is the distribution but we are only considering the cases where m and M fell at the same place. I'm thinking more about it and brushing up on LaTeX...
 
  • #5
No, if you roll a die and it lands 6, then the probability that a die will land 2 does not become 0. If you want, think of this by first saying that IF you had m dollars, then you'd expect to win XYZ. That is, the second equation I gave is the expected winnings when you have m dollars, and you can treat m as a variable. All this means is that your expected winnings are a function only of how much money you have (whereas your ACTUAL winnings are a function of how much you have AND how much he has). The expected winnings of your opponent are likewise a function of how much money he has.

Now if the probability that you have between a and b dollars in your pocket is the same as the probability that your opponent has between a and b dollars in his pocket, for any choice of a and b, then for the both of you, your expected winnings are identical functions of the amount of money in your pocket. Since they are identical functions, if the expected winnings when you have m are positive, then the expected winnings when he has m are also positive. However, if you both have m dollars, then you should both expect to win, i.e. the game would be advantageous to both, which doesn't make sense. Similarly, if E(m) (the expected winnings when you have m dollars) is negative, then the game is disadvantageous to both. Thus the only possibility (assuming it doesn't make sense for the game to be advantageous to both) is that the expected winnings are neither negative nor positive, i.e. they're zero.
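For a concrete illustration (an assumption not made in the thread): if both wallets follow a uniform density on [0, 100], i.e. p(x) = P(x) = 1/100, the expected winnings work out to

[tex]E(m) = \int_m^{100} \frac{x}{100}\,dx - m\int_0^m \frac{1}{100}\,dx = \frac{100^2 - m^2}{200} - \frac{m^2}{100} = 50 - \frac{3m^2}{200},[/tex]

which is positive for m below about $57.7 and negative above it, so E(m) is certainly not zero for every m. That is exactly the tension the argument above runs into.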
 
  • #6
<nods> You're right, it was a very bad way to put it.

I still feel, despite the P(x)-changing thing being bogus, that m=M violates the equation. You will, in the long term, win

[tex]\int _m ^{\infty} xP(x)\, dx - \int _0 ^m mP(x)\, dx[/tex]

*but* that assumes that M is randomly distributed per P(x). Consider, for instance, the special case of P(x) being .5 for x=0,1 and 0 otherwise. Your expected gain with $0 is, per the original function, $.5:

[tex]\int _0 ^{\infty} xP(x)\, dx = 0*.5 + 1*.5[/tex]
[tex] \int _0 ^0 0P(x)\, dx = 0[/tex]

This is true and proper for both sides: with $0 you can expect to win $.5 - provided the other side is randomized between 0 and 1 (which are the only options in this rather simple distribution). If the other side isn't random at all but in fact locked to your amount, that's a whole other story. So essentially, if p=P then m=M cannot be true in all cases (well, except for some silly distributions where only one thing is possible, and for those I think the equation holds anyway). If m=M always, then one of p and P must be a rather narrow distribution tied to the opposite m or M. If p=P, then m is only equal to M p(m) of the time, not all of the time (unless, as above, it's a very narrow distribution where P(m)=1).

I think I got it out a little better, but it might still be blurry, because I thought I had it last time and now that I reread it, it sounds terrible (despite rereading it dozens of times before posting it).

[EDIT] Rephrased the limit on the final "if p=P" statement to make it a little more general and more clear. It should still mean the same thing (I hope).

[EDIT2] As a side note, in the above sample distribution the expected gain on $1 is -$.5, so the expected gain over all amounts is 0, i.e.:

[tex]\int _0 ^{\infty} p(m)\left(\int _m ^{\infty} xP(x)\, dx - \int _0 ^m mP(x)\, dx\right)\, dm = 0[/tex]

... making the gain/loss originally implied in the paradox untrue. I wouldn't be a bit surprised if it holds for sane guesses as to p(x) for the original situation, perhaps even for all of them. I'm too tired, though, and I might not be able to check even the average distributions, so I'll have to get back to it (the general case, I'm sure, I won't be able to do).
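To put the two-point example above in runnable form - a minimal sketch, assuming exactly the distribution used here (each wallet independently holds $0 or $1 with probability .5, ties push) - tabulating the conditional expected gain and the overall expectation:

[code]
# Minimal sketch of the two-point distribution above: P(0) = P(1) = 0.5,
# ties push.  Tabulates the expected gain for each m and the overall average.
P = {0: 0.5, 1: 0.5}          # assumed distribution of the opponent's wallet

def expected_gain(m):
    # win the opponent's M when M > m, lose your own m when M < m, tie -> 0
    return sum(prob * (M if M > m else (-m if M < m else 0))
               for M, prob in P.items())

for m in P:
    print(f"m = {m}: expected gain = {expected_gain(m):+.2f}")   # +0.50, -0.50

overall = sum(P[m] * expected_gain(m) for m in P)
print(f"overall expected gain = {overall:+.2f}")                 # +0.00
[/code]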
 
Last edited:
  • #7
I'm not sure what you're saying. Do we agree that p and P don't depend on m and M? m and M are the actual amounts of money we have at the time, while the probability distributions p and P are fixed. I'm not saying that m = M all the time, but when m = M, the expected winnings should be such that the game is not advantageous to both. There should be no situation where the game is advantageous to both, including the situation m = M. Can that be agreed upon? If so, then the only way that the game is not advantageous to both when m = M is if their expected winnings are both 0, since their expected winnings happen to be the same when m = M.

But since my expected winnings are just a function of how much money I have at the time and not of how much the other guy has, if my expected winnings are 0 when m = M, then they are always 0. In other words, my expected winnings are a constant if treated as a function of M (his money). So, if I happened to find that when M = 2m my expected winnings were a, then my expected winnings should be a for every value of M, since the fact that M = 2m doesn't really matter - my expected winnings don't vary with M. In yet other words, if I have a function which I know to be a constant, and I find that f(x) = a, then I know that f(y) = a for any y.

Assuming this reasoning is sound, then from the fact that my expected winnings are 0 when m = M, my expected winnings are 0 for any m. In other words, pick any m. Since my expected winnings don't vary with M, if we specify an M value and then calculate my expected winnings when I have m dollars and he has M, this gives my expected winnings when I have m dollars and he has any amount. But whenever m = M, the expected winnings should be 0, so the expected winnings should always be 0, which leads to a contradiction, since we get an equation between two things, one of which increases while the other decreases.

Of course, there is some problem in the reasoning here, but what? I guess it's just in the assumption that a game cannot be advantageous to both. I guess we just have to look at what it means to be advantageous to both, and see that this isn't a contradiction. When Tom Brady is healthy, the Patriots usually win. When Roethlisberger is healthy, the Steelers usually win. If the Patriots and Steelers meet to play football and both have healthy quarterbacks, then probability suggests that both can expect to win; however, this isn't a contradiction. If the Patriots have an 80% chance to win this game, then the Steelers have a 20% chance to win this game. However, I think what our statistics tell us is that the Patriots have an 80% chance of winning games like this one, and the Steelers can very well at the same time have an 80% chance of winning games like this one. They can't both have an 80% chance of winning this particular game, but we have to interpret our statistics as saying that they have a good chance of winning games like this, not this particular one.

So going back to our wallet game, when they each have m dollars, they may both have a good chance of winning games like this one (games where they have m dollars) but they can't both have a good chance of winning this game. When you talk about your chances in a specific game, you talk about what the other person has in his wallet (or how healthy their quarterback is), and only when you take both values into account does it become a contradiction to say that the game is advantageous to both. The problem of the game being advantageous to both in general (i.e. regardless of m) was already done away with by rejecting the assumption that p(m < M) = p(m > M), and leaving p variable.
 
  • #8
Well, what I'm essentially saying is that your original function is only true when M is distributed. It's not at all true for a fixed M; it's a function that calculates the gain over *all* M when they are distributed over P. If you fix M to anything (including m), then it's no longer distributed. When looking at a special case for M, the function isn't true - your average gain isn't the average over all M, it's only a constant.

This looks tricky because when m=M it looks kind of like M is still distributed over P. It's not, however; if it were, they wouldn't always be equal. If p=P and m and M are each distributed over p, then m=M only p(m) of the time.

You're right that "mutual advantage" is a little poorly defined. If both have $0, both will appear to have an advantage. This isn't, however, because they both do - the gain is obviously 0 (there's no money even involved) - it's based on the false assumption that the other person may possibly not have $0. There is a difference in the available information here: the opposite side is assumed to be random (per P); otherwise the assumption that you will make

[tex]\int _m ^{\infty} xP(x)\, dx - \int _0 ^m mP(x)\, dx[/tex]

isn't valid. It's only valid because you're (essentially, with 1/inf slices) summing up all cases and dividing by the number of cases. m=M does not represent all cases, only a small subset. That's why I brought up the discrete case, which is much easier to see through. It's OK for both sides to appear to have an advantage from their (rather limited) vantage point; what isn't OK is to have an actual long-term gain when both are taken into account. The sum over all m must be zero when each is summed over all M, but the sum for specific sets of m and M (such as m=M, m=0, m=2M, etc.) can certainly be unbalanced. They aren't based on the same set of assumptions: one is based on m being known and M random, the other on M being known and m random. It's assumed that

[tex]\int _m ^{\infty} xP(x)\, dx - \int _0 ^m mP(x)\, dx = 0[/tex]

because m=M, but the function is assumed to be valid in the first place on the grounds that all M (including those !=m) are taken into account. If only the M that are =m are taken into account, then there is no need to integrate - the function just boils down to mp(m)-Mp(m), i.e. 0. If the above integral is used, then lots of cases where m!=M are counted (according to their probability) even though they cannot in fact happen by virtue of m=M.

OK, I've got another way to put it. Essentially, the original function should be written as

[tex]\int _m ^{\infty} MP(M)\, dM - \int _0 ^m mP(M)\, dM[/tex]

to make it clear what it is you're integrating over. Thinking about it like that, you obviously can't fix M based on m (or, in fact, on anything) and still use it as a variable. The 0-to-infinity span you're integrating over is M, so if it's restricted with respect to m, then you've broken it - it no longer makes sense to use it as a variable.

[EDIT] Took out a few typos. There are probably more.
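A small sketch of that distinction, under an assumption not in the thread (whole-dollar wallets, uniform on $0-$9): it compares the average gain when M really is random over P with the average over only the games where M happens to equal m.

[code]
# Minimal sketch (illustrative assumption: each wallet holds a whole-dollar
# amount from $0 to $9, uniformly).  For a fixed m, compare the average gain
# with M random over P against the average over only the games where M == m.
import random

random.seed(2)
m = 5
over_all_M, over_M_equal_m = [], []

for _ in range(200_000):
    M = random.randrange(10)                      # opponent's wallet, per P
    gain = M if M > m else -m if M < m else 0
    over_all_M.append(gain)
    if M == m:
        over_M_equal_m.append(gain)

print("average gain, M random over P:  %+.3f" % (sum(over_all_M) / len(over_all_M)))
print("average gain, only M == m    :  %+.3f" % (sum(over_M_equal_m) / len(over_M_equal_m)))
# The first (about +0.5 here) is what the integral computes; the second is
# identically zero, because conditioning on M = m leaves only ties.
[/code]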
 
Last edited:
  • #10
shaan_aragorn said:
You and your friend put your wallets on the table. The money is to be counted, and whoever has more has to give it to the other. You reason that if you lose, you'll lose the money that you have in your wallet, but if you win, then you know you'll win more than that amount. Meaning, what you stand to gain is greater than what you stand to lose, and since your chances of winning must be 50%, you figure you should play the game. But your friend thinks the same way. How can the game be to the advantage of both?

There's a 50-50 chance of winning, but the assumption that you are risking less than what you stand to gain is totally wrong. It sounds good, like that riddle with the 3 guys and "where's the $2?". 50% of the time (when you win), 'what you have to gain is greater than what you stand to lose' is true, but the other 50% of the time (when you lose), 'what you stood to gain was less than what you stood to lose'.
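A minimal Monte Carlo sketch of this point, assuming (again purely for illustration) that both wallets are drawn independently and uniformly from $0-$100: knowing only your own amount m, the conditional average gain can look favourable, yet once your own wallet is random as well, the long-run average gain is zero for both players.

[code]
# Minimal Monte Carlo sketch (illustrative assumption: both wallets uniform
# on $0-$100, independent).  Whoever has more hands their wallet over.
import random

random.seed(1)

def gain(m, M):
    """Your gain in one game: win his M if M > m, lose your m if M < m."""
    return M if M > m else -m if M < m else 0.0

def average_gain_given(m, trials=200_000):
    return sum(gain(m, random.uniform(0, 100)) for _ in range(trials)) / trials

for m in (10, 50, 90):
    print(f"holding ${m}: conditional average gain = {average_gain_given(m):+7.2f}")

both_random = sum(gain(random.uniform(0, 100), random.uniform(0, 100))
                  for _ in range(1_000_000)) / 1_000_000
print(f"average gain with both wallets random = {both_random:+.2f}")   # ~ 0
[/code]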
 
  • #11
I'm going to think outside the box here. What if one person has a $100 bill, but the other person has... oh, let's say a $5 bill, some quarters, and a few ones? One person would have more money in value, but the other person would have more money in, well, quantity of objects. So who's actually got more? If you think about it that way, neither will win - unless you just didn't specify this enough and you meant whoever has more money in value.
 
  • #12
You each have a 50% chance of more than doubling your money (the exact amount undefined).

You each have a 50% chance of losing all your money.

There is an INCENTIVE for each of you to play. But both of you share the same odds, thus there is no ADVANTAGE to either of you.

The problem here is one of flawed terminology.
 

