Question: Proposed Solution to Two Envelope Paradox

In summary, this thread discusses the two envelope paradox and whether it can be resolved by considering the perspectives of both players instead of just one. The paradox is usually formulated from a single person's perspective when deciding whether to swap an envelope. The thread explores incorporating a two-person perspective, in which one player's gain is the other's loss, and asks whether any incentive to swap remains; the original player needs someone or something to swap with, and the fact that one's gain is the other's loss is consistent with all versions of the paradox. The thread also works through a hypothetical two-player scenario and the possible outcomes of switching envelopes. Ultimately, the thread concludes that the paradox is not resolved by this argument, though there is still no net incentive to swap.
  • #1
AplanisTophet
89
4
Su, Francis, et al. have a short description of the paradox here: https://www.math.hmc.edu/funfacts/ffiles/20001.6-8.shtml

I used that link because it concisely sets forth the paradox both in the basic setting and in the version where the two envelopes contain [itex](\$2^k, \$2^{k+1})[/itex] with probability [itex]\frac{(2/3)^k}{3}[/itex] for each integer [itex]k \geq 0[/itex].

Where the paradox is formulated by considering one person's odds when choosing to swap an envelope, my question is whether the paradox might be resolved by considering it from both swappers' perspectives instead of just one (i.e., for one person to swap, there must be another person for them to swap with).

From a single person's perspective, the paradoxical odds are traditionally given by the equation:

[itex]0.5(0.5x) + 0.5(2x) = 1.25x[/itex]

To incorporate a two-person perspective, the equation would be one person's odds for gain less their opponent's odds for gain, because the opponent's gain comes at their expense. In other words, if you stand a 50/50 shot of losing exactly as much as you stand to gain, there is no longer any incentive to swap envelopes:

[itex][0.5(0.5x) + 0.5(2x)] - [0.5(0.5x) + 0.5(2x)] = 0[/itex]

The result is that neither person improves their odds by swapping. Paradox resolved.

Comments, suggestions, agree, disagree… I'm just fishing here. Thank you!
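The no-net-gain claim above can be sanity-checked with a minimal Monte Carlo sketch (the amounts $10/$20, trial count, seed, and function name are my own illustrative choices): with any fixed pair of amounts, always-swapping and always-keeping have the same expected payout.

```python
import random

def simulate(trials=100_000, small=10, big=20, seed=1):
    """Compare 'always keep' vs 'always swap' over many independent games
    with a fixed envelope pair (small, big)."""
    rng = random.Random(seed)
    keep_total = swap_total = 0
    for _ in range(trials):
        envelopes = [small, big]
        rng.shuffle(envelopes)       # you are handed envelopes[0] at random
        keep_total += envelopes[0]   # strategy 1: keep your envelope
        swap_total += envelopes[1]   # strategy 2: swap for the other
    return keep_total / trials, swap_total / trials

# Both averages come out near (small + big) / 2 = 15: in expectation,
# swapping neither gains nor loses, in line with the netting argument above.
```

Note that this only checks the case of one fixed pair of amounts; it does not by itself address the version with a prior over infinitely many pairs discussed later in the thread.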
 
  • #2
We can also break it down into the four possible scenarios when swapping where, when ambiguously starting with [itex]x[/itex], we have: 25% of the time you'll lose [itex]0.5x[/itex] to your opponent, 25% of the time you'll gain [itex]0.5x[/itex] from your opponent, 25% of the time you'll lose [itex]x[/itex] to your opponent, and 25% of the time you'll gain [itex]x[/itex] from your opponent. The same is true for your opponent.

[itex]x - 0.25(0.5x) + 0.25(0.5x) - 0.25(x) + 0.25(x) = x - 0.25(0.5x) + 0.25(0.5x) - 0.25(x) + 0.25(x)[/itex]

[itex]x = x[/itex]

In addition to what is noticed in the original question, the above shows that there is no incentive to swap. The same general logic works for all consistent versions of the paradox that prey on the ambiguity of the variable [itex]x[/itex] (see the link in the original question for more insight into that statement). I assert that the paradox is resolved.
 
  • #3
AplanisTophet said:
We can also break it down into the four possible scenarios when swapping where, when ambiguously starting with [itex]x[/itex], we have: 25% of the time you'll lose [itex]0.5x[/itex] to your opponent, 25% of the time you'll gain [itex]0.5x[/itex] from your opponent, 25% of the time you'll lose [itex]x[/itex] to your opponent, and 25% of the time you'll gain [itex]x[/itex] from your opponent. The same is true for your opponent.

[itex]x - 0.25(0.5x) + 0.25(0.5x) - 0.25(x) + 0.25(x) = x - 0.25(0.5x) + 0.25(0.5x) - 0.25(x) + 0.25(x)[/itex]

[itex]x = x[/itex]

In addition to what is noticed in the original question, the above shows that there is no incentive to swap. The same general logic works for all consistent versions of the paradox that prey on the ambiguity of the variable [itex]x[/itex] (see the link in the original question for more insight into that statement). I assert that the paradox is resolved.
The two envelope problem does not require two players. You must solve it for one player.
 
  • #4
Zafa Pi said:
The two envelope problem does not require two players. You must solve it for one player.

The original player needs someone (or at least something) to swap with. That someone or something has the same odds when swapping as the swapper and one's gain is the other's loss. These two facts are consistent with all versions of the two envelope paradox, are they not? If that is true, then why can't I use those facts in deriving a solution?
 
  • #5
AplanisTophet said:
The original player needs someone (or at least something) to swap with. That someone or something has the same odds when swapping as the swapper and one's gain is the other's loss. These two facts are consistent with all versions of the two envelope paradox, are they not? If that is true, then why can't I use those facts in deriving a solution?
If I am offered a $1 bill and there is a $10 on the table and Bob asks me if I would like to switch, I would say yes. Now you would point out that the other hypothetical player would lose what I gain so I shouldn't bother?
 
  • #6
Zafa Pi said:
If I am offered a $1 bill and there is a $10 on the table and Bob asks me if I would like to switch, I would say yes. Now you would point out that the other hypothetical player would lose what I gain so I shouldn't bother?

You leave no room for ambiguity as to the value of x (the amount in your and/or Bob's envelopes) in your example. The paradox is built on that ambiguity.

If the only thing you knew was that Bob's 'envelope' contained either 1/10 of yours or 10 times yours, and that similarly all Bob knew about your 'envelope' was that it contained either 1/10 of his envelope or 10 times his envelope, then yes, I would say there is no incentive to switch for either player. Note that I am dealing with the closed envelope version of the paradox (which should have been apparent because I never specified values) which states that you and Bob will just keep switching envelopes forever and never actually get around to opening them.

Note that in the open envelope version of the paradox, no more than one swap could take place. Let's just stick to the ambiguous closed envelope version for now though, fair enough?

So, I ask again the same questions that I did above.
 
  • #7
AplanisTophet said:
So, I ask again the same questions that I did above.
What you have done with your hypothetical other player (B) is show that A switching doesn't do her any good. But we knew this from the beginning.
To resolve the paradox it is necessary to find the flaw in A's calculation that convinced her to switch. You have not done that.
Over and out.
 
  • #8
AplanisTophet said:
We can also break it down into the four possible scenarios when swapping where, when ambiguously starting with [itex]x[/itex], we have: 25% of the time you'll lose [itex]0.5x[/itex] to your opponent, 25% of the time you'll gain [itex]0.5x[/itex] from your opponent, 25% of the time you'll lose [itex]x[/itex] to your opponent, and 25% of the time you'll gain [itex]x[/itex] from your opponent. The same is true for your opponent.

[itex]x - 0.25(0.5x) + 0.25(0.5x) - 0.25(x) + 0.25(x) = x - 0.25(0.5x) + 0.25(0.5x) - 0.25(x) + 0.25(x)[/itex]

[itex]x = x[/itex]

In addition to what is noticed in the original question, the above shows that there is no incentive to swap. The same general logic works for all consistent versions of the paradox that prey on the ambiguity of the variable [itex]x[/itex] (see the link in the original question for more insight into that statement). I assert that the paradox is resolved.

I don't see how it resolves anything.

Go back to the original problem. Let me make it concrete, using the article's numbers (which work out nicer than the 50/50 probability of the original problem, but are equally paradoxical).

Suppose I do the following:
  1. I secretly roll a 6-sided die repeatedly until I get a 1 or a 2. (Typically, it'll take 3 rolls, but it could be more or fewer than that.)
  2. I count up how many rolls it takes. Let [itex]N[/itex] be that number.
  3. I put $[itex]2^N[/itex] into one envelope, and $[itex]2^{N-1}[/itex] into the other envelope.
  4. I mix up the envelopes thoroughly, and let you choose one (assume you have no way of knowing which envelope is which)
  5. You open your envelope and find $x.
  6. I allow you to either keep those $x or swap it for the other envelope. Which do you do?
The answer is certainly not that it doesn't make any difference whether you switch or not. For example, what if x=1? Then you know that the other envelope can't contain 1/2 a dollar. So in that case, it's definitely to your advantage to switch. Do you agree with that?

What if, instead, x=2? Then there are two possibilities:
  1. N = 1 (in that case, if you switch, you will lose $1)
  2. N = 2 (in that case, if you switch you will gain $2)
[EDIT: I got the numbers wrong, originally] If you work out the posterior probabilities, case 1 has probability 60%, and case 2 has probability 40%. So your expectation value for gain/loss when switching is

40% x $2 - 60% x $1 = + $0.20

So even though you're not certain that switching will help (unlike the case where x=1), you really should switch, to maximize your expectation value. Do you agree with that?

So the question is: For what value of x would you say it doesn't make any difference whether you switch or not?
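Steps 1 through 6 above can be sketched as a quick Monte Carlo check (the trial count, seed, and function names are my own illustrative choices):

```python
import random

def deal(rng):
    """One run of the game above: roll a die until it shows 1 or 2
    (probability 1/3 per roll), put $2^N and $2^(N-1) in the two
    envelopes, and shuffle them."""
    n = 1
    while rng.randrange(6) >= 2:  # outcomes 0 and 1 stand in for rolling a 1 or 2
        n += 1
    envelopes = [2 ** n, 2 ** (n - 1)]
    rng.shuffle(envelopes)
    return envelopes[0], envelopes[1]  # (your envelope, the other one)

def gain_when_seeing(x, trials=200_000, seed=2):
    """Average gain from switching, conditioned on your envelope holding $x."""
    rng = random.Random(seed)
    total = count = 0
    for _ in range(trials):
        mine, other = deal(rng)
        if mine == x:
            total += other - mine
            count += 1
    return total / count

# Seeing $1 means the other envelope surely holds $2, a guaranteed +$1;
# conditioned on seeing $2, the average gain comes out near +$0.20,
# matching 40% x $2 - 60% x $1.
```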
 
  • #9
Zafa Pi said:
What you have done with your hypothetical other player (B) is show that A switching doesn't do her any good. But we knew this from the beginning.
To resolve the paradox it is necessary to find the flaw in A's calculation that convinced her to switch. You have not done that.
Over and out.

As I said in another post, suppose the odds for the larger amount are as follows:

  1. With probability 1/3, it is $2
  2. With probability 2/9, it is $4
  3. With probability 4/27, it is $8
  4. With probability 8/81, it is $16
  5. etc.
Now, if you see $2 in your envelope, then there is a probability of 3/5 that the largest amount is $2 (so switching loses you $1), and a probability of 2/5 that the largest amount is $4 (so switching gains you $2). So the expected loss/gain from switching is 2/5 x $2 - 3/5 x $1 = $0.20.

So if you see $2 in your envelope, then there is absolutely nothing "flawed" about the reasoning that you are better off switching.

For each value of x, it is provably true that if you see $x in your envelope, you are better off switching. For each fixed x, you can either prove it mathematically, using conditional probability and Bayesian reasoning, or you could run a simulation to demonstrate it.

The paradox is: why doesn't this imply that "always switch without even looking at your envelope" is the right thing to do?
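The post above notes that for each fixed x the advantage of switching can be proved with conditional probability or demonstrated by simulation. Here is a minimal exact (fraction-arithmetic) sketch of that Bayesian calculation, assuming the prior stated above; the function name is my own:

```python
from fractions import Fraction

def switch_gain(n):
    """Exact expected gain from switching when your envelope shows $2^n,
    under the prior P(smaller amount = $2^k) = (1/3)(2/3)^k, k >= 0."""
    prior = lambda k: Fraction(1, 3) * Fraction(2, 3) ** k
    if n == 0:
        return Fraction(1)  # seeing $1: the other envelope surely holds $2
    # Unnormalized posterior weights; the 1/2 chance of having picked
    # either envelope is common to both cases and cancels.
    w_lower = prior(n)       # pair is ($2^n, $2^(n+1)): switching gains $2^n
    w_higher = prior(n - 1)  # pair is ($2^(n-1), $2^n): switching loses $2^(n-1)
    return (w_lower * 2 ** n - w_higher * 2 ** (n - 1)) / (w_lower + w_higher)

# switch_gain(1) is exactly 1/5 of a dollar (+$0.20), and switch_gain(n)
# is positive for every n, even though blind always-switching cannot help.
```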
 
  • #10
stevendaryl said:
As I said in another post, suppose the odds for the larger amount are as follows:

  1. With probability 1/3, it is $2
  2. With probability 2/9, it is $4
  3. With probability 4/27, it is $8
  4. With probability 8/81, it is $16
  5. etc.
Now, if you see $2 in your envelope, then there is a probability of 3/5 that the largest amount is $2 (so switching loses you $1), and a probability of 2/5 that the largest amount is $4 (so switching gains you $2). So the expected loss/gain from switching is 2/5 x $2 - 3/5 x $1 = $0.20.

So if you see $2 in your envelope, then there is absolutely nothing "flawed" about the reasoning that you are better off switching.

For each value of x, it is provably true that if you see $x in your envelope, you are better off switching. For each fixed x, you can either prove it mathematically, using conditional probability and Bayesian reasoning, or you could run a simulation to demonstrate it.

The paradox is: why doesn't this imply that "always switch without even looking at your envelope" is the right thing to do?
This is a cute stuffing of the St. Petersburg Paradox into an envelope. You can pick your favorite solution from that link. The solution offered by Peters reminded me of Everett's MWI interpretation. :rolleyes:
 
  • #11
stevendaryl said:
So the question is: For what value of x would you say it doesn't make any difference whether you switch or not?

You are addressing the "open envelope" version of the two envelope paradox in your first post. In that case, people know the value of their envelope so they will swap once at most, which is important to distinguish because...

stevendaryl said:
The paradox is: why doesn't this imply that "always switch without even looking at your envelope" is the right thing to do?

...in the "closed envelope" version of the paradox people are still incentivized to swap envelopes, only infinitely many times. To quote Su, Francis, et al. cited in the OP: "But if you do switch, a similar argument would instruct you to switch back... and therefore keep switching!... Some of you may object that the prior distribution has infinite mean, but this does not fully resolve the paradox, since in theory if such a distribution exists, one would still have to wrestle with the paradox of continually switching envelopes!"

I address the closed envelope version of the paradox in my first two posts. In other words, why do we derive a result that says we should swap infinitely many times?

I acknowledged a two-person perspective that also preys on the ambiguity of [itex]x[/itex] (something that can't be done in the open envelope version) to draw up a perfectly valid equation, given the ambiguity of [itex]x[/itex], which can be generalized to any version of the 'closed envelope' two envelope paradox:

[Person one's odds] = [Person two's odds] = 1.25x

[itex][0.5(0.5x) + 0.5(2x)] = [0.5(0.5x) + 0.5(2x)] = 1.25x[/itex]

[itex][x - 0.5(0.5x) + 0.5(x)] = [x - 0.5(0.5x) + 0.5(x)] = 1.25x[/itex]

The above equations obviously prey on the ambiguity of [itex]x[/itex]. If we allow such ambiguity, then we can look at things step by step to construct a different equation showing each person's expected outcomes when swapping. We start by asserting that (step 1) when person one loses 0.5x, person two gains 0.5x (the ambiguous x is the larger of the two values in this case):

[Person one's odds] = [Person two's odds]

[itex]x - 0.5x ... = x + 0.5x ...[/itex]

We continue (step 2) and assert that when person two loses 0.5x then person one gains 0.5x (the ambiguous x is again the larger of the two values in this case):

[itex]x - 0.5x + 0.5x ... = x + 0.5x - 0.5x ...[/itex]

We continue (step 3) and assert that when person one loses x then person two gains x (the ambiguous x is the lesser of the two values in this case):

[itex]x - 0.5x + 0.5x - x ... = x + 0.5x - 0.5x + x ...[/itex]

We conclude the base equation (step 4) by asserting that when person two loses x then person one gains x (the ambiguous x is the lesser of the two values in this case):

[itex]x - 0.5x + 0.5x - x + x = x + 0.5x - 0.5x + x - x = x[/itex]

We now assign probabilities to each of these events. In the basic version of the paradox there is a 25% chance of each step occurring. In other versions we might say that swapping leads to an expected value of, say, [itex]\frac{11}{10}x[/itex]. Notice that this has no impact on the equation: if person one has probability Y of swapping for +0.5x, then person two has the same probability Y of swapping for -0.5x, and vice versa:

[itex]x - Y(0.5x) + Y(0.5x) - Y(x) + Y(x) = x + Y(0.5x) - Y(0.5x) + Y(x) - Y(x) = x[/itex]
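The cancellation in the last equation can be checked mechanically. A minimal sketch using exact rational arithmetic (the function name and sample values are mine); it confirms only that this bookkeeping nets out to x on both sides for any swap weight Y:

```python
from fractions import Fraction as F

def two_person_books(x, y):
    """Evaluate both sides of the final equation above for a given
    ambiguous stake x and swap weight y."""
    person_one = x - y * (F(1, 2) * x) + y * (F(1, 2) * x) - y * x + y * x
    person_two = x + y * (F(1, 2) * x) - y * (F(1, 2) * x) + y * x - y * x
    return person_one, person_two

# The terms cancel pairwise for any x and y, leaving x = x on both sides.
for x in (F(1), F(3), F(15, 2)):
    for y in (F(1, 4), F(1, 2), F(9, 10)):
        assert two_person_books(x, y) == (x, x)
```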
 
  • #12
AplanisTophet said:
...in the "closed envelope" version of the paradox people are still incentivized to swap envelopes, only infinitely many times.
Thank God it isn't more.
 
  • #13
AplanisTophet said:
You are addressing the "open envelope" version of the two envelope paradox in your first post. In that case, people know the value of their envelope so they will swap once at most, which is important to distinguish because...

Okay. I agree that the reasoning leading to blind swapping is clearly fallacious, and it's interesting to see exactly where the mistake lies.

But to me, what's more interesting is the combination of facts that:
  1. Blind swapping is clearly fallacious --- it can't possibly improve your situation.
  2. In the open envelope case, we can argue (in the case of a definite probability distribution) that swapping is clearly advantageous.
  3. But since you come to the same conclusion no matter what you see in your envelope, why not swap blindly?
We can restate this as the following seeming paradox:
  1. There are an infinite number of "situations": Situation 0 = "find $1 in your envelope", Situation 1 = "find $2 in your envelope", ..., Situation N = "find $2^N in your envelope".
  2. For each situation, it is provably to your advantage to switch.
  3. But if you have no idea which situation you're in (closed envelope case), there is no advantage to switching.
If you always end up switching, then how can it help you to look in your envelope?
 
  • #14
stevendaryl said:
We can restate this as the following seeming paradox:
  1. There are an infinite number of "situations": Situation 0 = "find $1 in your envelope", Situation 1 = "find $2 in your envelope", ..., Situation N = "find $2^N in your envelope".
  2. For each situation, it is provably to your advantage to switch.
  3. But if you have no idea which situation you're in (closed envelope case), there is no advantage to switching.
If you always end up switching, then how can it help you to look in your envelope?

I was considering going on to address the open envelope version of the paradox as well...

Before I contemplate that, we have to agree that the math, when applied properly, shows there is no incentive to swap in the closed envelope version of the paradox, rather than just taking that as obvious. You've asserted twice that there is no incentive to swap in the closed envelope version, but the math says otherwise: the traditional formula yields an expected value of 1.25x when swapping regardless of what is in your envelope (barring my restatement of the traditional formula above, or others' attempts, none of which to my knowledge are generally accepted).

So, if you accept my restatement of the traditional formula (post 11 probably being the clearest explanation), then perhaps I'll toy with the open envelope version, but if my restatement is not readily acceptable then I see no point in trying to move on yet. For the record, I don't expect you or anyone to readily accept my restatement. It's only a proposed solution, and I was looking for feedback on it.
 
  • #15
Zafa Pi said:
What you have done with your hypothetical other player (B) is show that A switching doesn't do her any good. But we knew this from the beginning.
To resolve the paradox it is necessary to find the flaw in A's calculation that convinced her to switch. You have not done that.
Over and out.

That's interesting, because B's expected value when swapping is the same as A's. In saying that B doesn't benefit according to the math but A does (I think that's what you mean; if not, that A sees no benefit in swapping according to the math while her own calculation says she does?), you are saying we have math indicating that B doesn't benefit but A does, even though their expected values are equal. It's very confusing what you mean...

Also, how did we know [what] from the beginning? I didn't assume anything to begin with, personally. My intuition almost always lets me down given the type of math that I enjoy (paradoxes, infinite sets, etc.).

You say, "[t]o resolve the paradox it is necessary to find the flaw in A's calculation that convinced her to switch. You have not done that." Fair enough, but can you shed some light on why my rationale (that the traditional formula for the expected value of 1.25x is incomplete) does not suffice to demonstrate that A's calculation is flawed (post 11 is probably my clearest exposition)?
 
  • #16
stevendaryl said:
The paradox is why this doesn't imply that "Always switch without even looking at your envelope" is the right thing to do?
I'm not sure there is a paradox here. If Alice looks in her envelope and then switches, she doesn't then get to switch back. So if she doesn't look and switches, are you allowing her to then switch again? If so, then there is a paradox: she will keep switching forever, or until her wavering function collapses, whichever comes first. However, if she can only switch once, where is the paradox? Bob (at the other envelope) will also switch, and if they both looked, then the sum of their expectations is more than what is in the two envelopes, but so what? That's somewhat unintuitive, but not a paradox.

All of this is a consequence of the infinite mean of the distribution.
 
  • #17
Zafa Pi said:
All of this is a consequence of the infinite mean of the distribution.

To quote again Su, Francis, et al., cited in the OP: "Some of you may object that the prior distribution has infinite mean, but this does not fully resolve the paradox, since in theory if such a distribution exists, one would still have to wrestle with the paradox of continually switching envelopes!"

While there is no uniform cumulative distribution function over [itex]\mathbb{N}[/itex], it is nevertheless possible to devise a theoretical method by which an element of [itex]\mathbb{N}[/itex] is selected randomly and all elements of [itex]\mathbb{N}[/itex] have an equal (however undefined) probability of being selected. That is to say, no one positive integer is favored over any other in terms of its chances of being selected. The method is to partition [0, 1) into a countable number of Vitali sets [itex]V^1, V^2, V^3, \ldots[/itex], select an element [itex]x[/itex] of [0, 1) uniformly at random, and then relate [itex]x[/itex] to a natural number by asserting that if [itex]x \in V^n[/itex], then [itex]f(x) = n[/itex].

So, there is your distribution. I believe it doesn't have infinite mean though, but rather an undefined mean.
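For contrast, the standard obstruction to a uniform distribution on the naturals can be written out in a few lines of LaTeX (a sketch assuming Kolmogorov's countable additivity, with [itex]c[/itex] the common probability of each singleton):

```latex
% Suppose P were uniform on N, so P(\{n\}) = c for every n.
% Countable additivity then forces a contradiction:
1 \;=\; P(\mathbb{N})
  \;=\; \sum_{n=1}^{\infty} P(\{n\})
  \;=\; \sum_{n=1}^{\infty} c
  \;=\;
  \begin{cases}
    0      & \text{if } c = 0,\\
    \infty & \text{if } c > 0.
  \end{cases}
% The Vitali construction sidesteps this only by leaving P(\{n\})
% undefined, i.e. by stepping outside the Kolmogorov framework.
```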
 
  • #18
AplanisTophet said:
While there is no uniform cumulative distribution function over [itex]\mathbb{N}[/itex], it is nevertheless possible to devise a theoretical method by which an element of [itex]\mathbb{N}[/itex] is selected randomly and all elements of [itex]\mathbb{N}[/itex] have an equal (however undefined) probability of being selected. That is to say, no one positive integer is favored over any other in terms of its chances of being selected. The method is to partition [0, 1) into a countable number of Vitali sets [itex]V^1, V^2, V^3, \ldots[/itex], select an element [itex]x[/itex] of [0, 1) uniformly at random, and then relate [itex]x[/itex] to a natural number by asserting that if [itex]x \in V^n[/itex], then [itex]f(x) = n[/itex].

So, there is your distribution. I believe it doesn't have infinite mean though, but rather an undefined mean.
While this has little to do with the OP's problem, it is interesting. We could let V be a Vitali set and let [itex]V_k = V + q_k[/itex] (where [itex]q_k[/itex] is the kth rational), so all the [itex]V_k[/itex] look alike, if that helps. And then have x uniform over the rationals (which is impossible as well).
"So there is your distribution." ? I don't see any distribution.
"That is to say, no one positive integer is favored over any other in terms of its chances of being selected." How do you find the chances?
"all elements of [itex]\mathbb{N}[/itex] have an equal (however undefined) probability of being selected." Sounds like the Jabberwocky.
 
  • #19
AplanisTophet said:
From a single person’s perspective, the paradoxical odds are traditionally given by the equation:

[itex]0.5(0.5x) + 0.5(2x) = 1.25x[/itex]
There is no fixed situation and fixed value of x for which this equation is true.
 
  • #20
FactChecker said:
There is no fixed situation and fixed value of x for which this equation is true.

Yes, that's why I prefer the concrete probability distribution (for the smaller of the two amounts):
  1. P($1) = 1/3
  2. P($2) = 2/9
  3. P($4) = 4/27
  4. In general, P($2^n) = (1/3) × (2/3)^n
For that distribution, if you see $2^n (with n ≥ 1) in your envelope, then there are two possibilities:
  1. With probability 2/5, that's the lower of the two amounts.
  2. With probability 3/5, that's the higher of the two amounts.
There is no way to generate amounts so that the likelihood is 50/50 either way. But the paradox doesn't depend on it being 50/50.
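The fixed posterior can be made explicit with a small exact computation (the function name is mine), assuming the distribution above: for every amount other than the minimum, the chance that the opened envelope holds the lower of the pair is exactly 2/5, never 50/50.

```python
from fractions import Fraction

def p_holding_lower(n):
    """Posterior probability that $2^n (n >= 1) is the smaller of the two
    amounts, given the prior P(smaller = $2^k) = (1/3)(2/3)^k."""
    prior = lambda k: Fraction(1, 3) * Fraction(2, 3) ** k
    # Given that you see $2^n, the pair is either (2^n, 2^(n+1)) with
    # weight prior(n), or (2^(n-1), 2^n) with weight prior(n-1).
    return prior(n) / (prior(n) + prior(n - 1))

# p_holding_lower(n) == Fraction(2, 5) for every n >= 1: the posterior is
# independent of n, yet it is 2/5 vs 3/5, not the 50/50 split that the
# 1.25x formula assumes.
```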
 
  • #21
Zafa Pi said:
"That is to say, no one positive integer is favored over any other in terms of its chances of being selected." How do you find the chances?

In my example (post 17), I was very clear in asserting that the probability of selecting any given natural is undefined. This is because Vitali sets are non-measurable sets. That is why we are left with no cumulative distribution function despite being able to accurately claim that the method results in the random selection of a natural from all of [itex]\mathbb{N}[/itex] with no single natural being favored over any other in terms of its probability of being selected. This is common knowledge though (not Jabberwocky...?).

Then again, we assert when selecting our real uniformly at random from [0, 1) that each real has a 0 probability of being selected, but that the sum of these probabilities equates to one (the old "we're sure to hit the dart board but each real number on the dart board has a 0 probability of being struck" paradox). Yeah, sounds like Jabberwocky to me, but it's nevertheless standard math that I don't care to challenge.
 
  • #22
stevendaryl said:
There is no way to generate amounts so that the likelihood is 50/50 either way. But the paradox doesn't depend on it being 50/50.

Exactly (well, barring using the axiom of choice as demonstrated in my post 17 which isn't a direct construction...). Since you definitely get it, and since we then still "prey on the ambiguity of x" in these cases, any thoughts on my post #11?
 
  • #23
AplanisTophet said:
In my example (post 17), I was very clear in asserting that the probability of selecting any given natural is undefined. This is because Vitali sets are non-measurable sets. That is why we are left with no cumulative distribution function despite being able to accurately claim that the method results in the random selection of a natural from all of [itex]\mathbb{N}[/itex] with no single natural being favored over any other in terms of its probability of being selected. This is common knowledge though (not Jabberwocky...?).
You chose x from [0, 1) with distribution F(x) = x (uniform). Would it have made a difference if the distribution function was G(x) = x^2?

"in terms of its probability of being selected. " What notion of probability are you using here?

My friend Bob just told me that 7 was more favored to be selected than 4. How do I prove him wrong?

I find your construction interesting nonetheless, along with filling 3-space with disjoint unit circles, and the Tarski-Banach theorem.
AplanisTophet said:
Then again, we assert when selecting our real uniformly at random from [0, 1) that each real has a 0 probability of being selected, but that the sum of these probabilities equates to one (the old "we're sure to hit the dart board but each real number on the dart board has a 0 probability of being struck" paradox). Yeah, sounds like Jabberwocky to me, but it's nevertheless standard math that I don't care to challenge.
As you well know, for one familiar with Lebesgue theory there is no paradox.
 
  • #24
Zafa Pi said:
You chose x from [0, 1) with distribution F(x) = x (uniform). Would it have made a difference if the distribution function was G(x) = x^2?

"in terms of its probability of being selected. " What notion of probability are you using here?

My friend Bob just told me that 7 was more favored to be selected than 4. How do I prove him wrong?

I find your construction interesting nonetheless, along with filling 3-space with disjoint unit circles, and the Tarski-Banach theorem.

As you well know, for one familiar with Lebesgue theory there is no paradox.

I'm glad you find this stuff interesting. If you want to pick at it a little more, then consider my original and independent paper. I think that will answer some of your questions.

A Consideration of the Fabled Uniform Distribution Over the Naturals

Introduction:

It is helpful to consider a simple observation pertaining to finite sets in order to get started. Let [itex]A = \{1, 2\}[/itex] and [itex]B = \{3, 4, 5, 6\}[/itex]. Let [itex]f(x) = 1[/itex] if [itex]x[/itex] is odd and [itex]f(x) = 2[/itex] if [itex]x[/itex] is even. Then the function [itex]f[/itex] is a surjection from [itex]B[/itex] onto [itex]A[/itex] that is "uniform" in the sense that selecting an element [itex]x \in B[/itex] uniformly at random results in the selection of [itex]f(x) \in A[/itex] uniformly at random as well. Note that in order to do this, we have effectively partitioned [itex]B[/itex] into subsets the same size as [itex]A[/itex] so that we could biject those subsets with [itex]A[/itex]. Likewise, this work shows how we can partition [itex][0, 1)[/itex] into countable sets that are then bijected with the natural numbers. Consideration is then given to whether selecting an element of [itex][0, 1)[/itex] uniformly at random allows for the selection of a natural number uniformly at random as well.

Definitions:

Let [itex]V^{(0.5, 1)}[/itex] be a set containing one and only one element from each Vitali equivalence class on the interval [itex](0.5, 1)[/itex] (Vitali equivalence classes are equivalence classes of the real numbers that partition [itex]\mathbb{R}[/itex] under the relation [itex]x \equiv y \iff (\exists q \in \mathbb{Q})(x - y = q)[/itex]). The axiom of choice allows for such a selection.

For any real number [itex]r[/itex], let [itex]d(r)[/itex] equal the one and only one element [itex]v \in V^{(0.5, 1)}[/itex] such that [itex]r - v \in \mathbb{Q}[/itex].

Let [itex]h : \mathbb{N} \longmapsto \mathbb{Q} \cap [0, 1)[/itex] be bijective.

Let [itex]k : [0, 1) \longmapsto \mathbb{N}[/itex] be surjective:

[itex]k(x) = \begin{cases} h^{-1}(x - d(x) + 0.5) & x \geq d(x) - 0.5 \\ h^{-1}((x + 1) - d(x) + 0.5) & x < d(x) - 0.5 \end{cases}[/itex]

Let [itex]V^n = \{ x \in [0,1) : k(x) = n \}[/itex] for each [itex]n \in \mathbb{N}[/itex]. We then have [itex]V^{(0.5, 1)} = V^{h^{-1}(0.5)}[/itex], for example. Each [itex]V^{n}[/itex] is a Vitali set by definition, with the collection [itex]\{ V^{n} : n \in \mathbb{N} \}[/itex] forming a partition of [itex][0, 1)[/itex].

Let [itex]x[/itex] be an element of [itex][0, 1)[/itex] selected uniformly at random.

Let [itex]p = k(x)[/itex]. By definition, [itex]p[/itex] has now been selected uniformly from [itex]\mathbb{N}[/itex].

Comments on Uniformity:

A uniform distribution is a concept of translation invariance. For example, if [itex]S[/itex] is a measurable set, we may want the probability of [itex]S[/itex] to be the same as the probability of [itex]\{y : y = z + n, z \in S \}[/itex] for each natural number [itex]n[/itex]. In the case of the function [itex]k[/itex] over the domain [itex][0, 1)[/itex], however, we end up mapping each element of each non-measurable Vitali set [itex]V^{n}[/itex] to a distinct natural number [itex]n[/itex]. Where [itex]a, b \in \mathbb{N}[/itex], it is easy to see that the probability of [itex]x[/itex] falling within [itex]V^{a}[/itex] is equal to the probability of [itex]x[/itex] falling in [itex]V^{b}[/itex] (thus the probability that [itex]p = a[/itex] equals the probability that [itex]p = b[/itex]), but we cannot rely on Lebesgue measure as a means of establishing probability or creating any sort of cumulative distribution function on [itex]\mathbb{N}[/itex]. The probability of selecting any given natural remains undefined.

B.J.K. April 19, 2017
 
  • #25
AplanisTophet said:
easy to see that the probability of [itex]x[/itex] falling within [itex]V^{a}[/itex] is equal to the probability of [itex]x[/itex] falling in [itex]V^{b}[/itex]
Sorry, Kolmogorov patented "probability", you can't use it here.
AplanisTophet said:
thus the probability that p = a equals the probability that p = b
and
AplanisTophet said:
The probability of selecting any given natural remains undefined.
The two lines go together well.

You still didn't answer my three questions in post #23.
 
  • #26
Zafa Pi said:
You chose x from [0,1) with distribution F(x) = x (uniform). Would it have made a difference if the distribution function were [itex]G(x) = x^2[/itex]?

"in terms of its probability of being selected. " What notion of probability are you using here?

My friend Bob just told me that 7 was more favored to be selected than 4. How do I prove him wrong?

I find your construction interesting nonetheless, along with filling 3-space with disjoint unit circles, and the Tarski-Banach theorem.

Q1) I specifically did not use an [itex]x^2[/itex] distribution function to avoid any possibility of Bertrand-Paradox-like confusion: https://en.wikipedia.org/wiki/Bertrand_paradox_(probability). If I used a different setup, where [itex]V^{(0.5,1)}[/itex] was not my initial Vitali set used to create all the others, then I believe we could create a situation where the curvature of the distribution function would not matter. Since Vitali sets are not continuous on any interval, only dense, it may be the case that an [itex]x^2[/itex] distribution function would not disrupt my result even with my given setup, but I don't care to go there.

Q2) What is a "notion of probability?" All I know is that probability is something Kolmogorov (or anyone else) couldn't possibly patent. What are the chances that something will happen? That's all probability is to me, no more, no less.

Q3) I'd start by asking your friend Bob to show his work in deriving that conclusion (knowing full well that you, er, I mean Bob, would flunk out on that question). I'd then remind Bob that the chances of [itex]x[/itex] falling in any given [itex]V^n[/itex] are equal despite being undefined: the sets [itex]V^n[/itex] are rational translates of one another (mod 1), so no one of them can be favored, even though they are non-measurable. We don't know what P(1), P(2), P(3), ... are, but we do know that P(1)+P(2)+P(3)+... = 1 and we do know that P(1)=P(2)=P(3)=...
 
  • #27
AplanisTophet said:
A Consideration of the Fabled Uniform Distribution Over the Naturals

Introduction:

It is helpful to consider a simple observation pertaining to finite sets in order to get started. Let [itex]A = \{1, 2\}[/itex] and [itex]B = \{3, 4, 5, 6\}[/itex]. Let [itex]f(x) = 1[/itex] if [itex]x[/itex] is odd and [itex]f(x) = 2[/itex] if [itex]x[/itex] is even. Then function [itex]f[/itex] is a surjection from [itex]B[/itex] onto [itex]A[/itex] that is “uniform” in the sense that selecting an element [itex]x \in B[/itex] uniformly at random will result in the selection of [itex]f(x) \in A[/itex] uniformly at random as well. Note that in order to do this, we have effectively partitioned [itex]B[/itex] into subsets the same size as [itex]A[/itex] so that we could biject those subsets with [itex]A[/itex]. Likewise, this work shows how we can partition [itex][0, 1)[/itex] into countable sets that are then bijected with the natural numbers. Consideration is then given to whether selecting an element of [itex][0, 1)[/itex] uniformly at random allows for the selection of a natural number uniformly at random as well.
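The finite warm-up in the quoted post is easy to check numerically. A quick sketch (the set names follow the quoted post; the fibers of [itex]f[/itex] having equal size is exactly why the uniform draw pushes forward):

```python
import random
from collections import Counter

A = {1, 2}
B = [3, 4, 5, 6]

def f(x):
    """Map odd elements of B to 1 and even elements to 2."""
    return 1 if x % 2 == 1 else 2

# The fibers of f partition B into equal-size pieces, so a uniform
# draw from B pushes forward to a uniform draw from A.
fibers = {a: [x for x in B if f(x) == a] for a in A}

random.seed(0)
counts = Counter(f(random.choice(B)) for _ in range(10_000))
```

Here `fibers` is `{1: [3, 5], 2: [4, 6]}`, and `counts` splits roughly 50/50 between 1 and 2.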

As opposed to partitioning [itex][ \,0, 1) \,[/itex] as described above using non-measurable Vitali sets, we can instead partition the Cantor ternary set [itex]\mathcal{C}[/itex] into countable subsets* that are then bijected with [itex]\mathbb{N}[/itex]. Unlike the non-measurable Vitali sets used above (see post 24), each subset of [itex]\mathcal{C}[/itex] has a measure of 0 because [itex]\mathcal{C}[/itex] itself has a measure of 0. Selecting an element of [itex]\mathcal{C}[/itex] uniformly at random then results in the selection of a natural number uniformly at random as well. Only this time, the result is that the probability of selecting any given natural number [itex]p[/itex] becomes 0 because the measure of the subset of [itex]\mathcal{C}[/itex] mapping to [itex]p[/itex] is 0.

Then:

1) [itex]P(p = n) = 0[/itex] for all [itex]n \in \mathbb{N}[/itex].
2) Much like selecting a real uniformly at random from [itex][0, 1)[/itex], we still paradoxically get [itex]P(p=1)+P(p=2)+P(p=3)+\cdots = 1[/itex].
3) And finally, [itex]P(p=1)=P(p=2)=P(p=3)=\cdots[/itex].

I believe this is a true uniform distribution over the natural numbers in the classical sense.

* My method of partitioning [itex]\mathcal{C}[/itex] is to partition it into a countable number of uncountable sets, each containing one and only one element from each equivalence class in a collection of equivalence classes that itself partitions [itex]\mathcal{C}[/itex] (similar to how we can partition [itex][0, 1)[/itex] into a countable number of uncountable Vitali sets, each containing one and only one element from each Vitali equivalence class, where the collection of Vitali equivalence classes, restricted to [itex][0, 1)[/itex], also partitions [itex][0, 1)[/itex]).

On this, I definitely want some feedback please and thank you!
 
  • #28
AplanisTophet said:
On this, I definitely want some feedback please and thank you!
You have three statements that I don't think fit well together:
P(p=1)+P(p=2)+P(p=3)+...=1
0 = P(p=1)=P(p=2)=P(p=3)=...
I believe this is a true uniform distribution over the natural numbers in the classical sense.
 
  • #29
Zafa Pi said:
You have three statements that I don't think fit well together:
P(p=1)+P(p=2)+P(p=3)+...=1
0 = P(p=1)=P(p=2)=P(p=3)=...
I believe this is a true uniform distribution over the natural numbers in the classical sense.

Ok, yes. When we are guaranteed to select a natural, it seems as though the sum of the probabilities assigned to each natural must equal 1. That is the "dart board paradox" I referred to above, but you stated it's not paradoxical. I recently watched a Numberphile video where they refer to it as a paradox (see 3:37 in ). Numberphile's conclusion is that we don't have a satisfying answer to this yet.

I suppose it's one of two things. Either the probability of selecting a natural isn't really 0 or the sum is 0. It's standard to say, based on measure theory, that the probability assigned to each natural is 0 if I'm interpreting this correctly. So, we assume that the sum is 0 despite us being 'guaranteed to hit the dart board.' I did say that I don't care to challenge standard math though and I don't. So that's about as far as I can take this.
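That dichotomy is exactly what countable additivity forces. A tiny sketch of my own (not from the thread): if every natural is assigned the same probability [itex]c[/itex], the partial sums either stay at 0 forever or blow past 1 after roughly [itex]1/c[/itex] terms, so no constant [itex]c[/itex] can make them converge to 1.

```python
from fractions import Fraction

def terms_to_exceed_one(c, limit=10_000):
    """Number of terms of the constant series c + c + c + ... needed for
    the partial sums to exceed 1, or None if that never happens within
    `limit` terms (exact arithmetic via Fraction)."""
    total = Fraction(0)
    for n in range(1, limit + 1):
        total += c
        if total > 1:
            return n
    return None
```

With [itex]c = 0[/itex] the partial sums never reach 1; with [itex]c = 1/1000[/itex] they exceed 1 at term 1001.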

Is that agreeable?
 
  • #30
AplanisTophet said:
Ok, yes. When we are guaranteed to select a natural, it seems as though the sum of the probabilities assigned to each natural must equal 1. That is the "dart board paradox" I referred to above, but you stated it's not paradoxical. I recently watched a Numberphile video where they refer to it as a paradox (see 3:37 in ). Numberphile's conclusion is that we don't have a satisfying answer to this yet.

I suppose it's one of two things. Either the probability of selecting a natural isn't really 0 or the sum is 0. It's standard to say, based on measure theory, that the probability assigned to each natural is 0 if I'm interpreting this correctly. So, we assume that the sum is 0 despite us being 'guaranteed to hit the dart board.' I did say that I don't care to challenge standard math though and I don't. So that's about as far as I can take this.

Is that agreeable?

Numberphile made a mess of the dart board paradox. The uniform distribution on [0,1) yields a probability on the σ-algebra S of Lebesgue measurable subsets of [0,1). The probability of a member of S is its measure. The measure of a point is 0, the measure of [0,1) is 1, and the measure is countably additive. This works because of the properties of [0,1), including that the cardinality of [0,1) is greater than that of N.

There is no countably additive measure on N where p(n) = 0 for each n in N and p(N) = 1. There are finitely additive measures on all subsets of N where p(n) = 0 for all n and p(N) = 1. But in that case there are some infinite subsets of N with measure 1 and some with measure 0, which is not what you want.
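Natural density gives a concrete feel for the finitely additive behavior described above (a sketch only, since density is undefined for some subsets of N): every singleton has density 0, all of N has density 1, yet the infinite set of perfect squares also has density 0 while the evens get density 1/2.

```python
import math

def density_up_to(pred, N):
    """Proportion of {1, ..., N} satisfying pred; the limit as N grows,
    when it exists, is the natural density of the set."""
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

# Evens have density 1/2; perfect squares, though infinite, have density 0.
evens = density_up_to(lambda n: n % 2 == 0, 100_000)
squares = density_up_to(lambda n: math.isqrt(n) ** 2 == n, 100_000)
```

At N = 100,000 the evens come out to exactly 0.5, while the squares are already below 0.01 and shrink toward 0 as N grows.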
 

1. What is the Two Envelope Paradox?

The Two Envelope Paradox is a thought experiment that involves two envelopes, one of which contains twice the amount of money as the other. The paradox arises when you are given one of the envelopes and offered the chance to switch to the other envelope. The question is whether or not it is beneficial to switch, as the amount of money in the envelopes is unknown.

2. What is the proposed solution to the Two Envelope Paradox?

The proposed solution to the Two Envelope Paradox is to consider the expected value of the amount of money in the envelopes. By taking the average of the two possible amounts, it is shown that switching envelopes does not guarantee a higher amount of money. Therefore, it is not beneficial to switch envelopes.

3. How does the proposed solution address the paradox?

The proposed solution addresses the paradox by showing that the expected value of the amount of money in the envelopes is the same regardless of whether or not you switch envelopes. This means that there is no advantage to switching and the paradox is resolved.

4. Are there any criticisms of the proposed solution?

Yes, there are criticisms of the proposed solution to the Two Envelope Paradox. Some argue that the expected value calculation does not accurately reflect real-world scenarios and that it is possible to construct a scenario where switching envelopes is beneficial. Others argue that the paradox can be resolved by considering the information provided in the scenario, such as the fact that one envelope contains twice the amount of the other.

5. How is the Two Envelope Paradox relevant to other areas of science?

The Two Envelope Paradox is relevant to other areas of science, particularly in decision-making and probability theory. It highlights the importance of considering all available information and potential outcomes when making decisions. It also raises questions about the limitations of mathematical models and their applicability to real-world scenarios.
