# Question: Proposed Solution to Two Envelope Paradox

Su, Francis, et al. have a short description of the paradox here: https://www.math.hmc.edu/funfacts/ffiles/20001.6-8.shtml

I used that link because it concisely sets forth the paradox both in the basic setting and in the version where the two envelopes contain $(2^k, 2^{k+1})$ with probability $\frac{1}{3}\left(\frac{2}{3}\right)^k$ for each integer $k \geq 0$.

Where the paradox is traditionally formulated by considering one person's odds when choosing to swap an envelope, my question is whether the paradox might be resolved by considering it from both swappers' perspectives instead of just one (i.e., for one person to swap, there must be another person for the original to swap with).

From a single person’s perspective, the paradoxical odds are traditionally given by the equation:

$0.5(0.5x) + 0.5(2x) = 1.25x$

To incorporate a two-person perspective, the equation would be one person's expected gain less their opponent's expected gain, because the opponent's gain comes at their expense. In other words, if you stand a 50/50 shot of losing exactly as much as you stand to gain, there is no longer any incentive to swap envelopes:

$[0.5(0.5x) + 0.5(2x)] - [0.5(0.5x) + 0.5(2x)] = 0$

The result is that neither person improves their odds by swapping. Paradox resolved.
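For the basic setting, the claim that neither side gains can be checked exactly rather than argued. A minimal sketch in Python (the amounts $a$ and $2a$ are hypothetical placeholders), enumerating the two equally likely assignments:

```python
from fractions import Fraction

a = Fraction(1)  # the smaller amount; the envelopes hold a and 2a

# The two equally likely assignments of (your envelope, other envelope).
assignments = [(a, 2 * a), (2 * a, a)]

expected_keep = sum(mine for mine, other in assignments) / 2
expected_switch = sum(other for mine, other in assignments) / 2

# Keeping and switching have the same expectation: 3a/2 each.
print(expected_keep, expected_switch)  # → 3/2 3/2
```

Both perspectives agree, which is the point: with a fixed pair of amounts, a blind swap changes nothing for either player.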

Comments, suggestions, agree, disagree… I'm just fishing here. Thank you!


We can also break it down into the four possible scenarios when swapping where, when ambiguously starting with $x$:
1. 25% of the time you'll lose $0.5x$ to your opponent,
2. 25% of the time you'll gain $0.5x$ from your opponent,
3. 25% of the time you'll lose $x$ to your opponent, and
4. 25% of the time you'll gain $x$ from your opponent.

The same is true for your opponent.

$x - 0.25(0.5x) + 0.25(0.5x) - 0.25(x) + 0.25(x) = x - 0.25(0.5x) + 0.25(0.5x) - 0.25(x) + 0.25(x)$

$x = x$

In addition to what is noticed in the original question, the above shows that there is no incentive to swap.

The same general logic works for all consistent versions of the paradox that prey on the ambiguity of the variable $x$ (see the link in the original question for more insight into that statement).

I assert that the paradox is resolved.
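The four-scenario bookkeeping above can be transcribed directly into an exact check. A minimal sketch in Python (the starting amount $x$ is just a placeholder), assuming each scenario occurs with probability 25% as stated:

```python
from fractions import Fraction

x = Fraction(1)      # ambiguous starting amount
p = Fraction(1, 4)   # each of the four scenarios is equally likely

# Player one's transfer in each scenario; the opponent's is the negation.
transfers = [-x / 2, x / 2, -x, x]

player_one = x + p * sum(transfers)
player_two = x + p * sum(-t for t in transfers)

# Both sides come out to x, matching the x = x conclusion above.
print(player_one, player_two)  # → 1 1
```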

> We can also break it down into the four possible scenarios when swapping ... In addition to what is noticed in the original question, the above shows that there is no incentive to swap. ... I assert that the paradox is resolved.
The two envelope problem does not require two players. You must solve it for one player.

> The two envelope problem does not require two players. You must solve it for one player.

The original player needs someone (or at least something) to swap with. That someone or something has the same odds when swapping as the swapper and one's gain is the other's loss. These two facts are consistent with all versions of the two envelope paradox, are they not? If that is true, then why can't I use those facts in deriving a solution?

> The original player needs someone (or at least something) to swap with. That someone or something has the same odds when swapping as the swapper and one's gain is the other's loss. These two facts are consistent with all versions of the two envelope paradox, are they not? If that is true, then why can't I use those facts in deriving a solution?
If I am offered a $1 bill and there is a $10 bill on the table and Bob asks me if I would like to switch, I would say yes. Now you would point out that the other hypothetical player would lose what I gain, so I shouldn't bother?

> If I am offered a $1 bill and there is a $10 bill on the table and Bob asks me if I would like to switch, I would say yes. Now you would point out that the other hypothetical player would lose what I gain, so I shouldn't bother?

You leave no room for ambiguity as to the value of x (the amount in your and/or Bob's envelopes) in your example. The paradox is built on that ambiguity.

If the only thing you knew was that Bob's 'envelope' contained either 1/10 of yours or 10 times yours, and similarly all Bob knew about your 'envelope' was that it contained either 1/10 of his or 10 times his, then yes, I would say there is no incentive to switch for either player. Note that I am dealing with the closed envelope version of the paradox (which should have been apparent because I never specified values), which states that you and Bob will just keep switching envelopes forever and never actually get around to opening them.

Note that in the open envelope version of the paradox, no more than one swap could take place. Let's just stick to the ambiguous closed envelope version for now though, fair enough?

So, I ask again the same questions that I did above.

> So, I ask again the same questions that I did above.
What you have done with your hypothetical other player (B) is show that A switching doesn't do her any good. But we knew this from the beginning.
To resolve the paradox it is necessary to find the flaw in A's calculation that convinced her to switch. You have not done that.
Over and out.

stevendaryl
> We can also break it down into the four possible scenarios when swapping where, when ambiguously starting with $x$ ... The same general logic works for all consistent versions of the paradox that prey on the ambiguity of the variable $x$ ... I assert that the paradox is resolved.

I don't see how it resolves anything.

Go back to the original problem. Let me make it concrete, using the article's numbers (which work out nicer than the 50/50 probability of the original problem, but are equally paradoxical).

Suppose I do the following:
1. I secretly roll a 6-sided die repeatedly until I get a 1 or a 2. (Typically, it'll take 3 rolls, but it could be more or fewer than that.)
2. I count up how many rolls it takes. Let $N$ be that number.
3. I put $2^N$ dollars into one envelope, and $2^{N-1}$ dollars into the other envelope.
4. I mix up the envelopes thoroughly, and let you choose one (assume you have no way of knowing which envelope is which)
5. You open your envelope and find $x.
6. I allow you to either keep those $x or swap it for the other envelope. Which do you do?
The answer is certainly not that it doesn't make any difference whether you switch or not. For example, what if x=1? Then you know that the other envelope can't contain 1/2 a dollar. So in that case, it's definitely to your advantage to switch. Do you agree with that?

What if, instead, x=2? Then there are two possibilities:
1. N = 1 (in that case, if you switch, you will lose $1)
2. N = 2 (in that case, if you switch, you will gain $2)
[EDIT: I got the numbers wrong, originally] If you work out the posterior probabilities, case 1 has probability 60%, and case 2 has probability 40%. So your expectation value for gain/loss when switching is

40% × $2 - 60% × $1 = +$0.20

So even though you're not certain that switching will help (unlike the case where x=1), you really should switch, to maximize your expectation value. Do you agree with that?

So the question is: For what value of x would you say it doesn't make any difference whether you switch or not?

stevendaryl

> What you have done with your hypothetical other player (B) is show that A switching doesn't do her any good. But we knew this from the beginning. To resolve the paradox it is necessary to find the flaw in A's calculation that convinced her to switch. You have not done that. Over and out.

As I said in another post, suppose the odds for the larger amount are as follows:
1. With probability 1/3, it is $2
2. With probability 2/9, it is $4
3. With probability 4/27, it is $8
4. With probability 8/81, it is $16
5. etc.

Now, if you see $2 in your envelope, then there is a probability of 3/5 that the largest amount is $2 (so switching loses you $1), and a probability of 2/5 that the largest amount is $4 (so switching gains you $2). So the expected loss/gain from switching is 2/5 × $2 - 3/5 × $1 = +$0.20. So if you see $2 in your envelope, then there is absolutely nothing "flawed" about the reasoning that you are better off switching.
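The conditional claim above is easy to check numerically. A Monte Carlo sketch in Python of the die-roll game described earlier (the sample size and seed are arbitrary choices of mine), conditioning on trials where you find $2:

```python
import random

random.seed(0)

gains_seeing_2 = []
for _ in range(200_000):
    # Roll a fair die until a 1 or 2 appears (chance 1/3 per roll); N = roll count.
    n = 1
    while random.random() >= 1 / 3:
        n += 1
    envelopes = [2 ** n, 2 ** (n - 1)]
    random.shuffle(envelopes)
    mine, other = envelopes
    if mine == 2:
        gains_seeing_2.append(other - mine)

frac_smaller = sum(g > 0 for g in gains_seeing_2) / len(gains_seeing_2)
mean_gain = sum(gains_seeing_2) / len(gains_seeing_2)
print(frac_smaller)  # ≈ 0.40: your $2 is the smaller amount about 2/5 of the time
print(mean_gain)     # ≈ 0.20: switching gains about $0.20 on average
```

The empirical posterior and the +$0.20 expectation match the Bayesian calculation above.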

For each value of x, it is provably true that if you see $x in your envelope, you are better off switching. For each fixed x, you can either prove it mathematically, using conditional probability and Bayesian reasoning, or you could run a simulation to demonstrate it. The paradox is: why doesn't this imply that "always switch without even looking at your envelope" is the right thing to do?

> As I said in another post, suppose the odds for the larger amount are as follows ... For each value of x, it is provably true that if you see $x in your envelope, you are better off switching. ...

This is a cute stuffing of the St. Petersburg Paradox into an envelope. You can pick your favorite solution from that link. The solution offered by Peters reminded me of Everett's MWI interpretation.

> So the question is: For what value of x would you say it doesn't make any difference whether you switch or not?

You are addressing the "open envelope" version of the two envelope paradox in your first post. In that case, people know the value of their envelope, so they will swap once at most, which is important to distinguish because...

> The paradox is why this doesn't imply that "Always switch without even looking at your envelope" is the right thing to do?

...in the "closed envelope" version of the paradox people are still incentivized to swap envelopes, only infinitely many times. To quote Su, Francis, et al., cited in the OP: "But if you do switch, a similar argument would instruct you to switch back... and therefore keep switching!... Some of you may object that the prior distribution has infinite mean, but this does not fully resolve the paradox, since in theory if such a distribution exists, one would still have to wrestle with the paradox of continually switching envelopes!"

I address the closed envelope version of the paradox in my first two posts. In other words, why do we derive a result that says we should swap infinitely many times?
I acknowledged a two-person perspective that also preys on the ambiguity of $x$ (something that can't be done in the open envelope version) to draw up a perfectly valid equation given the ambiguity of $x$, which can be generalized to any version of the 'closed envelope' two envelope paradox:

[Person one's odds] = [Person two's odds] = 1.25x

$[0.5(0.5x) + 0.5(2x)] = [0.5(0.5x) + 0.5(2x)] = 1.25x$

$[x - 0.5(0.5x) + 0.5(x)] = [x - 0.5(0.5x) + 0.5(x)] = 1.25x$

The above equations obviously prey on the ambiguity of $x$. If we allow such ambiguity, then we can look at things step-by-step to construct a different equation showing each person's expected outcomes when swapping. We start by asserting that (step 1) when person one loses $0.5x$, then person two gains $0.5x$ (the ambiguous $x$ is the larger of the two values in this case):

[Person one's odds] = [Person two's odds]

$x - 0.5x \ldots = x + 0.5x \ldots$

We continue (step 2) and assert that when person two loses $0.5x$, then person one gains $0.5x$ (the ambiguous $x$ is again the larger of the two values in this case):

$x - 0.5x + 0.5x \ldots = x + 0.5x - 0.5x \ldots$

We continue (step 3) and assert that when person one loses $x$, then person two gains $x$ (the ambiguous $x$ is the lesser of the two values in this case):

$x - 0.5x + 0.5x - x \ldots = x + 0.5x - 0.5x + x \ldots$

We conclude the base equation (step 4) by asserting that when person two loses $x$, then person one gains $x$ (the ambiguous $x$ is the lesser of the two values in this case):

$x - 0.5x + 0.5x - x + x = x + 0.5x - 0.5x + x - x = x$

We now assign probabilities to each of these events. In the basic version of the paradox there is a 25% chance for each step to occur. In other versions we might say that swapping leads to an expected value of $\frac{11}{10}x$.

Notice that this has no impact on the equation (if person one has probability $Y$ of swapping for $+0.5x$, then person two has an equal probability $Y$ of swapping for $-0.5x$, and vice versa):

$x - Y(0.5x) + Y(0.5x) - Y(x) + Y(x) = x + Y(0.5x) - Y(0.5x) + Y(x) - Y(x) = x$

> ...in the "closed envelope" version of the paradox people are still incentivized to swap envelopes, only infinitely many times.

Thank God it isn't more.

stevendaryl

> You are addressing the "open envelope" version of the two envelope paradox in your first post. In that case, people know the value of their envelope so they will swap once at most, which is important to distinguish because...

Okay. I agree that the reasoning leading to blind swapping is clearly fallacious, and it's interesting to see exactly where the mistake lies. But to me, what's more interesting is the combination of facts that:
1. Blind swapping is clearly fallacious: it can't possibly improve your situation.
2. In the open envelope case, we can argue (in the case of a definite probability distribution) that swapping is clearly advantageous.
3. But since no matter what you see in your envelope, you come to the same conclusion, then why not do blind swapping?

We can restate this as the following seeming paradox:
1. There are an infinite number of "situations": Situation 0 = "find $1 in your envelope", Situation 1 = "find $2 in your envelope", ..., Situation N = "find $2^N$ dollars in your envelope".
2. For each situation, it is provably to your advantage to switch.
3. But if you have no idea which situation you're in (closed envelope case), there is no advantage to switching.

If you always end up switching, then how can it help you to look in your envelope?
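The second point of the restated paradox can be made exact. A sketch in Python using exact arithmetic, assuming the distribution discussed above (the smaller amount is $2^k$ with probability $(1/3)(2/3)^k$) and the 3/5 vs. 2/5 posterior computed earlier in the thread:

```python
from fractions import Fraction

def expected_switch_gain(n):
    """Exact E[gain | you see $2^n], with P(smaller = 2^k) = (1/3)(2/3)^k."""
    if n == 0:
        return Fraction(1)  # $1 is certainly the smaller amount: switching gains $1
    p_smaller = Fraction(2, 5)  # posterior that your 2^n is the smaller amount
    p_larger = Fraction(3, 5)   # posterior that your 2^n is the larger amount
    return p_smaller * 2 ** n - p_larger * 2 ** (n - 1)

for n in range(5):
    print(n, expected_switch_gain(n))  # gains 1, 1/5, 2/5, 4/5, 8/5: all positive
```

Every situation favors switching, yet blind switching cannot help, which is exactly the tension restated above.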

AplanisTophet
> We can restate this as the following seeming paradox:
> 1. There are an infinite number of "situations": Situation 0 = "find $1 in your envelope", Situation 1 = "find $2 in your envelope", ..., Situation N = "find $2^N$ dollars in your envelope".
> 2. For each situation, it is provably to your advantage to switch.
> 3. But if you have no idea which situation you're in (closed envelope case), there is no advantage to switching.
> If you always end up switching, then how can it help you to look in your envelope?

I was considering going on to address the open envelope version of the paradox as well... Before I contemplate that, we have to agree that the math, when applied properly, suggests that there is no incentive to swap in the closed envelope version of the paradox, as opposed to just taking that as obvious. You've asserted twice that there is no incentive to swap in the closed envelope version, but the math says otherwise, in that the traditional formula yields an expected value of 1.25x when swapping regardless of what is in your envelope (barring my above restatement of the traditional formula, or others' attempts, to my knowledge none of which are accepted in general).

So, if you accept my restatement of the traditional formula (post 11 probably being the clearest explanation), then perhaps I'll toy with the open envelope version, but if my restatement is not readily acceptable then I see no point in trying to move on yet. For the record, I don't expect you or anyone to readily accept my restatement. It's only a proposed solution and I was looking for feedback on it.

> What you have done with your hypothetical other player (B) is show that A switching doesn't do her any good. But we knew this from the beginning. To resolve the paradox it is necessary to find the flaw in A's calculation that convinced her to switch. You have not done that. Over and out.

That's interesting, because B's expected value when swapping is the same as A's. So in saying that B doesn't benefit according to the math but A does (I think that's what you mean; if not, that A sees no benefit in swapping according to the math but her calculation says she does...?), you are saying we have math that indicates B doesn't benefit, but A does, even though their expected values are equal? Very confusing as to what you mean...

Also, how did we know [what] from the beginning? I didn't assume anything to begin with personally. My intuition almost always lets me down given the type of math that I enjoy (paradoxes, infinite sets, etc.).

When you say "[t]o resolve the paradox it is necessary to find the flaw in A's calculation that convinced her to switch. You have not done that": fair enough, but can you shed some light on why my rationale (that the traditional formula for the expected value of 1.25x is incomplete) does not suffice in demonstrating that A's calculation is flawed (post 11 being, I think, my clearest exposition)?

> The paradox is why this doesn't imply that "Always switch without even looking at your envelope" is the right thing to do?

I'm not sure there is a paradox here. If Alice looks in her envelope and then switches, she doesn't then get to switch back. So if she doesn't look and switches, are you allowing her to then switch again? If so then there is a paradox: she will keep switching forever, or until her wavering function collapses, whichever comes first. However, if she can only switch once, where is the paradox? Bob (at the other envelope) will also switch, and if they both looked, then the sum of their expectations is more than is in the two envelopes, but so what. That's somewhat unintuitive, but not a paradox. All of this is a consequence of the infinite mean of the distribution.

> All of this is a consequence of the infinite mean of the distribution.
To quote again Su, Francis, et al., cited in the OP: "Some of you may object that the prior distribution has infinite mean, but this does not fully resolve the paradox, since in theory if such a distribution exists, one would still have to wrestle with the paradox of continually switching envelopes!"

While there is no uniform cumulative distribution function over $\mathbb{N}$, it is nevertheless possible to devise a theoretical method by which an element of $\mathbb{N}$ is selected randomly and all elements of $\mathbb{N}$ have an equal (however undefined) probability of being selected. That is to say, no one positive integer is favored over any other in terms of its chances of being selected. The method is to partition $[0, 1)$ into a countable number of Vitali sets $V^1, V^2, V^3, \ldots$, select an element $x$ of $[0, 1)$ uniformly at random, and then relate $x$ to a natural number by asserting that if $x \in V^n$, then $f(x) = n$. So, there is your distribution. I believe it doesn't have infinite mean though, but rather an undefined mean.

> While there is no uniform cumulative distribution function over $\mathbb{N}$, it is nevertheless possible to devise a theoretical method ... So, there is your distribution. I believe it doesn't have infinite mean though, but rather an undefined mean.

While this has little to do with the OP's problem, it is interesting.
We could let $V$ be a Vitali set and let $V^k = V + q_k$ (where $q_k$ is the $k$th rational), so all the $V^k$ look alike, if this helps. And then have $x$ uniform over the rationals (which is impossible as well).

"So there is your distribution." ??? I don't see any distribution.

"That is to say, no one positive integer is favored over any other in terms of its chances of being selected." How do you find the chances?

"all elements of $\mathbb{N}$ have an equal (however undefined) probability of being selected." Sounds like the Jabberwocky.

FactChecker

> From a single person's perspective, the paradoxical odds are traditionally given by the equation:
>
> $0.5(0.5x) + 0.5(2x) = 1.25x$

There is no fixed situation and fixed value of x for which this equation is true.

stevendaryl

> There is no fixed situation and fixed value of x for which this equation is true.

Yes, that's why I prefer the concrete probability distribution (for the smaller of the two amounts):
1. P($1) = 1/3
2. P($2) = 2/9
3. P($4) = 4/27
4. P($2^n$) = $\frac{1}{3}\left(\frac{2}{3}\right)^n$

For that distribution, if you see $2^n$ dollars in your envelope, then there are two possibilities:
1. With probability 3/5, that's the higher of the two amounts.
2. With probability 2/5, that's the lower of the two amounts.
There is no way to generate amounts so that the likelihood is 50/50 either way. But the paradox doesn't depend on it being 50/50.

> "That is to say, no one positive integer is favored over any other in terms of its chances of being selected." How do you find the chances?

In my example (post 17), I was very clear in asserting that the probability of selecting any given natural is undefined. This is because Vitali sets are non-measurable sets. That is why we are left with no cumulative distribution function despite being able to accurately claim that the method results in the random selection of a natural from all of $\mathbb{N}$ with no single natural being favored over any other in terms of its probability of being selected. This is common knowledge though (not Jabberwocky...?).

Then again, we assert when selecting our real uniformly at random from [0, 1) that each real has a 0 probability of being selected, but that the sum of these probabilities equates to one (the old "we're sure to hit the dart board but each real number on the dart board has a 0 probability of being struck" paradox). Yeah, sounds like Jabberwocky to me, but it's nevertheless standard math that I don't care to challenge.

> There is no way to generate amounts so that the likelihood is 50/50 either way. But the paradox doesn't depend on it being 50/50.

Exactly (well, barring using the axiom of choice as demonstrated in my post 17 which isn't a direct construction...). Since you definitely get it, and since we then still "prey on the ambiguity of x" in these cases, any thoughts on my post #11?

> In my example (post 17), I was very clear in asserting that the probability of selecting any given natural is undefined. This is because Vitali sets are non-measurable sets. That is why we are left with no cumulative distribution function despite being able to accurately claim that the method results in the random selection of a natural from all of $\mathbb{N}$ with no single natural being favored over any other in terms of its probability of being selected. This is common knowledge though (not Jabberwocky...?).
You chose $x$ from $[0, 1)$ with distribution $F(x) = x$ (uniform). Would it have made a difference if the distribution function was $G(x) = x^2$?

"in terms of its probability of being selected. " What notion of probability are you using here?

My friend Bob just told me that 7 was more favored to be selected than 4. How do I prove him wrong?

I find your construction interesting nonetheless, along with filling 3-space with disjoint unit circles, and the Banach-Tarski theorem.
> Then again, we assert when selecting our real uniformly at random from [0, 1) that each real has a 0 probability of being selected, but that the sum of these probabilities equates to one (the old "we're sure to hit the dart board but each real number on the dart board has a 0 probability of being struck" paradox). Yeah, sounds like Jabberwocky to me, but it's nevertheless standard math that I don't care to challenge.
As you well know, for one familiar with Lebesgue theory there is no paradox.

> You chose $x$ from $[0, 1)$ with distribution $F(x) = x$ (uniform). Would it have made a difference if the distribution function was $G(x) = x^2$? ... My friend Bob just told me that 7 was more favored to be selected than 4. How do I prove him wrong? ... I find your construction interesting nonetheless, along with filling 3-space with disjoint unit circles, and the Banach-Tarski theorem. ... As you well know, for one familiar with Lebesgue theory there is no paradox.

I'm glad you find this stuff interesting. If you want to pick at it a little more, then consider my original and independent paper. I think that will answer some of your questions.

A Consideration of the Fabled Uniform Distribution Over the Naturals

Introduction:

It is helpful to consider a simple observation pertaining to finite sets in order to get started. Let $A = \{1, 2\}$ and $B = \{3, 4, 5, 6\}$. Let $f(x) = 1$ if $x$ is odd and $f(x) = 2$ if $x$ is even. Then, function $f$ is a surjection from $B$ onto $A$ that is "uniform" in the sense that selecting an element $x \in B$ uniformly at random will result in the selection of $f(x) \in A$ uniformly at random as well. Note that in order to do this, we have effectively partitioned $B$ into subsets the same size as $A$ so that we could biject those subsets with $A$. Likewise, this work shows how we can partition $[0, 1)$ into countable sets that are then bijected with the natural numbers. Consideration is then given to whether selecting an element of $[0, 1)$ uniformly at random allows for the selection of a natural number uniformly at random as well.
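The finite observation can be spelled out in a few lines. A sketch in Python of the surjection $f : B \to A$ above, checking the "uniform" property by counting preimages:

```python
from collections import Counter

A = {1, 2}
B = {3, 4, 5, 6}

def f(x):
    # f maps odds to 1 and evens to 2, as in the text
    return 1 if x % 2 == 1 else 2

preimage_sizes = Counter(f(x) for x in B)
# Each element of A has the same number of preimages in B,
# so a uniform pick from B induces a uniform pick from A.
print(preimage_sizes[1], preimage_sizes[2])  # → 2 2
```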

Definitions:

Let $V^{(0.5, 1)}$ be a set containing one and only one element from each Vitali equivalence class on the interval $(0.5, 1)$ (Vitali equivalence classes are equivalence classes of the real numbers that partition $\mathbb{R}$ under the relation $x \equiv y \iff (\exists q \in \mathbb{Q})(x - y = q)$). The axiom of choice allows for such a selection.

For any real number $r$, let $d(r)$ equal the one and only one element $v \in V^{(0.5, 1)}$ such that $r - v \in \mathbb{Q}$.

Let $h : \mathbb{N} \to \mathbb{Q} \cap [0, 1)$ be bijective.

Let $k : [0, 1) \to \mathbb{N}$ be surjective:

$k(x) = \begin{cases} h^{-1}(x - d(x) + 0.5) & x \geq d(x) - 0.5 \\ h^{-1}((x + 1) - d(x) + 0.5) & x < d(x) - 0.5 \end{cases}$
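As a sanity check (my own verification, not part of the original write-up), both branches of $k$ feed $h^{-1}$ a rational in $[0, 1)$, so $k(x)$ is always a well-defined natural number:

```latex
\text{Since } d(x) \in (0.5, 1) \text{ and } x - d(x) \in \mathbb{Q}:
\begin{align*}
x \geq d(x) - 0.5 &\implies x - d(x) + 0.5 \in \mathbb{Q} \cap [0, 1), \\
x < d(x) - 0.5 &\implies (x + 1) - d(x) + 0.5 \in \mathbb{Q} \cap (0.5, 1).
\end{align*}
```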

Let $V^n = \{ x \in [0, 1) : k(x) = n \}$ for each $n \in \mathbb{N}$. We then have $V^{(0.5, 1)} = V^{h^{-1}(0.5)}$, for example. Each $V^{n}$ will be a Vitali set by definition, with the collection $\{ V^{n} : n \in \mathbb{N} \}$ forming a partition of $[0, 1)$.

Let $x$ be an element of $[0, 1)$ selected uniformly at random.

Let $p = k(x)$. By definition, $p$ has now been selected uniformly from $\mathbb{N}$.

A uniform distribution is a concept of translation invariance. For example, if $S$ is a measurable set, we may want the probability of $S$ to be the same as the probability of $\{y : y = z + n, z \in S \}$ for each natural number $n$. In the case of the function $k$ over the domain $[0, 1)$, however, we end up mapping each element of each non-measurable Vitali set $V^{n}$ to a distinct natural number $n$. Where $a, b \in \mathbb{N}$, it is easy to see that the probability of $x$ falling within $V^{a}$ is equal to the probability of $x$ falling in $V^{b}$ (thus the probability that $p = a$ equals the probability that $p = b$), but we cannot rely on Lebesgue measure as a means of establishing probability or creating any sort of cumulative distribution function on $\mathbb{N}$. The probability of selecting any given natural remains undefined.

B.J.K. April 19, 2017

> easy to see that the probability of $x$ falling within $V^{a}$ is equal to the probability of $x$ falling in $V^{b}$
Sorry, Kolmogorov patented "probability", you can't use it here.
> thus the probability that p = a equals the probability that p = b

and

> The probability of selecting any given natural remains undefined.

The two lines go together well.

You still didn't answer my three questions in post #23.