# I Selecting a Natural and a Real Uniformly at Random

1. Apr 16, 2017

### AplanisTophet

This work involves partitioning $[ \, 0, 1 ] \,$ into uncountably many subsets, using the axiom of choice to select a single element from each subset, and then defining a bijection from $\mathbb{N}$ onto each subset using that selected element as a reference. The framework allows for proofs of two statements.

First Statement to Prove:

Given

1) the axiom of choice,

2) a randomly selected infinite binary sequence $S = s_1, s_2, s_3, \ldots$ (created via the theoretical flipping of a coin infinitely many times, where H = 1 and T = 0), and

3) a definition of "select an element of an infinite set uniformly at random" that simply means, in layman's terms, that one and only one element of an infinite set will be selected using a process where all elements of the set have an equal chance of being selected,

then it is possible to select a natural number uniformly at random.
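For intuition, the coin-flip construction in (2) can be sketched in code. Any program can only produce a finite truncation of $S$, so this is purely illustrative (the argument itself needs all infinitely many flips), and the function names below are mine rather than part of the proof:

```python
import random

def coin_flip_sequence(num_bits, seed=None):
    """Simulate finitely many fair coin flips (H = 1, T = 0).

    The argument requires infinitely many flips; a program can only
    produce a finite truncation, so this is an illustration only.
    """
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(num_bits)]

def to_real(bits):
    """Interpret a bit list as the binary expansion 0.s1 s2 s3 ..."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

bits = coin_flip_sequence(16, seed=42)
x = to_real(bits)
assert 0.0 <= x <= 1.0
```

Truncating after finitely many flips only pins $x$ down to a dyadic interval, which is exactly why the infinite sequence is needed in the argument.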

Definitions:

Let $V$ be a set containing one and only one element from each Vitali equivalence class on the interval $[ \, 0.5, 1 ] \,$ (Vitali equivalence classes are equivalence classes of the real numbers that partition $\mathbb{R}$ under the relation $x \equiv y \iff ( \, \exists q \in \mathbb{Q} ) \, ( \, x - y = q ) \,$). The axiom of choice allows for such a selection.

Let $D = \{ r \in [ \, 0, 1 ) \, : r$ is a dyadic rational $\}$ (dyadic rationals are rational numbers whose denominator is a power of two and, as a result, whose binary expansion is finite).
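As a quick illustration of this definition (a sketch of mine, not part of the proof), a rational is dyadic exactly when its lowest-terms denominator is a power of two:

```python
from fractions import Fraction

def is_dyadic(r):
    """True iff r, in lowest terms, has a power-of-two denominator,
    i.e. a finite binary expansion."""
    d = Fraction(r).denominator  # Fraction reduces to lowest terms
    return d & (d - 1) == 0      # a power of two has a single bit set

assert is_dyadic(Fraction(3, 8))      # 0.011 in binary
assert not is_dyadic(Fraction(1, 3))  # 0.0101... repeats forever
```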

Let $E = \{ r \in \mathbb{Q} ( \, 0, 1 ) \, : r \notin D \}$.

Let $f : \mathbb{N} \longmapsto D$ (here $\longmapsto$ denotes a bijection).

Let $g : \mathbb{N} \longmapsto E$.

Let $h : \mathbb{N} \longmapsto \mathbb{Q} [ \, 0, 1 ) \,$.
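The proof only asserts that bijections like $h$ exist. One concrete (and entirely standard) enumeration of $\mathbb{Q} [ \, 0, 1 ) \,$, included as an illustrative sketch rather than the poster's own construction, walks the denominators in order:

```python
from fractions import Fraction
from math import gcd

def h(n):
    """One concrete bijection from {0, 1, 2, ...} onto the rationals
    in [0, 1): walk denominators q = 1, 2, 3, ... and, for each q,
    the numerators p in [0, q) with gcd(p, q) == 1.  Every such
    rational appears exactly once, in lowest terms."""
    q = 1
    while True:
        numerators = [p for p in range(q) if gcd(p, q) == 1]
        if n < len(numerators):
            return Fraction(numerators[n], q)
        n -= len(numerators)
        q += 1

# First few values: 0, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, ...
```

Analogous enumerations give $f$ (restrict to power-of-two denominators) and $g$ (exclude them).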

Let $x$ be a real number in $[ \, 0, 1 ] \,$ selected at random via the binary sequence from (2) above such that $x = 0.s_1s_2s_3\ldots$.

Let $n \in \mathbb{N}$ denote the natural number that is selected uniformly at random as a result of the following process.

Process for Selecting $n$ Uniformly at Random:

There are three possibilities for $x$. It may be a dyadic rational, a non-dyadic rational, or an irrational.

1) If $x \in D$ or $x = 1$:

Each dyadic rational not equal to 0 or 1 has two binary expansions: one finite and one infinite (e.g., in binary, $0.1 = 0.0111\ldots$). The randomly selected number $x$ may take the form of either expansion. Let $n$ be such that $f(n) = x$ if $0 < x < 1$ and $f(n) = 0$ if $x = 0$ or $x = 1$. At this point, there are two and only two possible ways of selecting each possible natural number.

2) If $x \in E$:

Let $n$ be such that $g(n) = x$. At this point, there are now three and only three ways of selecting each possible natural number.

3) If $x \in [ \, 0, 1 ] \, \setminus \mathbb{Q}$:

There will exist one and only one element $v \in V$ such that $v - x \in \mathbb{Q}$. For each rational $q$ on the interval $[ \, 0, 1 ) \,$, there will be one and only one corresponding irrational $i$ on the interval $[ \, v - 0.5, v + 0.5 ) \,$ such that $q - 0.5 = i - v$.

There are two possibilities for $x$. It will either fall in the interval $[ \, v - 0.5, 1 ) \,$ or in the interval $( \, 0, v - 0.5 ) \,$. For each irrational $y$ on the interval $( \, 0, v - 0.5 ) \,$, there will be one and only one irrational $z$ on the interval $( \, 1, v + 0.5 ) \,$ where $y + 1 = z$. In this fashion, we can biject all possible values for $x$ on the interval $[ \, 0, 1 ) \,$ that are within the Vitali equivalence class containing $v$ with all possible irrationals on the interval $[ \, v - 0.5, v + 0.5 ) \,$ that are also in the Vitali equivalence class containing $v$ (i.e., if $x \in [ \, v - 0.5, 1 ) \,$, then utilize $x$ itself whereas, if $x < v - 0.5$, utilize $x + 1$ instead). Select a suitable $n$ as follows:

If $x \in [ \, v - 0.5, 1 ) \,$, let $n$ be such that $h(n) - 0.5 = x - v$.

If $x \in ( \, 0, v - 0.5 ) \,$, let $n$ be such that $h(n) - 0.5 = (x + 1) - v$.

At this point, all possible values for $x$ will lead to the selection of a distinct value for $n$ and all $n \in \mathbb{N}$ have an equal chance of being selected.

End proof.

Second Statement to Prove:

It is possible to select an element of $\mathbb{R}$ uniformly at random given the first proof above.

Definitions:

Let $j : \mathbb{N} \longmapsto \mathbb{Z}$.

Let $n$ be selected uniformly at random via the above process.

Let $x$ be a real number in $[ \, 0, 1 ) \,$ selected uniformly at random. Given the selection process used in the first proof, this requires two additional steps. First, since each dyadic rational has two binary expansions, each of the expansions must be related to a distinct and separate real number (this is trivial in that the dyadics may be listed along with their possible expansions, allowing for a mapping by indexes). Second, it is important that $x \neq 1$, which is again accomplished trivially by bijecting $[ \, 0, 1 ] \,$ with $[ \, 0, 1 ) \,$.
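The "bijecting $[ \, 0, 1 ] \,$ with $[ \, 0, 1 ) \,$" step is a standard Hilbert-hotel argument. A minimal sketch (the choice of shifted chain is mine, not something the proof specifies):

```python
def squeeze(x):
    """A bijection from [0, 1] onto [0, 1): shift the chain
    1, 1/2, 1/4, 1/8, ... one step down (1 -> 1/2, 1/2 -> 1/4, ...)
    and leave every other point fixed.  With exact reals the chain
    is infinite; floats bottom out, so we stop once p is negligible."""
    p = 1.0
    while p >= 1e-12:
        if x == p:
            return p / 2
        p /= 2
    return x
```

Shifting a countably infinite chain absorbs the one extra point, so no two inputs collide and 1 is no longer in the image.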

Let $r \in \mathbb{R}$ denote the real number that is selected uniformly at random as a result of the following process.

Process for Selecting $r$ Uniformly at Random:

$r = j(n) + x$.

End proof.
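For concreteness (the proof leaves $j$ unspecified), the usual zig-zag enumeration of $\mathbb{Z}$ works, and then $r = j(n) + x$ lands in the unit interval starting at the integer $j(n)$:

```python
def j(n):
    """One concrete bijection from {0, 1, 2, ...} onto the integers,
    enumerated as 0, 1, -1, 2, -2, 3, -3, ..."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

# With n = 5 and x = 0.637, r = j(5) + 0.637 = 3.637,
# a point of the unit interval [3, 4).
```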

2. Apr 17, 2017

### AplanisTophet

I assume there is either something wrong or some form of clarification should accompany the above, because it appears to contradict the basic idea that there is no uniform distribution over $\mathbb{N}$. I understand that a uniform distribution over $\mathbb{N}$ is considered impossible because the probability $f(n)$ assigned to each natural $n$ would have to be both positive and equal to the probability assigned to every other natural, so $\sum_{n \in \mathbb{N}} f(n) = \infty$ rather than 1.
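That obstruction can be made concrete with a small numerical sketch (mine, not the poster's): assigning the same constant mass $c$ to every natural means the partial sums either stay at 0 (when $c = 0$) or eventually exceed 1 (when $c > 0$), so they can never total exactly 1.

```python
def partial_sum(c, terms):
    """Total mass after assigning the constant c to each of the
    first `terms` naturals."""
    return c * terms

# However small a positive constant we pick, enough terms exceed 1,
# so the grand total cannot be 1; with c = 0 the total stays 0.
c = 1e-9
assert partial_sum(c, 10**10) > 1
assert partial_sum(0, 10**10) == 0
```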

Nevertheless, I believe the above is very clean, concise, and easy to understand, so I would like some feedback. Thank you!

3. Apr 18, 2017

### AplanisTophet

I've been told the argument in the OP here is not very clear, so I should do a better job to get some feedback. Can we start with this? It's much better:

Let the infinite binary sequence $S = s_1, s_2, s_3, …$ be selected uniformly at random from the set of all infinite binary sequences. My suggested method is to assert the existence of a function $b$ that, given an integer input, returns an element uniformly at random from the sample space $\Omega = \{0, 1\}$. Then, $S = b(1), b(2), b(3), …$.

Let $x = 0.s_1s_2s_3…$. By definition, $x$ has now been randomly selected from $\Omega = [ \,0, 1 ]\,$.

Using $x$, the goal is to try and select a natural number uniformly at random, so let $k(x) = n$.

We now note that there is one and only one element $v$ in our Vitali set $V$ such that $v - x \in \mathbb{Q}$. Remember that every element of $V$ falls in the interval $[ \,0.5, 1 ] \,$, as was specified in the OP.

Also remember that function $h$ from the OP is a bijection from $\mathbb{N}$ onto $\mathbb{Q}[ \,0, 1) \,$, so its inverse $h^{-1}$ is a bijection from $\mathbb{Q}[ \,0, 1) \,$ onto $\mathbb{N}$.

Assuming $x$ is irrational, we can assert that:

If $x < v - 0.5$, then $k(x) = h^{-1} ( \,(x + 1) - v + 0.5) \,$.

If $x \geq v - 0.5$, then $k(x) = h^{-1} ( \,x - v + 0.5) \,$.

If we simply let the domain of function $k$ be the irrationals on the interval $[ \,0, 1] \,$, then function $k$ is a surjection from those irrationals onto $\mathbb{N}$ that is uniform. This is because the Vitali equivalence classes partition $\mathbb{R}$ into countable subsets and the intersection of each Vitali equivalence class and the interval $[ \,0, 1] \,$ is bijected back onto $\mathbb{N}$. Accordingly, if $x$ were restricted to just the irrationals on the interval $[ \,0, 1] \,$, then we would be done because $k(x) = n$ has been selected uniformly at random.

Make sense so far? Please comment!

4. Apr 18, 2017

### Stephen Tashi

There would be no contradiction in having a "uniform" probability measure that is defined on $\mathbb{N}$ and assigns each natural number a probability zero of being realized. People, including myself, have a double standard when considering random sampling. On the one hand, we are willing to consider problems that involve picking a real number $r$ from a uniform distribution on [0,1] (which is an event with probability zero), but on the other hand, we baulk when someone proposes a problem where we "pick a natural number at random" and insist that they show us a distribution on $\mathbb{N}$ where each number has a non-zero probability of being chosen.

Putting aside the unfair objection to having events of probability zero, the difficulty of defining a uniform distribution on the natural numbers is to define and implement the ideas of "distribution" and "uniform". The idea of a "distribution" is more specific than the notion of a "probability measure". A distribution on $\mathbb{N}$ is a probability measure that is specified by a cumulative distribution function $F(n)$. It assigns probability $F(n+k) - F(n)$ to the set $\{n+1, n+2, \ldots, n+k\}$.

The concept of "uniform" needs to go beyond the thought that each individual "point" in the probability space has the same probability - for example, a gaussian distribution gives each point the same probability - namely zero. What we need for "uniform" distribution is a concept of translation invariance. If $S$ is a measurable set, we want the probability of $S$ to be the same as the probability of $\{ y: y = x + k, x \in S\}$ for each natural number $k$.

If you are claiming to have constructed a uniform probability distribution on the natural numbers, you need to specify the distribution function $F$ and show the probability it assigns to sets is translation invariant.
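To illustrate the test being asked for (a sketch of mine, not from the thread): any genuine distribution on $\mathbb{N}$ that assigns positive mass to each natural, e.g. a geometric one, visibly fails translation invariance, since shifting a set changes its probability.

```python
def geometric_mass(s, p=0.5):
    """Probability that a geometric(p) variable on {0, 1, 2, ...}
    lands in the finite set s: sum of p * (1 - p)**n over n in s."""
    return sum(p * (1 - p) ** n for n in s)

s = {0, 1, 2}
shifted = {n + 5 for n in s}            # translate s by k = 5
assert abs(geometric_mass(s) - 0.875) < 1e-12
assert geometric_mass(shifted) < geometric_mass(s)   # not invariant
```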

In the original post, it isn't clear what it means to be "given" a randomly selected infinite binary sequence. You probably intend that we are given more than one single such binary sequence. I think you are assuming we use the "coin toss" measure ( https://ocw.mit.edu/courses/electri...all-2008/lecture-notes/MIT6_436JF08_lec02.pdf ) on the set of infinite binary sequences.

Your definition of "uniform" does not agree with the usual definition of a uniform probability measure. According to your definition, a gaussian distribution would be uniform.

5. Apr 18, 2017

### AplanisTophet

Thank you very much for your response. I have considered it thoroughly.

Using the above equations, each natural number $n$ has its own unique non-measurable Vitali set $V^n$ such that, if $x$ falls in $V^n$, then $k(x) = n$. E.g., $V$ as defined above (the initial index Vitali set) is such that $V = V^{h^{-1}(0.5)}$: whenever $x \in V$, we have $x - v = 0$, so $k(x) = h^{-1}(0.5) = n$.

Given that the reals are partitioned into non-measurable sets such that every element of each non-measurable set $V^n$ will map to a particular $n$, you should see why your suggestion does not apply (or perhaps cannot be applied) to my model, yes?

6. Apr 18, 2017

### Stephen Tashi

I don't know because I don't see that you have defined any probability measure on the natural numbers yet - much less a probability distribution on them. You aren't dealing with probability theory. Your arguments rely on an intuitive notion that a selection process that is symmetrical (in some sense) is therefore "uniform".

I don't even see that your selection process is symmetrical. For example, in the OP, using the uniform distribution $\mu$ on [0,1], we have $\mu(E) = 0$ and $\mu(D) = 0$, so that gives probability 1 to case 3). Do you have some argument that there is symmetry in the way a natural number is selected, given that cases 1), 2), 3) don't have symmetry in their probability of occurrence?

As I understand your general idea, you wish to employ a uniform distribution on the reals ( in your case a probability measure defined by the "coin-toss" measure) to generate a uniform distribution on the natural numbers. You accomplish this by a series of mappings from the real numbers to other sets, and finally to the natural numbers. The procedure for picking a natural number "at random" is to pick a real number r at random from a uniform distribution on the reals, then map this real number through the chain of mappings to natural number n.

For this to work, you need a theorem like: "If $\mu_S$ is a probability measure defined on the set $S$ and $f$ is a function such that $f(S) = T$, then the function $\mu_T$ defined on subsets of $T$ by $\mu_T(t) = \mu_S( f^{-1}( t))$ is a probability measure on $T$."

That type of theorem is true (by definition) for "measurable functions" $f$, but not for arbitrary functions. So what you need to show is that each function in your chain of mappings is a measurable function.

-------------

A valid reason that a distribution on the natural numbers can't assign probability zero to each number (which I didn't state in the previous post) is that a probability measure is required to set the probability of the union of a countable number of disjoint measurable sets equal to the sum of their individual probabilities. So if we assign probability zero to each individual natural number, the probability of the entire set of natural numbers would be zero rather than 1, because the set can be expressed as the countable union of the individual numbers and each term in the sum of the associated probabilities is zero.

The definition of "probability measure" focuses on countable unions. In contrast to the case of the natural numbers, a gaussian probability distribution can assign probability zero to each real number and not run afoul of the properties of a probability measure because the set of real numbers isn't expressible as a countable disjoint union of sets of single real numbers.

At some point in your chain of mappings, you are mapping a domain whose elements are each uncountable sets of real numbers to a co-domain whose elements are each a single natural number. In the measure $\mu_S$ defined on the domain of this function, it is permissible that the probability of each of the uncountable sets of numbers be zero, provided there are an uncountable number of these sets. But in the co-domain of the function, it is not permissible to assign each natural number a probability of zero.

7. Apr 18, 2017

### AplanisTophet

That was very helpful. Also, as a novice here, I see how my notation was confusing, how I took some unnecessary additional steps, etc. Thank you for wading through it. I believe I understand now. To sum it up in my own words:

Since Vitali sets are not measurable, and the set of irrationals mapped to each specific natural in my model takes the form of a Vitali set, the probability assignable to each natural is undefined, as one might have expected. This is perhaps a backwards way of demonstrating why Vitali sets are not measurable (if they were, we'd be able to assign a probability to each natural here). That is why I included the layman's definition of "uniformly at random" in the OP.

Is it possible that the infinite sum of the undefinable probabilities assigned to each natural using this (or a cleaner version of this) model must still sum to 1? To me that's taking the impossible, like the square root of $-1$ or somehow showing $-1/12$ is a meaningful answer to an infinite sum, and trying to give it a purpose. Perhaps more so, because it's fairly clear that each natural should have a uniform chance of being selected and the sum of those chances should be 1, even if that uniform chance is not definable or expressible as a real number.

8. Apr 18, 2017

### AplanisTophet

P.S. Page 10 of that link demonstrates exactly what I was trying to do. Nice.