Selecting a Natural and a Real Uniformly at Random

In summary, this conversation discusses a framework for proving two statements. The first statement uses the axiom of choice, a randomly generated infinite binary sequence, and a layman's definition of selecting an element of an infinite set uniformly at random to select a natural number uniformly at random. The second statement uses a similar process to select a real number uniformly at random. The conversation also includes definitions for the various sets and functions involved.
  • #1
AplanisTophet
This work involves partitioning [itex][ \, 0, 1 ] \,[/itex] into an uncountable number of subsets, using choice to select a single element from each subset, and then defining a bijection from [itex]\mathbb{N}[/itex] onto each subset using that selected element as a reference. The framework allows for proof of two statements.

First Statement to Prove:

Given

1) the axiom of choice,

2) a randomly selected infinite binary sequence [itex]S = s_1, s_2, s_3, \ldots[/itex] (created via the theoretical flipping of a coin infinitely many times where H = 1 and T = 0), and

3) a definition of "select an element of an infinite set uniformly at random" that simply means, in layman's terms, that one and only one element of an infinite set will be selected using a process where all elements of the set have an equal chance of being selected,

then it is possible to select a natural number uniformly at random.
Definitions:

Let [itex]V[/itex] be a set containing one and only one element from each Vitali equivalence class on the interval [itex][ \, 0.5, 1 ] \,[/itex] (Vitali equivalence classes are equivalence classes of the real numbers that partition [itex]\mathbb{R}[/itex] under the relation [itex]x \equiv y \iff ( \, \exists q \in \mathbb{Q} ) \, ( \, x - y = q ) \,[/itex]). The axiom of choice allows for such a selection.

Let [itex]D = \{ r \in [ \, 0, 1 ) \, : r[/itex] is a dyadic rational [itex]\}[/itex] (dyadic rationals are rational numbers whose denominator is a power of two and, as a result, whose binary expansion is finite).

Let [itex]E = \{ r \in \mathbb{Q} ( \, 0, 1 ) \, : r \notin D \}[/itex].

Let [itex]f : \mathbb{N} \longmapsto D[/itex] (here [itex]\longmapsto[/itex] denotes a bijection).

Let [itex]g : \mathbb{N} \longmapsto E[/itex].

Let [itex]h : \mathbb{N} \longmapsto \mathbb{Q} [ \, 0, 1 ) \,[/itex].

Let [itex]x[/itex] be a real number in [itex][ \, 0, 1 ] \,[/itex] selected at random via the binary sequence from (2) above such that [itex]x = 0.s_1s_2s_3 \ldots[/itex] .

Let [itex]n \in \mathbb{N}[/itex] denote the natural number that is selected uniformly at random as a result of the following process.
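The bijections [itex]f[/itex], [itex]g[/itex], and [itex]h[/itex] are only asserted to exist above. Purely for concreteness, here is one way such enumerations could be written down (a sketch in Python; the particular orderings are illustrative only and are not required by the argument, and [itex]g[/itex] could be obtained the same way by skipping the dyadic terms):

[code=python]
from fractions import Fraction
from math import gcd

def dyadics():
    """One possible enumeration of D = dyadic rationals in [0, 1):
    0, then for k = 1, 2, 3, ... the fractions m / 2**k with m odd."""
    yield Fraction(0)
    k = 1
    while True:
        for m in range(1, 2 ** k, 2):   # odd numerators => lowest terms, so no repeats
            yield Fraction(m, 2 ** k)
        k += 1

def rationals_01():
    """One possible enumeration of Q[0, 1): 0, then reduced fractions p/q
    listed by increasing denominator q."""
    yield Fraction(0)
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:          # keep only reduced fractions, so each rational appears once
                yield Fraction(p, q)
        q += 1
[/code]

With these, [itex]f(n)[/itex] and [itex]h(n)[/itex] are simply the [itex]n[/itex]-th terms of the respective sequences; the argument itself only needs that some such bijections exist.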
Process for Selecting [itex]n[/itex] Uniformly at Random:

There are three possibilities for [itex]x[/itex]. It may be a dyadic rational, a non-dyadic rational, or an irrational.

1) If [itex]x \in D[/itex] or [itex]x = 1[/itex]:

Each dyadic rational not equal to 0 or 1 has two binary expansions: one finite and one infinite (e.g. 0.1 = 0.0111…). The randomly selected number [itex]x[/itex] may take the form of either expansion. Let [itex]n[/itex] be such that [itex]f(n) = x[/itex] if [itex]0 < x < 1[/itex] and [itex]f(n) = 0[/itex] if [itex]x = 0[/itex] or [itex]x = 1[/itex]. At this point, there are two and only two possible ways of selecting each possible natural number.
2) If [itex]x \in E[/itex]:

Let [itex]n[/itex] be such that [itex]g(n) = x[/itex]. At this point, there are now three and only three ways of selecting each possible natural number.
3) If [itex]x \in [ \, 0, 1 ] \, \setminus \mathbb{Q}[/itex]:

There will exist one and only one element [itex]v \in V[/itex] such that [itex]v - x \in \mathbb{Q}[/itex]. For each rational [itex]q[/itex] on the interval [itex][ \, 0, 1 ) \,[/itex], there will be one and only one corresponding irrational [itex]i[/itex] on the interval [itex][ \, v - 0.5, v + 0.5 ) \,[/itex] such that [itex]q - 0.5 = i - v[/itex].

There are two possibilities for [itex]x[/itex]. It will either fall in the interval [itex][ \, v - 0.5, 1 ) \,[/itex] or in the interval [itex]( \, 0, v - 0.5 ) \,[/itex]. For each irrational [itex]y[/itex] on the interval [itex]( \, 0, v - 0.5 ) \,[/itex], there will be one and only one irrational [itex]z[/itex] on the interval [itex]( \, 1, v + 0.5 ) \,[/itex] where [itex]y + 1 = z[/itex]. In this fashion, we can biject all possible values for [itex]x[/itex] on the interval [itex][ \, 0, 1 ) \,[/itex] that are within the Vitali equivalence class containing [itex]v[/itex] with all possible irrationals on the interval [itex][ \, v - 0.5, v + 0.5 ) \,[/itex] that are also in the Vitali equivalence class containing [itex]v[/itex] (i.e., if [itex]x \in [ \, v - 0.5, 1 ) \,[/itex], then utilize [itex]x[/itex] itself whereas, if [itex]x < v - 0.5[/itex], utilize [itex]x + 1[/itex] instead). Select a suitable [itex]n[/itex] as follows:

If [itex]x \in [ \, v - 0.5, 1 ) \,[/itex], let [itex]n[/itex] be such that [itex]h(n) - 0.5 = x - v[/itex].

If [itex]x \in ( \, 0, v - 0.5 ) \,[/itex], let [itex]n[/itex] be such that [itex]h(n) - 0.5 = (x + 1) - v[/itex].

At this point, all possible values for [itex]x[/itex] will lead to the selection of a distinct value for [itex]n[/itex] and all [itex]n \in \mathbb{N}[/itex] have an equal chance of being selected.

End proof.
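For readability, the three cases can be collected in one place. The sketch below is only an organizational restatement: the helpers `vitali_rep`, `f_inv`, `g_inv`, and `h_inv` are hypothetical, non-computable stand-ins for the choice function (returning the representative [itex]v \in V[/itex] of the class of [itex]x[/itex]) and for the inverses of [itex]f[/itex], [itex]g[/itex], [itex]h[/itex], so nothing here is an executable selection procedure.

[code=python]
from fractions import Fraction

def is_dyadic(q: Fraction) -> bool:
    # a rational in lowest terms is dyadic iff its denominator is a power of two
    return q.denominator & (q.denominator - 1) == 0

def select_n(x, vitali_rep, f_inv, g_inv, h_inv):
    """Case analysis from the process above.  vitali_rep, f_inv, g_inv, h_inv
    are hypothetical oracles: vitali_rep(x) -> the unique v in V with v - x
    rational; f_inv : D -> N, g_inv : E -> N, h_inv : Q[0,1) -> N."""
    half = Fraction(1, 2)
    if isinstance(x, Fraction):                      # x is rational
        if x == 0 or x == 1 or is_dyadic(x):         # case 1: dyadic rational or an endpoint
            return f_inv(Fraction(0) if x in (0, 1) else x)
        return g_inv(x)                              # case 2: non-dyadic rational
    v = vitali_rep(x)                                # case 3: x irrational, v - x rational
    if x >= v - half:
        return h_inv(x - v + half)                   # h(n) - 0.5 = x - v
    return h_inv((x + 1) - v + half)                 # h(n) - 0.5 = (x + 1) - v
[/code]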
Second Statement to Prove:

It is possible to select an element of [itex]\mathbb{R}[/itex] uniformly at random given the first proof above.

Definitions:

Let [itex]j : \mathbb{N} \longmapsto \mathbb{Z}[/itex].

Let [itex]n[/itex] be selected uniformly at random via the above process.

Let [itex]x[/itex] be a real number in [itex][ \, 0, 1 ) \,[/itex] selected uniformly at random. Given the selection process used in the first proof, this requires two additional steps. First, since each dyadic rational has two binary expansions, each of the expansions must be related to a distinct and separate real number (this is trivial in that the dyadics may be listed along with their possible expansions, allowing for a mapping by indexes). Second, it is important that [itex]x \neq 1[/itex], which is again accomplished trivially by bijecting [itex][ \, 0, 1 ] \,[/itex] with [itex][ \, 0, 1 ) \,[/itex].

Let [itex]r \in \mathbb{R}[/itex] denote the real number that is selected uniformly at random as a result of the following process.

Process for Selecting [itex]r[/itex] Uniformly at Random:

[itex]r = j(n) + x[/itex].

End proof.
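As with [itex]f[/itex], [itex]g[/itex], and [itex]h[/itex], the bijection [itex]j[/itex] is only asserted to exist; one concrete choice is the usual zig-zag enumeration of [itex]\mathbb{Z}[/itex]. A sketch of that choice together with the final step, using placeholder values for [itex]n[/itex] and [itex]x[/itex] purely for illustration:

[code=python]
def j(n: int) -> int:
    """One concrete bijection from N = {1, 2, 3, ...} onto Z:
    1, 2, 3, 4, 5, ...  ->  0, 1, -1, 2, -2, ..."""
    return n // 2 if n % 2 == 0 else -(n // 2)

# final step of the second process: r = j(n) + x
n, x = 7, 0.25          # placeholder values for illustration only
r = j(n) + x            # j(7) = -3, so r = -2.75
print(r)
[/code]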
 
  • #2
I assume there is either something wrong or some form of clarification should accompany the above, because it appears to contradict the basic idea that there is no uniform distribution over [itex]\mathbb{N}[/itex]. I understand that a uniform distribution over [itex]\mathbb{N}[/itex] is considered impossible because the probability [itex]f(n)[/itex] assigned to each natural [itex]n[/itex] would have to be both positive and equal to the probability assigned to every other natural, so that [itex]\sum_{n \in \mathbb{N}} f(n) = \infty[/itex] rather than 1.

Nevertheless, I believe the above is very clean, concise, and easy to understand, so I would like some feedback. Thank you!
 
  • #3
I've been told the argument in the OP here is not very clear, so I should do a better job to get some feedback. Can we start with this? It's much better:

Let the infinite binary sequence [itex]S = s_1, s_2, s_3, \ldots[/itex] be selected uniformly at random from the set of all infinite binary sequences. My suggested method is to assert the existence of a function [itex]b[/itex] that, given an integer input, returns an element uniformly at random from the sample space [itex]\Omega = \{0, 1\}[/itex]. Then, [itex]S = b(1), b(2), b(3), \ldots[/itex].

Let [itex]x = 0.s_1s_2s_3 \ldots[/itex]. By definition, [itex]x[/itex] has now been randomly selected from [itex]\Omega = [ \,0, 1 ]\,[/itex].

Using [itex]x[/itex], the goal is to try and select a natural number uniformly at random, so let [itex]k(x) = n[/itex].

We now note that there is one and only one element [itex]v[/itex] in our Vitali set [itex]V[/itex] such that [itex]v - x \in \mathbb{Q}[/itex]. Remember that every element of [itex]V[/itex] falls in the interval [itex][ \,0.5, 1 ] \,[/itex], as was specified in the OP.

Also remember that function [itex]h[/itex] from the OP is a bijection from [itex]\mathbb{N}[/itex] onto [itex]\mathbb{Q}[ \,0, 1) \,[/itex], so its inverse [itex]h^{-1}[/itex] is a bijection from [itex]\mathbb{Q}[ \,0, 1) \,[/itex] onto [itex]\mathbb{N}[/itex].

Assuming [itex]x[/itex] is irrational, we can assert that:

If [itex]x < v - 0.5[/itex], then [itex]k(x) = h^{-1} ( \,(x + 1) - v + 0.5) \,[/itex].

If [itex]x \geq v - 0.5[/itex], then [itex]k(x) = h^{-1} ( \,x - v + 0.5) \,[/itex].

If we simply let the domain of function [itex]k[/itex] be the irrationals on the interval [itex][ \,0, 1] \,[/itex], then function [itex]k[/itex] is a surjection from those irrationals onto [itex]\mathbb{N}[/itex] that is uniform. This is because the Vitali equivalence classes partition [itex]\mathbb{R}[/itex] into countable subsets and the intersection of each Vitali equivalence class and the interval [itex][ \,0, 1] \,[/itex] is bijected back onto [itex]\mathbb{N}[/itex]. Accordingly, if [itex]x[/itex] were restricted to just the irrationals on the interval [itex][ \,0, 1] \,[/itex], then we would be done because [itex]k(x) = n[/itex] has been selected uniformly at random.

Make sense so far? Please comment!
 
  • #4
AplanisTophet said:
I assume there is either something wrong or some form of clarification should accompany the above because it appears to contradict the basic idea that there is no uniform distribution over [itex]\mathbb{N}[/itex].

There would be no contradiction in having a "uniform" probability measure that is defined on ##\mathbb{N}## and assigns each natural number a probability zero of being realized. People, including myself, have a double standard when considering random sampling. On the one hand, we are willing to consider problems that involve picking a real number r from a uniform distribution on [0,1] (which is an event with probability zero) , but on the other hand, we baulk when someone proposes a problem where we "pick a natural number at random" and insist that they show us a distribution on ##\mathbb{N}## where each number has a non-zero probability of being chosen.

Putting aside the unfair objection to having events of probability zero, the difficulty of defining a uniform distribution on the natural numbers is to define and implement the ideas of "distribution" and "uniform". The idea of a "distribution" is more specific than the notion of a "probability measure". A distribution on ##\mathbb{N}## is a probability measure that is specified by a cumulative distribution function ##F(n)##. It assigns probability ##F(n+k) - F(n)## to the set ##\{n+1, n+2, \ldots, n+k\}##.

The concept of "uniform" needs to go beyond the thought that each individual "point" in the probability space has the same probability - for example, a gaussian distribution gives each point the same probability - namely zero. What we need for "uniform" distribution is a concept of translation invariance. If ##S## is a measurable set, we want the probability of ##S## to be the same as the probability of ##\{ y: y = x + k, x \in S\}## for each natural number ##k##.

If you are claiming to have constructed a uniform probability distribution on the natural numbers, you need to specify the distribution function ##F## and show the probability it assigns to sets is translation invariant.

In the original post, it isn't clear what it means to be given:
2) a randomly selected infinite binary sequence ##S = s_1, s_2, s_3, \ldots## (created via the theoretical flipping of a coin infinitely many times where H = 1 and T = 0)

You probably intend that we are "given" more than one single such binary sequence. I think you are assuming we use the "coin toss" measure ( https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-436j-fundamentals-of-probability-fall-2008/lecture-notes/MIT6_436JF08_lec02.pdf ) on the set of infinite binary sequences.

Your definition of "uniform" does not agree with the usual definition of a uniform probability measure.
3) a definition of "select an element of an infinite set uniformly at random" that simply means, in layman's terms, that one and only one element of an infinite set will be selected using a process where all elements of the set have an equal chance of being selected,

According to that definition, a Gaussian distribution would be uniform.
 
  • #5
Thank you very much for your response. I have considered it thoroughly.

Using the above equations, each natural number [itex]n[/itex] has its own unique non-measurable Vitali set [itex]V^n[/itex] such that, if [itex]x[/itex] falls in [itex]V^n[/itex], then [itex]k(x) = n[/itex]. E.g., [itex]V[/itex] as defined above (the initial index Vitali set) is such that [itex]V = V^{h^{-1}(0.5)}[/itex]: whenever [itex]x \in V[/itex], we have [itex]x - v = 0[/itex], so [itex]k(x) = h^{-1}(0.5) = n[/itex].
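In symbols, restricting attention to irrational [itex]x[/itex], each [itex]V^n[/itex] is the preimage of a single natural under [itex]k[/itex], and together these preimages partition the irrationals of the unit interval:

$$V^n = k^{-1}(n) = \{\, x \in [\,0, 1]\, \setminus \mathbb{Q} \,:\, k(x) = n \,\}, \qquad [\,0, 1]\, \setminus \mathbb{Q} = \bigcup_{n \in \mathbb{N}} V^n \ \text{(a disjoint union)}.$$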

Given that the reals are partitioned into non-measurable sets such that every element of each non-measurable set [itex]V^n[/itex] maps to a particular [itex]n[/itex], you should see why, when you say…

Stephen Tashi said:
If ##S## is a measurable set, we want the probability of ##S## to be the same as the probability of ##\{ y: y = x + k, x \in S\}## for each natural number ##k##.
…that your suggestion does not apply (or perhaps cannot be applied) to my model, yes?
 
  • #6
AplanisTophet said:
…that your suggestion does not apply (or perhaps cannot be applied) to my model, yes?

I don't know because I don't see that you have defined any probability measure on the natural numbers yet - much less a probability distribution on them. You aren't dealing with probability theory. Your arguments rely on an intuitive notion that a selection process that is symmetrical (in some sense) is therefore "uniform".

I don't even see that your selection process is symmetrical. For example, in the OP, using the uniform distribution ##\mu## on [0,1], we have ##\mu(E) = 0## and ##\mu(D) = 0##, so that gives probability 1 to case 3). Do you have some argument that there is symmetry in the way a natural number is selected - given that cases 1), 2), 3) don't have symmetry in their probability of occurrence?

As I understand your general idea, you wish to employ a uniform distribution on the reals ( in your case a probability measure defined by the "coin-toss" measure) to generate a uniform distribution on the natural numbers. You accomplish this by a series of mappings from the real numbers to other sets, and finally to the natural numbers. The procedure for picking a natural number "at random" is to pick a real number r at random from a uniform distribution on the reals, then map this real number through the chain of mappings to natural number n.

For this to work, you need a theorem like: "If ##\mu_S## is a probability measure defined on the set ##S## and ##f## is a function such that ##f(S) = T##, then the function ##\mu_T## defined on subsets of ##T## by ##\mu_T(t) = \mu_S( f^{-1}(t))## is a probability measure on ##T##."

That type of theorem is true (by definition) for "measurable functions" ##f##, but not for arbitrary functions. So what you need to show is that each function in your chain of mappings is a measurable function.
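For contrast, here is a quick numerical sketch of the pushforward idea for a map that is measurable (the first decimal digit of a uniform draw); the point of the discussion above is precisely that the maps in the OP's chain are not of this kind:

[code=python]
import random

# Pushforward sketch: T(x) = floor(10*x) is a measurable map from [0, 1) to {0, ..., 9},
# and it carries the uniform measure on [0, 1) to the uniform distribution on the digits,
# since mu_T({d}) = mu([d/10, (d+1)/10)) = 1/10 for each digit d.
def T(x):
    return int(10 * x)

N = 100_000
counts = [0] * 10
for _ in range(N):
    counts[T(random.random())] += 1
print([c / N for c in counts])   # each entry should be close to 0.1
[/code]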


-------------

A valid reason that a distribution on the natural numbers can't assign probability zero to each number (which I didn't state in the previous post) is that a probability measure is required to set the probability of the union of a countable number of disjoint measurable sets equal to the sum of their individual probabilities. So if we assign probability zero to each individual natural number, the probability of the entire set of natural numbers would not be 1 because it can be expressed as the countable union of the individual numbers and each term in the sum of the associated probabilities is zero.
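In symbols, if ##P(\{n\}) = 0## for every ##n##, countable additivity would force

$$1 = P(\mathbb{N}) = P\Big( \bigcup_{n \in \mathbb{N}} \{n\} \Big) = \sum_{n \in \mathbb{N}} P(\{n\}) = \sum_{n \in \mathbb{N}} 0 = 0,$$

which is impossible.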

The definition of "probability measure" focuses on countable unions. In contrast to the case of the natural numbers, a gaussian probability distribution can assign probability zero to each real number and not run afoul of the properties of a probability measure because the set of real numbers isn't expressible as a countable disjoint union of sets of single real numbers.

At some point in your chain of mappings, you are mapping a domain whose elements are each uncountable sets of real numbers to a co-domain whose elements are each a single natural number. In the measure ##\mu_S## defined on the domain of this function, it is permissible that the probability of each of the uncountable sets of numbers be zero, provided there are an uncountable number of these sets. But in the co-domain of the function, it is not permissible to assign each natural number a probability of zero.
 
  • #7
That was very helpful. Also, as a novice here, I see how my notation was confusing, how I took some unnecessary additional steps, etc. Thank you for wading through it. I believe I understand now. To sum it up in my own words:

Since Vitali sets are not measurable, and the set of irrationals mapped to each specific natural under my model takes the form of a Vitali set, the probability assignable to each natural is undefined, as one might have expected. This is perhaps a backwards way of demonstrating why Vitali sets are not measurable (if they were, we'd be able to assign a probability to each natural here). That is why I included the layman's definition of "uniformly at random" in the OP.

Is it possible that the infinite sum of the undefinable probabilities assigned to each natural using this (or a cleaner version of this) model must still sum to 1? To me that's taking the impossible, like the square root of -1 or the claim that -1/12 is a meaningful value for a divergent sum, and trying to give it a purpose. Perhaps more so, because it's fairly clear that each natural should have an equal chance of being selected and the sum of those chances should be 1, even if that chance is not definable or expressible as a real number.
 
  • #8
PS: Page 10 of that link demonstrates exactly what I was trying to do. Nice.
 

