No Probability Measure Assigning Equal Probability on a Countably Infinite Set

AI Thread Summary
The discussion centers on the impossibility of assigning equal probabilities to all integers in a countably infinite set, as any such assignment contradicts sigma-additivity. Participants argue that while one might intuitively assign a probability of 0.5 to the even integers, this is not mathematically valid for a uniform measure on the entire set of integers. The concept of choosing an integer "at random" lacks meaning without a proper probability measure: assigning probability zero to every singleton forces every subset to have measure zero, while any positive common probability makes the total measure diverge. The conversation also touches on finitely additive alternatives such as natural density, and on generalized distributions, like the Dirac delta, as a way of describing limits of sequences of probability distributions. Ultimately, the thread emphasizes the complexities and limitations of defining probabilities on infinite sets.
johnG2011
People take the liberty of using statements like "choose an integer at random," usually meaning that the integers are considered equally likely. Show that this statement is truly void of meaning by demonstrating that there does not exist a probability measure that assigns equal probability to all the singletons on a discrete probability space (\Omega, 2^{\Omega}), where \Omega is countably infinite and 2^{\Omega} is its power set.
 


johnG2011 said:
People take the liberty of using statements like "choose an integer at random," usually meaning that the integers are considered equally likely. Show that this statement is truly void of meaning by demonstrating that there does not exist a probability measure that assigns equal probability to all the singletons on a discrete probability space (\Omega, 2^{\Omega}), where \Omega is countably infinite and 2^{\Omega} is its power set.

If p is the common probability then sigma-additivity implies a contradiction both for p=0 and for p>0.
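
To spell this out: write \Omega = \{\omega_1, \omega_2, \omega_3, \ldots\} and suppose P(\{\omega_i\}) = p for every i. Countable additivity forces

1 = P(\Omega) = P\left( \bigcup_{i=1}^{\infty} \{\omega_i\} \right) = \sum_{i=1}^{\infty} P(\{\omega_i\}) = \sum_{i=1}^{\infty} p,

and the right-hand side is 0 when p = 0 and diverges when p > 0, so neither case can equal 1.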
 


bpet said:
If p is the common probability then sigma-additivity implies a contradiction both for p=0 and for p>0.

I don't think there's a contradiction. Any element can be chosen from an infinite countable set with probability zero given a uniform distribution. The probability can never be greater than zero. The probability of choosing an even number from the infinite countable set of natural numbers is 0.5. The probabilities of either an odd number or an even number sum to unity. Perhaps I'm misunderstanding your argument.
 
Last edited:


SW VandeCarr,

You're talking about various examples whose probability measures are different, and assuming that conclusions from one apply to the others.

Of the assertions you made, I find these interesting.

The probability of choosing an even number from the infinite countable set of natural numbers is 0.5.

What probability measure are you using to draw this conclusion? I think it contradicts what you are trying to prove. When people say "the probability that a random integer is even is 0.5," the only sensible interpretation I can make of that is as a statement about a limit of a sequence of probability distributions. The probability that an integer chosen at random from a uniform distribution on the integers from 1 to L is even is approximately 1/2 for large L, and approaches 1/2 as a limit as L approaches infinity. However, there is no probability distribution that is the limit of these distributions.
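
A minimal numerical sketch of that limiting statement (nothing beyond the standard library is assumed):

```python
# Probability that an integer drawn uniformly from {1, ..., L} is even.
# Each integer has probability 1/L, so the answer is floor(L/2) / L.
def prob_even(L):
    return (L // 2) / L

for L in [9, 99, 999, 9999]:
    print(L, prob_even(L))
# 9 0.4444..., 99 0.4949..., 999 0.4994..., 9999 0.49995
# The values approach 1/2, but the uniform distributions on {1, ..., L}
# themselves converge to no probability distribution on the integers.
```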

Any element can be chosen from an infinite countable set with probability zero given a uniform distribution
If you mean something like a uniform distribution on the reals from -1 to 1, then that's true, but irrelevant to the question. In that case, the definition of the probability measure and the meaning of integration say you do a calculus-type integral, not a discrete summation. I think the problem's reference to "singletons" boxes you into using discrete summation. (If you can show it doesn't, then perhaps you've made a notable discovery.)
 


SW VandeCarr said:
I don't think there's a contradiction. Any element can be chosen from an infinite countable set with probability zero given a uniform distribution. The probability can never be greater than zero. The probability of choosing an even number from the infinite countable set of natural numbers is 0.5. The probabilities of either an odd number or an even number sum to unity. Perhaps I'm misunderstanding your argument.

Nope, as bpet explained sigma-additivity contradicts this. One will need the set to be uncountable or finite if all singletons are part of the sigma algebra.
 


disregardthat said:
Nope, as bpet explained sigma-additivity contradicts this. One will need the set to be uncountable or finite if all singletons are part of the sigma algebra.

OK. If I take every natural number divisible by three, why can't I say that the probability of selecting such a number from the set of natural numbers is 1/3?
 


SW VandeCarr said:
I don't think there's a contradiction. Any element can be chosen from an infinite countable set with probability zero given a uniform distribution. The probability can never be greater than zero. The probability of choosing an even number from the infinite countable set of natural numbers is 0.5. The probabilities of either an odd number or an even number sum to unity. Perhaps I'm misunderstanding your argument.
The sum of all the probabilities must be 1. If the probability of each outcome is 0, even a countably infinite sum of 0s is still 0.

In your example, with the probability of an even number being 1/2 and the probability of an odd number being 1/2, you have only two outcomes, Even and Odd. You are NOT assigning a probability to each integer, so you do NOT have a probability measure over a countably infinite set of outcomes.
 


HallsofIvy said:
You are NOT assigning a probability to each integer, so you do NOT have a probability measure over a countably infinite set of outcomes.

What I'm saying is that the natural numbers can be divided into an arbitrary number of infinite subsets: say, every number divisible by 3 and every number divisible by 7. The probabilities can be combined by inclusion-exclusion: (1/3) + (1/7) - (1/3)(1/7) = 9/21.

My point is that we can talk about probabilities over an infinite countable set despite the obvious fact that, given a uniform distribution, the probability of any given number is 0. I can still say that the selected number will have P=1/3 of being divisible by 3, P = 1/7 of being divisible by 7, and P=9/21 of being divisible by either.
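
These numbers do show up as limiting frequencies over the finite initial segments {1, ..., N}; a minimal sketch (this is the natural-density reading discussed later in the thread, not a probability measure on all of the naturals):

```python
# Fraction of integers in {1, ..., N} divisible by 3, by 7, or by either.
def density(N, pred):
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

for N in [100, 10_000, 1_000_000]:
    d3 = density(N, lambda n: n % 3 == 0)
    d7 = density(N, lambda n: n % 7 == 0)
    d_or = density(N, lambda n: n % 3 == 0 or n % 7 == 0)
    print(N, round(d3, 4), round(d7, 4), round(d_or, 4))
# The three fractions approach 1/3, 1/7, and 1/3 + 1/7 - 1/21 = 9/21 = 0.4286,
# matching the inclusion-exclusion computation above.
```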
 
Last edited:


SW, it doesn't work like that.

The probability measure which assigns each singleton the probability 0 will force the measure of each subset (even the infinite ones) to be 0.

http://en.wikipedia.org/wiki/Measure_(probability)

See countable additivity.

If a probability measure on the integers assigned each singleton set probability 0, then

P({even integers}) = P({0} U {2} U {-2} U {4} U {-4} U ...) = P({0}) + P({2}) + P({-2}) + P({4}) + P({-4}) + ... = 0 + 0 + 0 + 0 + ... = 0.

If the measure of singleton sets were a positive constant p, then the measure of the even integers would be an infinite sum of p's, which diverges (but it must be at most 1 for P to be a probability measure).
 
Last edited by a moderator:
  • #10


I think this really emphasizes the role the sigma algebra plays in probability. It is viable to say the probability of selecting a multiple of 3 is 1/3, so long as you restrict the measurable events appropriately. You could restrict your measurable events to the sigma algebra generated by the events \{n|n \mod 3 = 0 \}, \{n|n \mod 3 = 1 \}, and \{n|n \mod 3 = 2 \} and the above statement would make sense. The problem is then that we couldn't say anything about probabilities of subsets of those events, because those probabilities aren't even defined for this sigma algebra.

For this case, the infinite sample space doesn't matter; we may as well work with \Omega = \{1,2,3\} and the uniform distribution.
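
A small sketch of that coarse-graining, with the three residue classes represented by the labels r0, r1, r2 (illustrative names, not anything from the thread):

```python
from itertools import combinations

# Atoms of the sigma algebra: the residue classes mod 3.
atoms = ("r0", "r1", "r2")   # n % 3 == 0, 1, 2 respectively

# The sigma algebra they generate: all unions of atoms (2^3 = 8 events).
events = [frozenset(c) for k in range(len(atoms) + 1)
          for c in combinations(atoms, k)]

# Each residue class gets probability 1/3, so an event's probability
# is the number of atoms it contains divided by 3.
def P(event):
    return len(event) / 3

for e in sorted(events, key=len):
    print(set(e) or "{}", P(e))
# P({'r0'}) = 1/3 is "the probability of a multiple of 3". Finer questions,
# e.g. about one particular multiple of 3, are not events in this algebra.
```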
 
  • #11


disregardthat said:
SW, it doesn't work like that.

The probability measure which assigns each singleton the probability 0 will force the measure of each subset (even the infinite ones) to be 0.

http://en.wikipedia.org/wiki/Measure_(probability)

Well, I have to admit that I've answered questions in this forum to the effect that all infinite subsets of the natural numbers must have the same cardinality as the set of all natural numbers, that is, \aleph_0. I never considered the impact on probability theory. Sampling theory involves finite samples from theoretically infinite sets. Clearly any finite segment of the positive real line will contain more natural numbers divisible by a small natural number than by a large one. I'm going to think about this for a while.
 
Last edited by a moderator:
  • #12


A slight digression: a popularized description of Erdős's work in number theory is that he introduced proofs involving probability. Does anyone know what probability spaces and measures were involved in that?
 
  • #13


Stephen Tashi said:
A slight digression: a popularized description of Erdős's work in number theory is that he introduced proofs involving probability. Does anyone know what probability spaces and measures were involved in that?

Might have been a variant of Natural Density which (I think) uses finite additivity instead of sigma-additivity.
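
For reference, the standard definition: the natural density of a set A \subseteq \mathbb{N}, when the limit exists, is

d(A) = \lim_{n \to \infty} \frac{|A \cap \{1, \ldots, n\}|}{n},

which gives d = 1/k for the multiples of k. It is finitely additive on the sets where it is defined, but not sigma-additive, so it is not a probability measure in the usual sense.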
 
  • #14


I looked at "Natural density" on Wikipedia and it's an intuitively pleasing idea. I wonder if there is a relation between it and the "improper priors" that some people use in Bayesian statistics. Can we express most useful "improper priors" as limits of a sequence of probability distributions, each of which has support on a proper subset of the domain of the random variable?

I think one way to get a Bayesian interpretation of frequentist confidence intervals for the mean of a normal distribution is to compute an answer (e.g. the probability that the mean is in (5.0, 7.0), a specific numerical interval) based on a uniform prior on (-L, L) and then look at the limit of the answer as L approaches infinity. However, that sequence of distributions doesn't approach a distribution. I suppose one could define the limit as a "generalized distribution" by analogy to a Dirac delta function being a generalized function.
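
A numerical sketch of that limiting computation, with all the numbers (xbar = 6.2, s = 0.8) invented purely for illustration; it assumes scipy is available:

```python
from scipy.stats import norm

# Hypothetical data: observed sample mean 6.2 with known standard error 0.8.
xbar, s = 6.2, 0.8

def posterior_prob(L, a=5.0, b=7.0):
    # With a uniform prior on (-L, L), the posterior for the mean is the
    # normal likelihood truncated to (-L, L) and renormalized.
    # (Assumes L >= b, so (a, b) lies inside the prior's support.)
    Z = norm.cdf(L, xbar, s) - norm.cdf(-L, xbar, s)
    return (norm.cdf(b, xbar, s) - norm.cdf(a, xbar, s)) / Z

for L in [7, 10, 100, 1000]:
    print(L, posterior_prob(L))
# The answers converge to norm.cdf(7, xbar, s) - norm.cdf(5, xbar, s) as L
# grows, even though the uniform priors on (-L, L) converge to no distribution.
```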
 
  • #15


Stephen Tashi said:
I think one way to get a Bayesian interpretation of frequentist confidence intervals for the mean of a normal distribution is to compute an answer (e.g. the probability that the mean is in (5.0, 7.0), a specific numerical interval) based on a uniform prior on (-L, L) and then look at the limit of the answer as L approaches infinity. However, that sequence of distributions doesn't approach a distribution. I suppose one could define the limit as a "generalized distribution" by analogy to a Dirac delta function being a generalized function.

I'm not following. The Dirac delta is a distribution representing the limit of the normal distribution with mean 0 as the variance goes to 0. I'm not sure the Dirac delta can even be properly called a function as such. The integral is defined to be \int_{-\infty}^{+\infty}\delta (x) dx = 1. How does this relate to estimating the CI around a parameter estimate?
 
Last edited:
  • #16


Both the Dirac delta and a (fictitious) uniform distribution on minus infinity to infinity are zero at most places but still integrate to 1. I visualize the Dirac delta function \delta_m as the limit of a sequence of normal distributions with mean m and standard deviations approaching 0. I visualize a "uniform distribution on minus infinity to infinity" as a limit of a sequence of uniform distributions from -L to L as L approaches infinity. So I think there is an analogy between the two concepts, without them being the same concept.
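
In symbols, the two limits being compared are

\delta_m(x) = \lim_{\sigma \to 0} \frac{1}{\sigma\sqrt{2\pi}} e^{-(x-m)^2 / 2\sigma^2} \quad \text{and} \quad u(x) = \lim_{L \to \infty} \frac{1}{2L} \mathbf{1}_{[-L,L]}(x).

Every member of each family integrates to 1, but the first limit concentrates its mass at m and defines a legitimate generalized function, while the second tends to 0 everywhere, leaving "the uniform distribution on the whole line" with no mass at all.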
 
  • #17


Stephen Tashi said:
Both the Dirac delta and a (fictitious) uniform distribution on minus infinity to infinity are zero at most places but still integrate to 1. I visualize the Dirac delta function \delta_m as the limit of a sequence of normal distributions with mean m and standard deviations approaching 0. I visualize a "uniform distribution on minus infinity to infinity" as a limit of a sequence of uniform distributions from -L to L as L approaches infinity. So I think there is an analogy between the two concepts, without them being the same concept.

I don't see how that gets us to sigma-additivity. It seems we could just define a function on the interval of natural numbers [1,n] such that f(x)=\lim_{n\rightarrow\infty} n(x)/n where 0\leq x \leq 1.
 
Last edited:
  • #18


I'm not claiming that things like the Dirac delta function are actual functions or that they define actual probability measures. Likewise, the jargon "improper prior" doesn't refer to an actual probability distribution.
 
