# Countably Infinite and Uncountably Infinite

## Main Question or Discussion Point

Given N events, each with probability equal to 1 (e.g., P(B1)=P(B2)=P(B3)=...=P(BN)=1), we can show by induction that P(B1B2B3...BN)=1. If the collection of these events is countably infinite or uncountably infinite, how would each case affect the fact that P(B1B2...BN)=1?

My initial thoughts are that countably infinite collections can add up to 1, but with an uncountably infinite collection the probabilities will be 0. But in this instance we are given that P(B1)=P(B2)=...=P(BN)=1. Does it make a difference if the set is countably infinite? How about if it is uncountably infinite?

mathman
If you have N distinct events with equal probabilities, then the probability of each is 1/N. The only way you can have all probabilities the same and = 1 is if all the events are different labels for the same one event.

> If you have N distinct events with equal probabilities, then the probability of each is 1/N. The only way you can have all probabilities the same and = 1 is if all the events are different labels for the same one event.
That's not my question. Perhaps I worded it incorrectly. Let's say there are B1, B2, B3, ... Bn events.

If P(B1)=P(B2)=P(B3)...=P(Bn)=1, then we can show that P(B1B2B3...Bn)=1. This can be done by induction.
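
One way to see the induction step: for any two events with probability 1, inclusion-exclusion gives

$$P(A \cap B) = P(A) + P(B) - P(A \cup B) \ge 1 + 1 - 1 = 1,$$

since $P(A \cup B) \le 1$. Taking $A = B_1 \cap \cdots \cap B_{n-1}$ and $B = B_n$ completes the induction.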

My question is, will this still be true if the events are countably infinite? What if the events are uncountably infinite?

HallsofIvy
Homework Helper
IF there are N events, only one of which can happen, and all of which are equally likely, then the probability of each is 1/N. But I don't think that is what was intended here.

If an event has probability 1, then it is "certain" to happen. Then all of them together are "certain" to happen. No, it doesn't matter whether the number of events is "finite", "countably infinite", or "uncountably infinite". But that is because "probability 1" is a very special situation.

> IF there are N events, only one of which can happen, and all of which are equally likely, then the probability of each is 1/N. But I don't think that is what was intended here.
>
> If an event has probability 1, then it is "certain" to happen. Then all of them together are "certain" to happen. No, it doesn't matter whether the number of events is "finite", "countably infinite", or "uncountably infinite". But that is because "probability 1" is a very special situation.
Is it really true that if the probability is 1 then the event is "certain" to happen? I see how this is true for countable sets, but uncountable? Perhaps I misunderstand the problem, but it seems to me that when dealing with continuous distributions, we can't really talk about the probability that a single event will occur (e.g. picking any number at random from the unit interval). The probability of picking .5 is 0, but this doesn't mean it can't happen. And the probability of picking something other than .5 is 1 - (probability of picking .5) = 1, but this doesn't mean that .5 can't be picked. Perhaps I have misunderstood something, though.
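
A quick numerical illustration of this point (a rough sketch using Python's `random` module; the sample size and window width are arbitrary choices, not anything from the thread): exact equality with .5 is essentially never observed, while the chance of landing *near* .5 scales with the width of the window, as a density should.

```python
import random

random.seed(0)
n = 10**6
samples = [random.random() for _ in range(n)]

# Exact equality with .5 is an event of probability 0: we essentially
# never observe it, even over a million draws.
hits = sum(1 for x in samples if x == 0.5)

# Landing *near* .5 has probability proportional to the window width,
# so the empirical frequency should be close to 2 * 0.001 = 0.002.
near = sum(1 for x in samples if abs(x - 0.5) < 0.001)

print("exact hits:", hits)
print("frequency near .5:", near / n)
```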

> IF there are N events, only one of which can happen, and all of which are equally likely, then the probability of each is 1/N. But I don't think that is what was intended here.
>
> If an event has probability 1, then it is "certain" to happen. Then all of them together are "certain" to happen. No, it doesn't matter whether the number of events is "finite", "countably infinite", or "uncountably infinite". But that is because "probability 1" is a very special situation.
I am not so sure about the uncountable case.

I don't know any probability theory but I know a little about Lebesgue measure. It seems to me that if you take, say, the set of irrationals in the unit interval, that set has measure 1. If I throw out countably many points, the resulting set still has measure one but it's missing a lot of points.

Now the trick is to see if we can find a collection of sets of measure 1 with the property that their intersection has less than measure 1. I don't know if we could arrange this with a countable collection. But since sigma algebras are not necessarily closed under uncountable intersections, it's not clear that the intersection of uncountably many sets of measure 1 is even measurable, let alone with measure 1. I'm pretty sure someone could whip up a counterexample in the uncountable case.

Must a countable intersection of sets of measure 1 have measure 1? And might an uncountable intersection of sets of measure 1 either fail to be measurable, or fail to have measure 1?

I'm on shaky ground here with respect to my own knowledge, so I'd be happy if someone could either develop or refute my idea.

I'm conflicted. Intuitively it makes sense that it doesn't matter whether the number of events is countable or uncountable, given that the events are certain to occur and the question asks for the probability that all of these certain events occur together. Regardless of the number of events, they are all certain to occur.

On the other hand, I feel like I might not be taking something into account regarding uncountable infinity.

lavinia
Gold Member
> Given N events, each with probability equal to 1 (e.g., P(B1)=P(B2)=P(B3)=...=P(BN)=1), we can show by induction that P(B1B2B3...BN)=1. If the collection of these events is countably infinite or uncountably infinite, how would each case affect the fact that P(B1B2...BN)=1?
>
> My initial thoughts are that countably infinite collections can add up to 1, but with an uncountably infinite collection the probabilities will be 0. But in this instance we are given that P(B1)=P(B2)=...=P(BN)=1. Does it make a difference if the set is countably infinite? How about if it is uncountably infinite?
If a set has probability 1 then it equals the whole space except for a set of measure zero.
If two sets have measure 1 then their intersection must be the whole space except for a set of measure zero.

> If a set has probability 1 then it equals the whole space except for a set of measure zero.
> If two sets have measure 1 then their intersection must be the whole space except for a set of measure zero.
Ah I've got it.

In the unit interval, a countable collection of sets of measure 1 has a complement that's a countable union of sets of measure zero. That complement must have measure zero. So a countable intersection of measure 1 subsets of the unit interval has measure one.
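
In symbols: if each $E_n \subseteq [0,1]$ has measure 1, then each complement $E_n^c$ has measure 0, and countable subadditivity gives

$$\mu\left(\left(\bigcap_n E_n\right)^c\right) = \mu\left(\bigcup_n E_n^c\right) \le \sum_n \mu(E_n^c) = 0,$$

so $\mu\left(\bigcap_n E_n\right) = 1$.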

For a counterexample in the uncountable case, for each real $\alpha \in [0,1]$ let $X_\alpha = [0,1]\setminus \{\alpha\}$; that is, $X_\alpha$ is all the points in the unit interval except for $\alpha$.

Then each $X_\alpha$ has measure 1; but the intersection of all the $X_\alpha$'s is empty. That's because each $X_\alpha$ is missing $\alpha$. No point is in every $X_\alpha$ so the intersection's empty.
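
The counterexample can be sketched in a few lines of Python (the membership test `in_X` is just an illustration, not anything from the thread):

```python
# Sketch of the counterexample: for each a in [0, 1], let
# X_a = [0, 1] \ {a}. Each X_a misses only the single point a
# (a measure-zero set), yet no x can belong to every X_a,
# because x is excluded from X_x itself.

def in_X(a: float, x: float) -> bool:
    """Membership test for X_a = [0, 1] \\ {a}; a and x assumed in [0, 1]."""
    return 0.0 <= x <= 1.0 and x != a

for x in [0.0, 0.25, 0.5, 1.0]:
    assert in_X(0.75, x) or x == 0.75  # x belongs to X_a whenever a != x
    assert not in_X(x, x)              # ...but never to X_x
print("no point survives every X_a, so the intersection is empty")
```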

The key is that in the infinite situation, an event having probability 1 (or in the language of measure theory, a set of measure 1) can have a set of measure zero as exceptions. The rationals have measure zero in the reals, but there are still a lot of rationals and in fact they are dense in the reals.
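
The measure-zero claim is easy to verify directly: enumerate the rationals in $[0,1]$ as $q_1, q_2, \dots$ and cover $q_n$ by an interval of length $\varepsilon/2^n$, so that

$$\mu(\mathbb{Q} \cap [0,1]) \le \sum_{n=1}^\infty \frac{\varepsilon}{2^n} = \varepsilon \quad \text{for every } \varepsilon > 0,$$

forcing $\mu(\mathbb{Q} \cap [0,1]) = 0$.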

Likewise, just because something has probability 1 doesn't mean it must happen! That's math for you :-)

So if we let Di = Bi^c (the complement of Bi) and use De Morgan's law, we can say that if P(D1)=P(D2)=...=P(Dn)=0, then P(D1∪D2∪...∪Dn)=0. From this we can show that a countable union of null sets is a null set. But with uncountable unions we are not guaranteed a measurable union. Yes?
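
Spelled out for a countable collection, subadditivity and De Morgan give

$$P\left(\bigcup_{i=1}^\infty D_i\right) \le \sum_{i=1}^\infty P(D_i) = 0, \qquad P\left(\bigcap_{i=1}^\infty B_i\right) = 1 - P\left(\bigcup_{i=1}^\infty D_i\right) = 1.$$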

> So if we let Di = Bi^c (the complement of Bi) and use De Morgan's law, we can say that if P(D1)=P(D2)=...=P(Dn)=0, then P(D1∪D2∪...∪Dn)=0. From this we can show that a countable union of null sets is a null set. But with uncountable unions we are not guaranteed a measurable union. Yes?
Yes, exactly; this is De Morgan all the way.

The laws of set theory let us apply De Morgan's laws to uncountable collections of sets. But the rules of sigma algebras -- the mathematical structures underlying probability theory -- only guarantee closure under countable unions and intersections. So we have to be careful with uncountable collections of events.

In my counterexample above I constructed an uncountable collection of sets of measure 1 whose intersection is in fact measurable, of measure zero.

It's true that an uncountable union of measurable sets can be nonmeasurable, but nonmeasurable sets are very weird. We can prove that we can never give an explicit construction of one. You will never see anyone in math say "here is a set X, explicitly described, and it is nonmeasurable." It can't be done.

What we can do is prove that such sets exist. And this always requires the Axiom of Choice.

But you'll never see an explicit one.

> Yes, exactly; this is De Morgan all the way.
>
> The laws of set theory let us apply De Morgan's laws to uncountable collections of sets. But the rules of sigma algebras -- the mathematical structures underlying probability theory -- only guarantee closure under countable unions and intersections. So we have to be careful with uncountable collections of events.
>
> In my counterexample above I constructed an uncountable collection of sets of measure 1 whose intersection is in fact measurable, of measure zero.
>
> It's true that an uncountable union of measurable sets can be nonmeasurable, but nonmeasurable sets are very weird. We can prove that we can never give an explicit construction of one. You will never see anyone in math say "here is a set X, explicitly described, and it is nonmeasurable." It can't be done.
>
> What we can do is prove that such sets exist. And this always requires the Axiom of Choice.
>
> But you'll never see an explicit one.
Cool, thanks.

I appreciate everyone's contribution to this thread.

pwsnafu
> It's true that an uncountable union of measurable sets can be nonmeasurable, but nonmeasurable sets are very weird. We can prove that we can never give an explicit construction of one. You will never see anyone in math say "here is a set X, explicitly described, and it is nonmeasurable." It can't be done.
>
> What we can do is prove that such sets exist. And this always requires the Axiom of Choice.
>
> But you'll never see an explicit one.
Careful. The correct statement is: every description of a Lebesgue non-measurable set requires the Axiom of Choice. If you are talking about the Borel structure then we can give a construction of a non-measurable set. In measure theory, the Borel sigma algebra of a Polish space is so important it has a special name: "standard Borel space".

> Given N events, each with probability equal to 1 (e.g., P(B1)=P(B2)=P(B3)=...=P(BN)=1), we can show by induction that P(B1B2B3...BN)=1. If the collection of these events is countably infinite or uncountably infinite, how would each case affect the fact that P(B1B2...BN)=1?
>
> My initial thoughts are that countably infinite collections can add up to 1, but with an uncountably infinite collection the probabilities will be 0. But in this instance we are given that P(B1)=P(B2)=...=P(BN)=1. Does it make a difference if the set is countably infinite? How about if it is uncountably infinite?
If you want to think about probabilities of infinite subsets then there is no substitute for measure theory and the Kolmogorov axioms. These are both quite simple, though rather abstract.

On the other hand, if you try to get by without measure theory and the Kolmogorov axioms, you will wander in confusion and never get anywhere.