Countably Infinite and Uncountably Infinite

  • Thread starter brojesus111
  • Tags
    Infinite
In summary, the thread asks: if B1, B2, ..., BN are events with P(B1)=P(B2)=P(B3)=...=P(BN)=1, then it can be shown by induction that P(B1B2...BN)=1. How is this conclusion affected if the collection of events is countably infinite, and how is it affected if it is uncountably infinite?
  • #1
brojesus111
Given that there are N events, each with the same probability equal to 1 (i.e. P(B1)=P(B2)=P(B3)=...=P(BN)=1), we can show by induction that P(B1B2B3...BN)=1. If the collection of events is instead countably infinite or uncountably infinite, how does each case affect the conclusion that P(B1B2...BN)=1?

My initial thought is that the probabilities in a countably infinite collection can sum to 1, whereas with an uncountably infinite collection the individual probabilities would have to be 0. But in this instance we are given that P(B1)=P(B2)=...=P(BN)=1. Does it make a difference if the collection is countably infinite? How about if it is uncountably infinite?
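
For concreteness, the key step of the induction can be written out with inclusion-exclusion (a sketch, using only the given P(Bi)=1 and the fact that probabilities are at most 1):

[tex]
P(B_1 \cap B_2) = P(B_1) + P(B_2) - P(B_1 \cup B_2) \geq 1 + 1 - 1 = 1,
[/tex]

so [itex]P(B_1 \cap B_2) = 1[/itex], and repeating the argument gives [itex]P(B_1 B_2 \cdots B_N) = 1[/itex] for any finite N.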
 
  • #2
If you have N distinct, mutually exclusive events with equal probabilities, exactly one of which must occur, then the probability of each is 1/N. The only way you can have all the probabilities equal to 1 is if all the events are different labels for essentially the same event.
 
  • #3
mathman said:
If you have N distinct, mutually exclusive events with equal probabilities, exactly one of which must occur, then the probability of each is 1/N. The only way you can have all the probabilities equal to 1 is if all the events are different labels for essentially the same event.

That's not my question. Perhaps I worded it incorrectly. Let's say there are B1, B2, B3, ... Bn events.

If P(B1)=P(B2)=P(B3)...=P(Bn)=1, then we can show that P(B1B2B3...Bn)=1. This can be done by induction.

My question is, will this still be true if the events are countably infinite? What if the events are uncountably infinite?
 
  • #4
If there are N events, only one of which can happen, and all of which are equally likely, then the probability of each is 1/N. But I don't think that is what was intended here.

If an event has probability 1, then it is "certain" to happen. Then all of them together are "certain" to happen. No, it doesn't matter whether the number of events is "finite", "countably infinite", or "uncountably infinite". But that is because "probability 1" is a very special situation.
 
  • #5
HallsofIvy said:
If there are N events, only one of which can happen, and all of which are equally likely, then the probability of each is 1/N. But I don't think that is what was intended here.

If an event has probability 1, then it is "certain" to happen. Then all of them together are "certain" to happen. No, it doesn't matter whether the number of events is "finite", "countably infinite", or "uncountably infinite". But that is because "probability 1" is a very special situation.

Is it really true that if the probability is 1 then the event is "certain" to happen? I see how this is true for countable sets, but uncountable? Perhaps I misunderstand the problem, but it seems to me that when dealing with continuous distributions, we can't really talk about the probability that a single event will occur (e.g. picking any number at random from the unit interval). The probability of picking .5 is 0, but this doesn't mean it can't happen. And the probability of picking something other than .5 is 1-(probability of picking .5)=1, but this doesn't mean that .5 can't be picked. Perhaps I have misunderstood something, though.
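
To make the example concrete (a sketch, assuming X is drawn uniformly from the unit interval):

[tex]
P(X = 0.5) = \int_{0.5}^{0.5} 1 \, dx = 0, \qquad P(X \neq 0.5) = 1 - P(X = 0.5) = 1,
[/tex]

yet the outcome X = 0.5 is not impossible; it is merely an event of probability zero.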
 
  • #6
HallsofIvy said:
If there are N events, only one of which can happen, and all of which are equally likely, then the probability of each is 1/N. But I don't think that is what was intended here.

If an event has probability 1, then it is "certain" to happen. Then all of them together are "certain" to happen. No, it doesn't matter whether the number of events is "finite", "countably infinite", or "uncountably infinite". But that is because "probability 1" is a very special situation.

I am not so sure about the uncountable case.

I don't know any probability theory but I know a little about Lebesgue measure. It seems to me that if you take, say, the set of irrationals in the unit interval, that set has measure 1. If I throw out countably many points, the resulting set still has measure one but it's missing a lot of points.

Now the trick is to see if we can find a collection of sets of measure 1 with the property that their intersection has less than measure 1. I don't know if we could arrange this with a countable collection. But since sigma algebras are not necessarily closed under uncountable intersections, it's not clear that the intersection of uncountably many sets of measure 1 is even measurable, let alone with measure 1. I'm pretty sure someone could whip up a counterexample in the uncountable case.

Must a countable intersection of sets of measure 1 have measure 1? And might an uncountable intersection of sets of measure 1 either fail to be measurable, or fail to have measure 1?

I'm on shaky ground here with respect to my own knowledge, so I'd be happy if someone could either develop or refute my idea.
 
Last edited:
  • #7
I'm conflicted. Intuitively it makes sense that it shouldn't matter whether the number of events is countable or uncountable, since we are told these events are certain to occur, and the question asks for the probability that all of these certain events occur together. Regardless of the number of events, they are all certain to occur.

On the other hand, I feel like I might not be taking something into account regarding uncountable infinity.
 
  • #8
brojesus111 said:
Given that there are N events, each with the same probability equal to 1 (i.e. P(B1)=P(B2)=P(B3)=...=P(BN)=1), we can show by induction that P(B1B2B3...BN)=1. If the collection of events is instead countably infinite or uncountably infinite, how does each case affect the conclusion that P(B1B2...BN)=1?

My initial thought is that the probabilities in a countably infinite collection can sum to 1, whereas with an uncountably infinite collection the individual probabilities would have to be 0. But in this instance we are given that P(B1)=P(B2)=...=P(BN)=1. Does it make a difference if the collection is countably infinite? How about if it is uncountably infinite?

If a set has probability 1 then it equals the whole space except for a set of measure zero.
If two sets have measure 1 then their intersection must be the whole space except for a set of measure zero.
 
  • #9
lavinia said:
If a set has probability 1 then it equals the whole space except for a set of measure zero.
If two sets have measure 1 then their intersection must be the whole space except for a set of measure zero.

Ah I've got it.

In the unit interval, the intersection of a countable collection of sets of measure 1 has a complement that's a countable union of sets of measure zero. That complement must have measure zero. So a countable intersection of measure-1 subsets of the unit interval has measure 1.

For a counterexample in the uncountable case, for each real [itex]\alpha \in [0,1][/itex] let [itex] X_\alpha = [0,1]\setminus \{\alpha\}[/itex]; that is, [itex]X_\alpha[/itex] is all the points in the unit interval except for [itex]\alpha[/itex].

Then each [itex]X_\alpha[/itex] has measure 1; but the intersection of all the [itex]X_\alpha[/itex]'s is empty. That's because each [itex]X_\alpha[/itex] is missing [itex]\alpha[/itex]. No point is in every [itex]X_\alpha[/itex] so the intersection's empty.

The key is that in the infinite situation, an event having probability 1 (or in the language of measure theory, a set of measure 1) can have a set of measure zero as exceptions. The rationals have measure zero in the reals, but there are still a lot of rationals and in fact they are dense in the reals.

Likewise, just because something has probability 1 doesn't mean it must happen! That's math for you :-)
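
Written out, the counterexample amounts to the observation (with [itex]\lambda[/itex] denoting Lebesgue measure on [itex][0,1][/itex]):

[tex]
\lambda(X_\alpha) = \lambda([0,1]) - \lambda(\{\alpha\}) = 1 \ \text{ for every } \alpha, \qquad \text{but} \qquad \bigcap_{\alpha \in [0,1]} X_\alpha = \emptyset,
[/tex]

since any candidate point [itex]x[/itex] is excluded from [itex]X_x[/itex].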
 
Last edited:
  • #10
So if we let Di = Bi^c (the complement of Bi), then P(D1)=P(D2)=...=P(Dn)=0, and since a countable union of null sets is a null set, P(D1∪D2∪...∪Dn)=0. By De Morgan's law this gives P(B1B2...Bn)=1. But with uncountable unions we are not even guaranteed a measurable union. Yes?
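
Spelled out, the countable case is just countable subadditivity followed by De Morgan (a sketch, with [itex]D_i = B_i^c[/itex] as above):

[tex]
P\!\left(\bigcup_{i=1}^{\infty} D_i\right) \leq \sum_{i=1}^{\infty} P(D_i) = 0, \qquad\text{so}\qquad P\!\left(\bigcap_{i=1}^{\infty} B_i\right) = 1 - P\!\left(\bigcup_{i=1}^{\infty} D_i\right) = 1.
[/tex]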
 
Last edited:
  • #11
brojesus111 said:
So if we let Di = Bi^c (the complement of Bi), then P(D1)=P(D2)=...=P(Dn)=0, and since a countable union of null sets is a null set, P(D1∪D2∪...∪Dn)=0. By De Morgan's law this gives P(B1B2...Bn)=1. But with uncountable unions we are not even guaranteed a measurable union. Yes?

Yes exactly, this is DeMorgan all the way.

The laws of set theory let us apply DeMorgan's laws to uncountable collections of sets. But the rules of sigma algebras -- the mathematical structures underlying probability theory -- only apply to countable collections of sets. So we have to be careful with uncountable collections of events.
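
For reference, the defining closure rules of a sigma algebra [itex]\mathcal{F}[/itex] on a sample space [itex]\Omega[/itex] are just these (the standard definition):

[tex]
\Omega \in \mathcal{F}, \qquad A \in \mathcal{F} \Rightarrow A^c \in \mathcal{F}, \qquad A_1, A_2, \ldots \in \mathcal{F} \Rightarrow \bigcup_{i=1}^{\infty} A_i \in \mathcal{F},
[/tex]

so only countable unions (and, via complements, countable intersections) are guaranteed to stay inside [itex]\mathcal{F}[/itex].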

In my counterexample above I constructed an uncountable collection of sets of measure 1 whose intersection is in fact measurable, but of measure zero (it is empty).

It's true that an uncountable union of measurable sets can be nonmeasurable, but nonmeasurable sets are very weird. We can prove that we can never give an explicit construction of one. You will never see anyone in math say, "here is a set X and it is nonmeasurable," together with an explicit description of what X looks like. It can't be done.

What we can do is prove that such a set exists, and this always requires the Axiom of Choice.

But you'll never see an explicit one.
 
  • #12
SteveL27 said:
Yes exactly, this is DeMorgan all the way.

The laws of set theory let us apply DeMorgan's laws to uncountable collections of sets. But the rules of sigma algebras -- the mathematical structures underlying probability theory -- only apply to countable collections of sets. So we have to be careful with uncountable collections of events.

In my counterexample above I constructed an uncountable collection of sets of measure 1 whose intersection is in fact measurable, but of measure zero (it is empty).

It's true that an uncountable union of measurable sets can be nonmeasurable, but nonmeasurable sets are very weird. We can prove that we can never give an explicit construction of one. You will never see anyone in math say, "here is a set X and it is nonmeasurable," together with an explicit description of what X looks like. It can't be done.

What we can do is prove that such a set exists, and this always requires the Axiom of Choice.

But you'll never see an explicit one.

Cool, thanks.

I appreciate everyone's contribution to this thread.
 
  • #13
SteveL27 said:
It's true that an uncountable union of measurable sets can be nonmeasurable, but nonmeasurable sets are very weird. We can prove that we can never give an explicit construction of one. You will never see anyone in math say, "here is a set X and it is nonmeasurable," together with an explicit description of what X looks like. It can't be done.

What we can do is prove that such a set exists, and this always requires the Axiom of Choice.

But you'll never see an explicit one.

Careful. The correct statement is: every construction of a Lebesgue non-measurable set requires the Axiom of Choice. If you are talking about the Borel structure, then we can give an explicit construction of a non-measurable set. In measure theory, the Borel sigma algebra of a Polish space is so important that it has a special name: "standard Borel space".
 
  • #14
brojesus111 said:
Given that there are N events, each with the same probability equal to 1 (i.e. P(B1)=P(B2)=P(B3)=...=P(BN)=1), we can show by induction that P(B1B2B3...BN)=1. If the collection of events is instead countably infinite or uncountably infinite, how does each case affect the conclusion that P(B1B2...BN)=1?

My initial thought is that the probabilities in a countably infinite collection can sum to 1, whereas with an uncountably infinite collection the individual probabilities would have to be 0. But in this instance we are given that P(B1)=P(B2)=...=P(BN)=1. Does it make a difference if the collection is countably infinite? How about if it is uncountably infinite?

If you want to think about probabilities of infinite subsets then there is no substitute for measure theory and the Kolmogorov axioms. These are both quite simple, though rather abstract.

On the other hand, if you try to get by without measure theory and Kolmogorov, you will wander in confusion and never get anywhere.
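
For reference, the Kolmogorov axioms really are short: a probability space is a triple [itex](\Omega, \mathcal{F}, P)[/itex] with [itex]\mathcal{F}[/itex] a sigma algebra on [itex]\Omega[/itex] and [itex]P[/itex] satisfying

[tex]
P(A) \geq 0 \ \text{ for all } A \in \mathcal{F}, \qquad P(\Omega) = 1, \qquad P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i) \ \text{ for pairwise disjoint } A_i \in \mathcal{F}.
[/tex]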
 

Question 1: What is the difference between countably infinite and uncountably infinite?

Countably infinite refers to a set whose elements can be listed in a sequence indexed by the natural numbers (that is, put into one-to-one correspondence with them), while uncountably infinite refers to a set for which no such listing is possible.

Question 2: Can you give an example of a countably infinite set?

Yes, the set of all positive integers (1, 2, 3, ...) is an example of a countably infinite set.

Question 3: How can you prove that a set is uncountably infinite?

One way to prove that a set is uncountably infinite is by a proof by contradiction: assume the set can be listed as a sequence indexed by the natural numbers and show that this assumption leads to a contradiction, as in Cantor's diagonal argument sketched below.
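
For example (a sketch of Cantor's diagonal argument for the reals in [itex](0,1)[/itex]): given any proposed list [itex]x_1, x_2, x_3, \ldots[/itex] of decimal expansions, build a new number

[tex]
y = 0.d_1 d_2 d_3 \ldots, \qquad d_n = \begin{cases} 5 & \text{if the } n\text{th digit of } x_n \text{ is not } 5, \\ 6 & \text{if the } n\text{th digit of } x_n \text{ is } 5, \end{cases}
[/tex]

which differs from every [itex]x_n[/itex] in its [itex]n[/itex]th digit, contradicting the assumption that the list was complete.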

Question 4: Is it possible for a set to be both countably infinite and uncountably infinite?

No, a set can only be either countably infinite or uncountably infinite, but not both at the same time. This is because the definitions of these terms are mutually exclusive.

Question 5: How are countably infinite and uncountably infinite sets related to the concept of infinity?

Countably infinite and uncountably infinite sets are both infinite, meaning they cannot be put into one-to-one correspondence with any finite set. They differ in whether their elements can be listed using the natural numbers: the elements of a countably infinite set can be arranged in such a list, while the elements of an uncountably infinite set cannot.
