Mistake using Borel-Cantelli Lemma

  • Thread starter logarithmic
  • #1
logarithmic
Suppose that [itex]X_i\sim N(0,1)[/itex] are iid, then by the Strong Law of Large Numbers the sample mean [itex]\bar{X}_n=\frac{1}{n}\sum_{i=1}^n X_i[/itex] converges almost surely to the mean, which in this case is 0.

Recall that by definition, [itex]\bar{X}_n \to 0[/itex] almost surely means that [itex]P(\lim_{n\to\infty}|\bar{X}_n-0|=0)=1[/itex]. Since a sequence converges to a value precisely when its limsup (and liminf) equal that value, we have [itex]P(\limsup_{n\to\infty}|\bar{X}_n-0|=0)=1[/itex].

But [itex]P(\limsup_{n\to\infty}|\bar{X_n}|=0)=P(|\bar{X_n}|=0\text{ i.o.})[/itex], where i.o. means infinitely often (this is the definition of i.o.). Now, [itex]\sum_n P(|\bar{X_n}|=0)=0<\infty[/itex], since [itex]\bar{X}_n[/itex] is a continuous random variable and the probability it takes any particular value is 0. But by the Borel-Cantelli Lemma this would imply that [itex]P(|\bar{X_n}|=0\text{ i.o.})=0[/itex], not 1, as the Strong Law of Large Numbers says.

I've been trying all day to find what's wrong with my argument. We can replace the normal distribution with any continuous distribution with finite mean. Can anyone help me?
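A quick numerical check (not part of the original argument) makes both halves of the puzzle concrete: the sample mean drifts toward 0, yet it is never exactly 0 at any finite n. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Simulate the setup from the post: X_i ~ N(0, 1) i.i.d. and the
# running sample mean Xbar_n = (X_1 + ... + X_n) / n.
rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
xbar = np.cumsum(x) / np.arange(1, n + 1)

# SLLN: the sample mean is close to 0 for large n...
print(abs(xbar[-1]))  # small (on the order of 1/sqrt(n))

# ...yet Xbar_n is never *exactly* 0 at any finite n along this path,
# which is consistent with P(|Xbar_n| = 0) = 0 for every n.
print(np.count_nonzero(xbar == 0.0))  # 0
```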
 
  • #2
The error is in this line:

logarithmic said:
But [itex]P(\limsup_{n\to\infty}|X_n|=0)=P(|X_n|=0 \text{ i.o.})[/itex]

It is entirely possible for [itex]\limsup_{n\rightarrow \infty} |X_n|=0[/itex] without having [itex]|X_n|=0[/itex] even once, let alone infinitely often (example: suppose that [itex]X_n=\frac{1}{n}[/itex]).
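The counterexample can be checked numerically on a long but finite truncation of the sequence x_n = 1/n (illustration only):

```python
# Check the counterexample x_n = 1/n on a long finite truncation:
# the tail supremum goes to 0 (so limsup |x_n| = 0), yet x_n == 0
# never occurs, let alone infinitely often.
N = 10_000
xs = [1 / n for n in range(1, N + 1)]

tail_sup = max(xs[N // 2:])              # sup over n > N/2, roughly 2/N
hits_zero = sum(1 for v in xs if v == 0)

print(tail_sup)   # about 0.0002
print(hits_zero)  # 0
```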
 
  • #3
I believe that equality you quoted is correct.

It's true by definition, it's the definition of i.o.:
http://planetmath.org/encyclopedia/Io.html

Alternatively, we could have just used the version of the Borel-Cantelli Lemma given at Wikipedia (http://en.wikipedia.org/wiki/Borel–Cantelli_lemma) on [itex]P(\limsup_{n\to\infty}|\bar{X_n}|=0)[/itex], without any reference to i.o.

Also, all the [itex]X_n[/itex]'s after the first one should be [itex]\bar{X}_n[/itex], I'll edit that in.
 
  • #4
logarithmic said:
I believe that equality you quoted is correct.

It's true by definition, it's the definition of i.o.:
http://planetmath.org/encyclopedia/Io.html

Also, all the [itex]X_n[/itex]'s after the first one should be [itex]\bar{X}_n[/itex], I'll edit that in.

No, it isn't. |X_n| = 0 i.o. means that for infinitely many n, |X_n|=0. This is very different from saying that limsup |X_n|=0.

You're apparently being confused by the fact that the planetmath definition uses the word limsup. But in the definition of i.o., the limsup operator is being applied to a sequence of events, not a sequence of values of random variables. This becomes clear if we write out the formal definition of the actual events being described. Assuming the r.v.s X_n are defined on the underlying probability space Ω, we have:

[tex]"\limsup_{n\rightarrow\infty} |\overline{X_n}|=0" = \{\omega\in \Omega:\limsup_{n\rightarrow\infty} |\overline{X_n}(\omega)|=0\}[/tex]

But

[tex]"|\overline{X_n}|=0 \text{ i.o.}"=\limsup \{\omega\in \Omega: |\overline{X_n}(\omega)|=0\}[/tex]

Do you see why these describe completely different sets in the probability space?
 
  • #5
Citan Uzuki said:
No, it isn't. |X_n| = 0 i.o. means that for infinitely many n, |X_n|=0. This is very different from saying that limsup |X_n|=0.

You're apparently being confused by the fact that the planetmath definition uses the word limsup. But in the definition of i.o., the limsup operator is being applied to a sequence of events, not a sequence of values of random variables. This becomes clear if we write out the formal definition of the actual events being described. Assuming the r.v.s X_n are defined on the underlying probability space Ω, we have:

[tex]"\limsup_{n\rightarrow\infty} |\overline{X_n}|=0" = \{\omega\in \Omega:\limsup_{n\rightarrow\infty} |\overline{X_n}(\omega)|=0\}[/tex]

But

[tex]"|\overline{X_n}|=0 \text{ i.o.}"=\limsup \{\omega\in \Omega: |\overline{X_n}(\omega)|=0\}[/tex]

Do you see why these describe completely different sets in the probability space?
So the event [itex]"\limsup_{n\rightarrow\infty} |\overline{X_n}|=0"[/itex]
is really the event [itex]"Y=0"[/itex], where [itex]Y:=\limsup_{n\rightarrow\infty}|\bar{X}_n|[/itex] is a random variable? And this is what is meant in the definition of almost sure convergence?

And [itex]"Y=0"[/itex] is not equal to [itex]"\limsup_{n\rightarrow\infty}\{|\bar{X}_n |=0\}":="|\bar{X}_n | \text{ i.o.}"[/itex], so that the Borel Cantelli Lemma can't be applied?
 
  • #6
logarithmic said:
So the event [itex]"\limsup_{n\rightarrow\infty} |\overline{X_n}|=0"[/itex]
is really the event [itex]"Y=0"[/itex], where [itex]Y:=\limsup_{n\rightarrow\infty}|\bar{X}_n|[/itex] is a random variable? And this is what is meant in the definition of almost sure convergence?

Yes, that's right.

And [itex]"Y=0"[/itex] is not equal to [itex]"\limsup_{n\rightarrow\infty}\{|\bar{X}_n |=0\}":="|\bar{X}_n | \text{ i.o.}"[/itex], so that the Borel Cantelli Lemma can't be applied?

You got it!
 
  • #7
logarithmic said:
Suppose that [itex]X_i\sim N(0,1)[/itex] are iid, then by the Strong Law of Large Numbers the sample mean [itex]\bar{X}_n=\sum_{i=1}^n X_i[/itex] converges almost surely to the mean, which in this case is 0.
[tex] \bar{X}_n = \frac{\sum_{i=1}^n X_i}{n} [/tex]
Recall that by definition, a.s. convergence means that [itex]P(\lim_{n\to\infty}|X_n-0|=0)=1[/itex].

You need the bar:
[tex] | \bar{X}_n - 0 | [/tex]

That's OK, provided you are using the notation [itex] | \bar{X}_n - 0| [/itex] to denote a set of outcomes in a probability space and "0" to denote a constant random variable. It would be clearer to write [itex] P(\{\omega \in \Omega [/itex] such that [itex] \lim_{n\to\infty}\bar{X}_n(\omega) = 0 \}) = 1 [/itex] where [itex] \Omega [/itex] is the set of outcomes in the probability space.

An interesting technicality here is "What probability space?". It can't be the probability space for a single normal random variable. One way to define the probability space is to say that for each [itex] n [/itex], it must be the product probability space that defines the vector of outcomes of [itex] n [/itex] normal random variables. Another way is to say that the outcomes of the probability space are the set of real numbers, but the measure on the space involves the convolution of [itex] n [/itex] normal distributions. Either way, the probability space for the event [itex] | \bar{X}_n - 0 | = 0 [/itex] is a different probability space for each [itex] n [/itex] since a probability space depends on the set of outcomes, the sigma field and the probability measure.

(It's also interesting that the definition hypothesizes that the event [itex] \{ \omega \in \Omega: \bar{X}_n(\omega) = 0 \} [/itex] is a measurable set for each [itex] n [/itex], which could be proven in your example, but I wonder about the general case.)

The Borel-Cantelli lemma deals with a sequence of events whose members are all from the same probability space. In the strong law of large numbers, each event in the sequence of events [itex] \{ \omega \in \Omega: \bar{X_n}(\omega) = 0 \} [/itex] is in a different probability space.

To get to a paradox, I think you must try to "embed" all these events in the same space.
 
  • #8
Citan Uzuki said:
Yes, that's right.



You got it!

In example 2 on page 2 of this link (http://math.arizona.edu/~jgemmer/bishopprobability3.pdf) it uses Borel-Cantelli for random variables in the same way I did, even though they've defined i.o. for sets only.
 
  • #9
logarithmic said:
In example 2 on page 2 of this link (http://math.arizona.edu/~jgemmer/bishopprobability3.pdf) it uses Borel-Cantelli for random variables in the same way I did, even though they've defined i.o. for sets only.

That example says the [itex] X_n [/itex] are independent identically distributed random variables. So each [itex] X_i [/itex] is defined in the same probability space and the measure in that space is the same measure.

In your example, the [itex]\bar{X_i} [/itex] are each defined in different probability spaces. In the statement of the theorems in that link, the [itex] A_n [/itex] can be different events but they are in the same probability space.
 
  • #10
logarithmic said:
In example 2 on page 2 of this link (http://math.arizona.edu/~jgemmer/bishopprobability3.pdf) it uses Borel-Cantelli for random variables in the same way I did, even though they've defined i.o. for sets only.

I really don't see how you came to that conclusion. The only place Borel-Cantelli is applied in that example is to the sequence of events [itex]X_n > \alpha \log (n)[/itex]. It isn't being applied to the values of [itex]\frac{X_n}{\log n}[/itex].
 
  • #11
What about the following argument. Let [itex]X_i[/itex] be standard Cauchy. It's a well known fact about the Cauchy distribution that [itex]\bar{X}_n[/itex] has the same distribution as [itex]X_1[/itex], i.e. the sample mean of standard Cauchy is standard Cauchy, for any n.

Let [itex]c \geq 0[/itex] be arbitrary.

Now [itex]P(\bar{X}_n > c) = P(X_1 > c)[/itex], which is some nonzero constant. So [itex]\sum_n P(\bar{X}_n > c) = \infty[/itex], and by Borel-Cantelli, [itex]P(\limsup\bar{X}_n > c)=1[/itex]. Since c is arbitrary, [itex]\limsup\bar{X}_n=\infty[/itex] (this was obvious because the Cauchy distribution has infinite mean).

But we can replace ">" with "<" and everything above (appears) to still work and we conclude that [itex]P(\limsup\bar{X}_n < c)=1[/itex], hence [itex]\limsup\bar{X}_n=-\infty[/itex]. Where have I made a mistake?

Source: http://www-stat.wharton.upenn.edu/~steele/Courses/530/HomeWorkSolutions/STAT%20530%20HW3%20Solutions.pdf (I think there's a typo in the 2nd equation in this link. The n on the denominator should be n^2).
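It may help to see the Cauchy sample mean's behaviour numerically first. A minimal simulation sketch (an illustration of the phenomenon, not a proof), assuming NumPy is available:

```python
import numpy as np

# Illustration only (not a proof): the running mean of i.i.d. standard
# Cauchy draws is itself standard Cauchy at every n, so it never
# settles down the way a finite-mean sample mean would.
rng = np.random.default_rng(1)
n = 1_000_000
x = rng.standard_cauchy(n)
xbar = np.cumsum(x) / np.arange(1, n + 1)

# Large excursions occur in *both* directions along a typical path,
# consistent with limsup Xbar_n = +inf and liminf Xbar_n = -inf
# (rather than limsup = -inf).
print(xbar.max(), xbar.min())
```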
 
  • #12
Apparently, you don't want to reply to any objections about different probability spaces, so let's put it this way. The [itex] \bar{X}_i [/itex] are not independent random variables. (Is [itex] \frac{ X_1 + X_2 + X_3}{3} [/itex] going to be independent of [itex] \frac{X_1 + X_2}{2} [/itex]?) And the [itex] \bar{X}_i [/itex] are not identically distributed. (Edit: OK, they are identically distributed in your example! I see now why you picked the Cauchy. But that doesn't overcome the objection that they are different non-independent random variables.)

Is there a version of the Borel - Cantelli lemma that applies to events [itex] A_i [/itex] where the probability of each event is computed by using a different random variable?
 
  • #13
Stephen Tashi said:
Apparently, you don't want to reply to any objections about different probability spaces, so let's put it this way. The [itex] \bar{X}_i [/itex] are not independent random variables. (Is [itex] \frac{ X_1 + X_2 + X_3}{3} [/itex] going to be independent of [itex] \frac{X_1 + X_2}{2} [/itex]?) And the [itex] \bar{X}_i [/itex] are not identically distributed. (Edit: OK, they are identically distributed in your example! I see now why you picked the Cauchy. But that doesn't overcome the objection that they are different non-independent random variables.)

Is there a version of the Borel - Cantelli lemma that applies to events [itex] A_i [/itex] where the probability of each event is computed by using a different random variable?
Borel-Cantelli doesn't require independence.

https://en.wikipedia.org/wiki/Borel–Cantelli_lemma The line above "Example".

And I don't think what you're saying is true, as it would seem to imply that Borel-Cantelli can't be used on sample means, when it's used to prove the Strong Law of Large Numbers and the strong convergence of the sample mean in inference. Why can't everything be on the same probability space?
 
  • #14
logarithmic said:
Borel-Cantelli doesn't require independence.

https://en.wikipedia.org/wiki/Borel–Cantelli_lemma The line above "Example".

That line refers to the events, not to what random variable is involved, but you are correct that my citing the non-independence of the random variables in your example is irrelevant. The point is that your example uses events defined by different random variables.

Let [itex] X [/itex] and [itex] Y [/itex] be random variables. Suppose they have the same probability density function. Are they "the same" random variable? - not necessarily. They would be the same if they both refer to the same experimental outcome, in which case they would always have the same realized value. But then a realization of their sample mean [itex] \frac{X(\omega) + Y(\omega)}{2} [/itex] would always equal [itex] \frac{2 X(\omega)}{2} = X(\omega) [/itex]. In your example, [itex] \bar{X}_2 [/itex] and [itex]\bar{X}_3[/itex] have the same distribution, but they are not the same random variable.

I think unravelling the paradox involves parsing the hypothesis of the lemma, which begins "Let [itex] (E_n) [/itex] be a sequence of events in a probability space...". If we define [itex] E_i [/itex] to be the event [itex] \{\omega: \bar{X}_i(\omega) > c \} [/itex] we haven't defined an event that involves the outcome of [itex] \bar{X}_{i+1} [/itex], so in order to get all the events [itex] E_n [/itex] to be in the same probability space, you must specify what values of [itex] \bar{X}_{i+1} [/itex] are outcomes in [itex] E_i [/itex]. You can say that [itex] E_i [/itex] is the event [itex] \{\omega: \bar{X}_i(\omega) > c [/itex] and for [itex] k \ne i, \bar{X}_k [/itex] can take any value [itex] \}[/itex]. However, then you must examine your assertion that [itex] P(E_i) [/itex] can be computed by a simple integration of the Cauchy distribution, because if we consider only finitely many of the [itex] \bar{X_n} [/itex] this is not an integration involving independent random variables where the integrand is a product of factors, each being a function of only one of the variables. Furthermore, the probability space for the example would involve outcomes that must be infinite sequences of real numbers since an outcome must specify the value of each [itex] \bar{X}_n [/itex].
 
  • #15
I don't think there is a paradox, as the argument using ">" is valid. It's shown in the link, and the conclusion that the limsup is infinite is definitely correct. Something only goes wrong when I change ">" to "<".

Your argument doesn't make sense. Why can't we have a probability space that contains all the [itex]\bar{X}_n[/itex]'s, and have them be dependent or independent or whatever we want? Obviously such a space exists, as it is very simple to simulate either of the two cases on a computer.

Your argument seems to imply that we can't make any statement relating to convergence of sample means, like stating the central limit theorem.
 
  • #16
Stephen, in all the problems we have been considering so far, the [itex]X_i[/itex] are specified to be independent identically distributed random variables. The independence assumption already tells us that they are defined on the same probability space, because otherwise it wouldn't make sense to talk about whether they are independent or not. The reason nobody has been replying to your statements about the r.v.s being defined on the same probability space is because they are addressing issues that simply are not present in this problem.

logarithmic said:
What about the following argument. Let [itex]X_i[/itex] be standard Cauchy. It's a well known fact about the Cauchy distribution that [itex]\bar{X}_n[/itex] has the same distribution as [itex]X_1[/itex], i.e. the sample mean of standard Cauchy is standard Cauchy, for any n.

Let [itex]c \geq 0[/itex] be arbitrary.

Now [itex]P(\bar{X}_n > c) = P(X_1 > c)[/itex] which is some nonzero constant. So [itex]\sum_n P(\bar{X}_n > c) = \infty[/itex], and by Borel-Cantelli, [itex]P(\limsup\bar{X}_n > c)[/itex]=1. Since c is arbitrary, [itex]\limsup\bar{X}_n=\infty[/itex] (this was obvious because the Cauchy distribution has infinite mean).

This is not true. Steve's right here, you do need independence here. Remember that there are two forms of the Borel-Cantelli lemma. The first form is the one that says [itex]\sum_{n=1}^{\infty} P(E_n) < \infty \Rightarrow P(E_n \text{ i.o.}) = 0[/itex], which does not require independence. The second form, which is the one being invoked here, is the one that states [itex]\sum_{n=1}^{\infty} P(E_n)=\infty \Rightarrow P(E_n \text{ i.o.}) = 1[/itex], and this form of the lemma does require independence. To see this, let U be a uniform 0-1 r.v. and let E_n be the event that [itex]U\leq \frac{1}{n}[/itex]. Then [itex]\sum_{n=1}^{\infty}P(E_n) = \sum_{n=1}^{\infty}\frac{1}{n} = \infty[/itex] but [itex]P(\limsup_{n\rightarrow \infty} E_n) = P(U=0) = 0[/itex].
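The counterexample can be made concrete with a few simulated draws (the loop and variable names below are just for illustration):

```python
import random

# A few simulated draws for the counterexample: U uniform on (0, 1),
# E_n = {U <= 1/n}.  sum_n P(E_n) = sum_n 1/n diverges, yet any
# realized u > 0 satisfies E_n only for n <= floor(1/u), i.e. only
# finitely often.
random.seed(42)
for _ in range(5):
    u = random.random()
    last_n = int(1 / u)  # E_n holds exactly for n = 1, ..., last_n
    print(f"u = {u:.4f}: E_n occurs only for n <= {last_n}")
```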

By the way, there is a mistake in that student's paper. The mean of the Cauchy distribution is undefined, not [itex]\infty[/itex].

But we can replace ">" with "<" and everything above (appears) to still work and we conclude that [itex]P(\limsup\bar{X}_n < c)=1[/itex], hence [itex]\limsup\bar{X}_n=-\infty[/itex]. Where have I made a mistake?

Well, for one thing, you're still mixing up the limsup of a sequence of events and the limsup of a sequence of random variables. In the paper, the student concludes (correctly) that [itex]P(\frac{|X_n|}{n} \geq c \text{ i.o.}) = 1[/itex], and then states that since c was arbitrary, [itex]P(\limsup_{n \rightarrow \infty} \frac{|X_n|}{n} = \infty)=1[/itex], which is true. There is, however, a missing step which the student did not write explicitly, but which is important. Namely, the inference that [itex]P(\frac{|X_n|}{n} \geq c \text{ i.o.}) = 1 \Rightarrow P(\limsup_{n\rightarrow \infty} \frac{|X_n|}{n} \geq c) = 1 [/itex]. You seem to think that [itex]\frac{|X_n|}{n} \geq c \text{ i.o.}[/itex] and [itex]\limsup_{n\rightarrow \infty} \frac{|X_n|}{n} \geq c[/itex] are the same event. They aren't. The event [itex]\frac{|X_n|}{n} \geq c \text{ i.o.}[/itex] is a subset of the event [itex]\limsup_{n\rightarrow \infty} \frac{|X_n|}{n} \geq c[/itex], so if the first one happens almost surely, then so does the second. But if the [itex]X_n[/itex] happen to take on the values [itex]c-\frac{1}{n}[/itex], then the second event occurs, but the first one does not. That point seems to be the source of all your confusion.
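That last point can be checked on the deterministic sequence x_n = c - 1/n (a finite truncation, illustration only):

```python
# The deterministic sequence x_n = c - 1/n separates the two events:
# "limsup x_n >= c" holds (the limsup is exactly c), but
# "x_n >= c infinitely often" fails (x_n >= c for no n at all).
c = 1.0
N = 100_000
xs = [c - 1 / n for n in range(1, N + 1)]

never_reaches_c = all(x < c for x in xs)
tail_sup = max(xs[N // 2:])   # approaches c from below

print(never_reaches_c)   # True
print(c - tail_sup)      # tiny gap: the limsup is c
```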
 
  • #17
Citan Uzuki said:
Stephen, in all the problems we have been considering so far, the [itex]X_i[/itex] are specified to be independent identically distributed random variables. The independence assumption already tells us that they are defined on the same probability space, because otherwise it wouldn't make sense to talk about whether they are independent or not.

I agree the [itex] X_i [/itex] are independent. If you want them "in" the same probability space then an outcome in that space must specify the values of each of the [itex] X_i [/itex].

But the [itex] \bar{X}_i [/itex] are not independent. The examples by logarithmic that purport to contradict the Borel-Cantelli lemma involve the [itex] \bar{X}_i [/itex].

Your objections to the examples by logarithmic may be valid but, in addition, a basic flaw in the examples is that the events [itex] E_n [/itex] (in the Wikipedia article) or [itex] A_n [/itex] (in the link) are supposed to be in the same probability space by the hypothesis of the lemma. In logarithmic's examples, they are not defined as being in the same space. (If they are in the same space, what is that space? What exactly is an outcome in this space? If the space is [itex] (\Omega, F, \mu) [/itex], what is an element of [itex] \Omega [/itex]? It can't be specified as a single real number because that doesn't define the values of each of the [itex] \bar{X}_i [/itex].)

For example, the probability space of a single random variable [itex] X_1 [/itex] isn't a probability space for the [itex] \bar{X}_2 [/itex] since an outcome such as [itex] X_1(\omega) = 2.5 [/itex] doesn't specify a value for [itex] X_2 [/itex], upon which [itex] \bar{X}_2[/itex] depends.
 
  • #18
In general, if a sequence of i.i.d. random variables [itex]X_1,X_2,\ldots[/itex] is specified, then the underlying probability space is [itex]\mathbb{R}^{\infty}[/itex], with the [itex]\sigma[/itex]-algebra [itex]\mathcal{F}[/itex] being the [itex]\sigma[/itex]-algebra generated by all sets of the form [itex]\prod_{n=1}^{N}A_n \times \prod_{n=N+1}^{\infty} \mathbb{R}[/itex] (where the [itex]A_n[/itex] are Borel subsets of [itex]\mathbb{R}[/itex]), and the probability measure being the unique probability measure such that [itex]\mu \left( \prod_{n=1}^{N}A_n \times \prod_{n=N+1}^{\infty}\mathbb{R} \right) = \prod_{n=1}^{N}P(A_n)[/itex], where [itex]P(A_n)[/itex] is computed with respect to the specified distribution of the [itex]X_n[/itex]. The existence and uniqueness of such a measure is guaranteed by the Kolmogorov extension theorem. This is what you do implicitly every time you have a sequence of i.i.d. random variables. Note that the mean of the first N [itex]X_n[/itex] is indeed a measurable function with respect to this [itex]\sigma[/itex]-algebra.
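A finite-dimensional sketch of this construction (the helper names `X` and `Xbar` are just for illustration):

```python
import numpy as np

# Finite-dimensional sketch: an outcome omega is a point of R^infinity
# (here truncated to a finite prefix), the coordinate projections
# X_n(omega) = omega[n-1] are the i.i.d. variables, and the sample
# mean Xbar_n depends only on the first n coordinates, so it is
# measurable for the product sigma-algebra.
rng = np.random.default_rng(7)

def X(n, omega):
    """n-th coordinate projection: the n-th random variable at omega."""
    return omega[n - 1]

def Xbar(n, omega):
    """Sample mean of the first n coordinates of omega."""
    return sum(X(k, omega) for k in range(1, n + 1)) / n

omega = rng.standard_normal(1000)  # one (truncated) outcome
print(Xbar(3, omega) == (omega[0] + omega[1] + omega[2]) / 3)  # True
```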
 
  • #19
Stephen Tashi said:
But the [itex] \bar{X}_i [/itex] are not independent. The examples by logarithmic that purport to contradict the Borel-Cantelli lemma involve the [itex] \bar{X}_i [/itex].

Yes, that's correct. The paper he's quoting, however, talks about the [itex]X_i[/itex] themselves, and I want him to understand why what the paper is doing is (essentially, there are some minor mistakes, but they are fixable) valid, and what he's doing is not.
 
  • #20
Citan Uzuki said:
Stephen, in all the problems we have been considering so far, the [itex]X_i[/itex] are specified to be independent identically distributed random variables. The independence assumption already tells us that they are defined on the same probability space, because otherwise it wouldn't make sense to talk about whether they are independent or not. The reason nobody has been replying to your statements about the r.v.s being defined on the same probability space is because they are addressing issues that simply are not present in this problem.
This is not true. Steve's right here, you do need independence here. Remember that there are two forms of the Borel-Cantelli lemma. The first form is the one that says [itex]\sum_{n=1}^{\infty} P(E_n) < \infty \Rightarrow P(E_n \text{ i.o.}) = 0[/itex], which does not require independence. The second form, which is the one being invoked here, is the one that states [itex]\sum_{n=1}^{\infty} P(E_n)=\infty \Rightarrow P(E_n \text{ i.o.}) = 1[/itex], and this form of the lemma does require independence. To see this, let U be a uniform 0-1 r.v. and let E_n be the event that [itex]U\leq \frac{1}{n}[/itex]. Then [itex]\sum_{n=1}^{\infty}P(E_n) = \sum_{n=1}^{\infty}\frac{1}{n} = \infty[/itex] but [itex]P(\limsup_{n\rightarrow \infty} E_n) = P(U=0) = 0[/itex].

By the way, there is a mistake in that student's paper. The mean of the Cauchy distribution is undefined, not [itex]\infty[/itex].
Well, for one thing, you're still mixing up the limsup of a sequence of events and the limsup of a sequence of random variables. In the paper, the student concludes (correctly) that [itex]P(\frac{|X_n|}{n} \geq c \text{ i.o.}) = 1[/itex], and then states that since c was arbitrary, [itex]P(\limsup_{n \rightarrow \infty} \frac{|X_n|}{n} = \infty)=1[/itex], which is true. There is however, a missing step which the student did not write explicitly, but which is important. Namely, the inference that [itex]P(\frac{|X_n|}{n} \geq c \text{ i.o.}) = 1 \Rightarrow P(\limsup_{n\rightarrow \infty} \frac{|X_n|}{n} \geq c) = 1 [/itex]. You seem to think that [itex]\frac{|X_n|}{n} \geq c \text{ i.o.}[/itex] and [itex]\limsup_{n\rightarrow \infty} \frac{|X_n|}{n} \geq c[/itex] are the same event. They aren't. The event [itex]\frac{|X_n|}{n} \geq c \text{ i.o.}[/itex] is a subset of the event [itex]\limsup_{n\rightarrow \infty} \frac{|X_n|}{n} \geq c[/itex], so if the first one happens almost surely, then so does the second. But if the [itex]X_n[/itex] happen to take on the values [itex]c-\frac{1}{n}[/itex], then the second event occurs, but the first one does not. That point seems to be the source of all your confusion.
Thanks. How would we show that [itex]P(\liminf \bar{X}_n < c) = 1[/itex]? I think this is [itex]P(\bar{X}_n < c \text{ e.v.}) = 1 - P(\bar{X}_n > c \text{ i.o.}) = 1 - 1 = 0[/itex], not 1.
 
  • #21
Citan Uzuki said:
In general, if a sequence of i.i.d. random variables [itex]X_1,X_2,\ldots[/itex] is specified, then the underlying probability space is [itex]\mathbb{R}^{\infty}[/itex],...

I agree. And my point is that in the similar infinite dimensional probability space needed for the [itex] \bar{X}_i [/itex] , describing a set by a requirement like [itex] \bar{X}_3 < 7.3 [/itex] fails to completely specify an event in that probability space unless some conventions are stated about what values the other [itex] \bar{X}_i [/itex] may take.
 
  • #22
Stephen Tashi said:
I agree. And my point is that in the similar infinite dimensional probability space needed for the [itex] \bar{X}_i [/itex] , describing a set by a requirement like [itex] \bar{X}_3 < 7.3 [/itex] fails to completely specify an event in that probability space unless some conventions are stated about what values the other [itex] \bar{X}_i [/itex] may take.

By definition, [itex]\overline{X}_3 = \frac{1}{3}(X_1 + X_2 + X_3)[/itex]. The set [itex]\{ \omega : \frac{1}{3}(X_1(\omega) + X_2(\omega) + X_3(\omega)) < 7.3 \}[/itex] is certainly a well-defined measurable set, seeing as how it is the preimage of a measurable set under the measurable function [itex]\omega \mapsto \frac{1}{3}(X_1(\omega) + X_2(\omega) + X_3(\omega))[/itex]. So I'm having some difficulty understanding what you think the problem is here.

logarithmic said:
Thanks. How would we show that [itex]P(\liminf \bar{X}_n < c) = 1[/itex].

I think you mean [itex]X_n[/itex], not [itex]\overline{X}_n[/itex]. Since the [itex]\overline{X}_n[/itex] are not independent, the relevant case of the Borel-Cantelli theorem doesn't apply, and I'm not sure whether it's true that [itex]P(\liminf \overline{X}_n < c) = 1[/itex]. For the [itex]X_n[/itex] though, we note that by symmetry, [itex]\sum_{n=1}^{\infty} P(X_n < c) = \sum_{n=1}^{\infty} P(X_n > -c) = \infty[/itex], so by independence and Borel-Cantelli, we have [itex]P(X_n < c \text{ i.o.})=1[/itex]. Of course if for infinitely many [itex]n[/itex], [itex]X_n < c[/itex], then we have [itex]\liminf X_n < c[/itex], so since the first event happens almost surely, so too does the second.

I think this is [itex]P(\bar{X}_n < c \text{ e.v.}) = 1 - P(\bar{X}_n > c \text{ i.o.}) = 1 - 1 = 0[/itex], not 1.

What do you mean by "e.v." in this context?
 
  • #23
Citan Uzuki said:
The set [itex]\{ \omega : \frac{1}{3}(X_1(\omega) + X_2(\omega) + X_3(\omega)) < 7.3 \}[/itex] is certainly a well-defined measurable set,...

It's only a well-defined set in [itex] \mathbb{R}^\infty [/itex] if we have the understanding that it defines an event where the other [itex] \bar{X}_i [/itex] can take on any values. I'm just trying to get confirmation that we have this understanding.
 
  • #24
We do.
 
  • #25
e.v. means eventually.

For sets S_n, [itex]\{S_n \text{ e.v.}\}:= \bigcup_{n\in \mathbb{N}}\bigcap_{m\geq n}S_m[/itex].
 
  • #26
Ah, I see. In that case you are correct that [itex]P(\overline{X}_n < c \text{ e.v.})=0[/itex], but this is irrelevant, since we may have [itex]\liminf_{n\rightarrow \infty} \overline{X}_n < c[/itex] even if we do not have [itex]\overline{X}_n < c \text{ e.v.}[/itex].
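A concrete deterministic sequence showing this (illustration only):

```python
# Deterministic illustration: x_n alternates between c - 1 and c + 1.
# Then liminf x_n = c - 1 < c, yet "x_n < c eventually" fails, since
# x_n >= c for infinitely many n.
c = 0.0
N = 1_000
xs = [c - 1 if n % 2 == 0 else c + 1 for n in range(N)]

tail_inf = min(xs[N // 2:])  # liminf is c - 1
below_c_eventually = any(all(x < c for x in xs[n:]) for n in range(N))

print(tail_inf < c)          # True
print(below_c_eventually)    # False
```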
 

1. What is the Borel-Cantelli Lemma?

The Borel-Cantelli Lemma is a pair of results in probability theory about a sequence of events [itex]E_n[/itex] in a common probability space. The first part states that if [itex]\sum_n P(E_n) < \infty[/itex], then with probability 0 infinitely many of the events occur. The second part states that if the events are independent and [itex]\sum_n P(E_n) = \infty[/itex], then with probability 1 infinitely many of them occur.

2. How is the Borel-Cantelli Lemma used in science?

The Borel-Cantelli Lemma is commonly used in probability and statistics to establish almost sure convergence of sequences of random variables. For example, it appears in standard proofs of the Strong Law of Large Numbers.

3. Can the Borel-Cantelli Lemma be applied to any sequence of events?

No. The first (convergence) part applies to any sequence of events in a single probability space, but the second (divergence) part also requires the events to be independent; without independence, a divergent sum of probabilities does not force the events to occur infinitely often.

4. What are some common mistakes when using the Borel-Cantelli Lemma?

A common mistake, and the one discussed in this thread, is confusing the limsup of a sequence of events (the "infinitely often" event) with an event defined by the limsup of a sequence of random variables. Other mistakes include invoking the divergence part without independence, and miscalculating the probabilities of the individual events.

5. How can I avoid making mistakes when using the Borel-Cantelli Lemma?

Carefully check the hypotheses: all the events must be in one probability space, and independence is required for the divergence part. Writing the events out explicitly as subsets of the sample space, as done in this thread, helps expose any confusion between events and values of random variables.
