Intuitive Difference Between Weak And Strong Convergence in Probability

In summary: How can the statement that the probability of |X_{n}-X| exceeding some epsilon goes to 0 for large n still allow that difference to jump above epsilon infinitely many times? The key is where the limit sits. For convergence in probability (the weak notion), the limit is taken outside the probability, so each n is judged separately: the probability of an exceedance at time n shrinks to 0, but exceedances may keep occurring at ever later times. For almost sure (strong) convergence, the limit is taken inside the probability, so almost every sample path must itself converge: for almost every omega there is some N after which |X_{n}(omega)-X(omega)| never exceeds epsilon again.
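A standard restatement (textbook material, not quoted from the thread) makes the gap explicit: almost sure convergence is equivalent to requiring, for every epsilon,

[tex]\mathbb{P}\left\{\omega : |X_{n}(\omega)-X(\omega)|>\epsilon \text{ for infinitely many } n\right\}=0[/tex]

whereas convergence in probability only controls each n separately, so exceedances may recur indefinitely as long as their individual probabilities shrink to 0.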
  • #1
IniquiTrance
I've seen numerous rigorous/conceptual explanations of the difference between convergence in probability (weak) and strong, almost sure convergence.

One explanation my prof gave was that convergence in probability entails:

[tex]\lim_{n \rightarrow \infty} \mathbb{P}\left\{|X_{n}-X|<\epsilon\right\}=1[/tex]

or equivalently:

[tex]\lim_{n \rightarrow \infty} \mathbb{P}\left\{\omega:|X_{n}(\omega)-X(\omega)|>\epsilon\right\}=0[/tex]

While strong convergence means:

[tex]\mathbb{P}\left\{\lim_{n \rightarrow \infty} X_{n}=X\right\}=1[/tex]

So my prof explains one difference is the limit is taken outside the probability for convergence in probability, while it is inside the probability for almost sure convergence.

Can anyone elaborate on this?

Also, the limits in both forms of convergence seem to imply the same thing.

How is it that saying the probability that [itex]|X_{n}-X|[/itex] is greater than some epsilon goes to 0 for large n implies that the difference can jump above epsilon infinitely many times?

And if it does, how is it that saying the probability of it staying below epsilon goes to 1, as n goes to infinity, implies that it CAN'T jump above epsilon EVER, after some n? (And is thus a somehow stronger form of convergence).

Thanks!
 
  • #2
IniquiTrance said:
I've seen numerous rigorous/conceptual explanations of the difference between convergence in probability (weak) and strong, almost sure convergence.

One explanation my prof gave was that convergence in probability entails:

[tex]\lim_{n \rightarrow \infty} \mathbb{P}\left\{|X_{n}-X|<\epsilon\right\}=1[/tex]

While strong convergence means:

[tex]\mathbb{P}\left\{\lim_{n \rightarrow \infty} X_{n}=X\right\}=1[/tex]

So my prof explains one difference is the limit is taken outside the probability for convergence in probability, while it is inside the probability for almost sure convergence.

Can anyone elaborate on this?
Thanks!

My intuitive understanding is that strong convergence of a probability is analogous to sampling real numbers from the interval [0,1], say by Dedekind cuts. Every real number has a uniform probability of zero of being 'drawn' but the sum (density) of probabilities over the interval [0,1] is 1. Even though the probability of any real number is zero, a real number is always chosen by a Dedekind cut on the interval.

Weak convergence is analogous to the probability density of an event under an infinite continuous distribution. An infinite distribution cannot be uniform. Given that some events have a nonzero probability density, then for every event there can be an event with a smaller nonzero probability density. Note that the closed interval [0,1] is finitely bounded by, and includes, 0 and 1, while the range of the Gaussian pdf is not finitely bounded.

BTW, my intuition based on this example may be too restrictive or even wrong, in which case I'm sure someone will jump in and correct me. I responded because your post has gone unanswered for a while. Essentially, my understanding is that strong convergence of a probability is defined in terms of a sample space, while almost everywhere convergence is defined in terms of a pdf.
 
  • #3
IniquiTrance said:
... So my prof explains one difference is the limit is taken outside the probability for convergence in probability, while it is inside the probability for almost sure convergence.

Can anyone elaborate on this?

Also, the limits in both forms of convergence seem to imply the same thing.

One important difference is that the strong limit need not even exist when the weak one does. A neat example is given on Wikipedia with an archer doing target practice: if X(n)=1 is a hit and X(n)=0 is a miss, then the probability of missing decreases as they practice (weak convergence to X=1), but there is always a non-zero chance of missing (no strong convergence).
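A concrete version of this is a standard textbook example (not from the thread): take independent X_n with P(X_n = 1) = 1/n and X_n = 0 otherwise. Then P(|X_n - 0| > epsilon) = 1/n goes to 0, so X_n converges to 0 in probability; but since the harmonic series diverges, the second Borel-Cantelli lemma says X_n = 1 infinitely often almost surely, so there is no almost sure convergence. A seeded simulation sketch:

```python
import random

random.seed(0)

# Independent X_n with P(X_n = 1) = 1/n: converges to 0 in probability,
# but (second Borel-Cantelli lemma) X_n = 1 infinitely often almost surely.

# 1) Convergence in probability: estimate P(X_n = 1) at an early and a late n.
trials = 20000
p_early = sum(random.random() < 1 / 2 for _ in range(trials)) / trials    # n = 2
p_late = sum(random.random() < 1 / 500 for _ in range(trials)) / trials   # n = 500
print(p_early, p_late)  # the exceedance probability shrinks as n grows

# 2) No almost sure convergence: along a single sample path, exceedances
# keep happening; their count up to N grows like the harmonic sum ~ log N.
hits = [n for n in range(1, 100001) if random.random() < 1 / n]
print(len(hits), hits[-1])  # expected count is about ln(1e5) ~ 11.5
```

The point of the second part is that even though each individual exceedance becomes very unlikely, some exceedance almost surely occurs arbitrarily late in the sequence.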
 

What is the difference between weak and strong convergence in probability?

Weak and strong convergence are two different ways of describing how a sequence of random variables approaches a limit. In this context, weak convergence means convergence in probability: for every epsilon > 0, the probability that X_n differs from the limit X by more than epsilon goes to 0 as n grows. Strong convergence means almost sure convergence: with probability 1, the sample path X_n(ω) itself converges to X(ω).

How are weak and strong convergence related?

Weak convergence is a weaker condition than strong convergence. This means that if a sequence of random variables converges strongly, it also converges weakly. However, the converse is not always true – a sequence may converge weakly but not strongly.
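A classic construction showing that the converse can fail is the "typewriter" (sliding indicator) sequence on [0,1) with omega uniform; this is standard textbook material, not from the thread, and it can be checked numerically:

```python
# "Typewriter" sequence on [0, 1): X_n is the indicator of an interval of
# width 2^-k that sweeps across [0, 1) as n runs through level k.
# P(X_n = 1) = 2^-k -> 0, so X_n -> 0 in probability; yet every omega is
# covered once per level, so X_n(omega) = 1 infinitely often (no a.s. limit).

def X(n, omega):
    k = n.bit_length() - 1          # level: n = 2^k + j with 0 <= j < 2^k
    j = n - (1 << k)
    width = 1.0 / (1 << k)
    return 1 if j * width <= omega < (j + 1) * width else 0

omega = 0.3
hits = sum(X(n, omega) for n in range(1, 1 << 10))
print(hits)  # -> 10: exactly one hit per level k = 0..9, and counting
```

Each level k partitions [0,1) into 2^k intervals, so any fixed omega is hit exactly once per level; the hits never stop, even though the probability of a hit at any single n tends to 0.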

When is weak convergence used?

Weak convergence is often used when we are interested in the limiting behavior of a sequence of random variables, rather than the specific values of the random variables themselves. It is also useful for proving theorems and establishing properties of random variables.

In what situations is strong convergence more appropriate?

Strong convergence is typically used when we want to measure the actual values of the random variables and how they approach a limit. It is useful for studying the convergence of sums or averages of random variables.
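For instance, the strong law of large numbers is an almost-sure statement about running averages. A seeded sketch (illustrative, not from the thread):

```python
import random

random.seed(1)

# Strong law of large numbers: the running average of i.i.d. fair coin
# flips converges to 1/2 almost surely, i.e. along (almost) every path.
flips = [random.randint(0, 1) for _ in range(100000)]
total = 0
averages = []
for i, x in enumerate(flips, start=1):
    total += x
    averages.append(total / i)

print(averages[99], averages[9999], averages[-1])  # settling toward 0.5
```

Here a single sample path is followed all the way, which is exactly the pathwise (almost sure) point of view, as opposed to checking a probability at each fixed n.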

How can weak and strong convergence be visualized?

One way to visualize weak and strong convergence is by using graphs of the probability distributions of the random variables. Weak convergence can be seen as the distributions becoming increasingly similar as the sequence approaches the limit, while strong convergence can be seen as the actual values of the random variables approaching the limit value.
