Convergence in measure vs. almost sure convergence

In summary, the difference between convergence in probability and almost sure convergence of a sequence of random variables lies in what the probability statement refers to. Almost sure convergence requires that the set of outcomes on which the sequence fails to converge have probability zero, while convergence in probability only controls, for each fixed tolerance, the probability of the set of points where the sequence differs significantly from the limit at a given index. This means that almost sure convergence implies convergence in probability, but the converse is not necessarily true. The distinction can be loosely compared to that between pointwise and uniform convergence of sequences of functions in real analysis.
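A quick sketch of why the implication in the summary holds (a standard argument, not spelled out in the thread): if [tex]X_n \rightarrow X[/tex] almost surely, then for any [tex]\epsilon > 0[/tex] the indicators [tex]1_{\{|X_n - X| > \epsilon\}}[/tex] tend to 0 almost surely and are bounded by 1, so by dominated convergence

[tex]P\left(|X_n - X| > \epsilon\right) = E\left[1_{\{|X_n - X| > \epsilon\}}\right] \rightarrow 0,[/tex]

which is exactly convergence in probability.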
  • #1
cappadonza
Hi all
I am struggling to see the difference between convergence in probability and almost sure convergence of a sequence of random variables.
From what I can see, almost sure convergence of a sequence of random variables is very similar to pointwise convergence from real analysis.

I am struggling to see why almost sure convergence is different from convergence in probability.
More so, why does almost sure convergence imply convergence in probability, whereas the converse is not true? I have seen some counterexamples but still don't grasp the concept.

Am I confusing myself by thinking convergence in probability vs. almost sure convergence is analogous to pointwise convergence vs. uniform convergence of a sequence of functions?

Sorry if this question is not clear, I'm kind of lost here.

Any pointers would be much appreciated.
 
  • #2
cappadonza said:
Hi all, I am struggling to see why almost sure convergence is different from convergence in probability.
More so, why does almost sure convergence imply convergence in probability, whereas the converse is not true? I have seen some counterexamples but still don't grasp the concept.

In real analysis, convergence "almost everywhere" means convergence for all values except on a set of measure zero.

In probability theory, "almost surely" takes randomness into account: for a long sequence of realizations of some random variable X drawn from a population P, the sample mean of X fails to converge to the population mean of P only with probability 0. This does not mean that such an event is impossible; it just happens so rarely that no non-zero probability can be assigned to its occurrence.
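Written out in symbols (standard notation, not given explicitly in the post), almost sure convergence of [tex]X_n[/tex] to [tex]X[/tex] means

[tex]P\left(\{\omega \in \Omega : \lim_{n\rightarrow\infty} X_n(\omega) = X(\omega)\}\right) = 1,[/tex]

i.e. the set of outcomes on which the sequence fails to converge has probability zero.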

Here's a more formal treatment of the subject:

http://www.stat.tamu.edu/~suhasini/teaching673/asymptotics.pdf
 
  • #3
Thanks.
I understand almost sure convergence; I still don't understand convergence in probability.
 
  • #4
cappadonza said:
Thanks.
I understand almost sure convergence; I still don't understand convergence in probability.

Sure convergence (convergence at every point of the sample space), as opposed to almost sure convergence, is not used much in probability theory, although it can be defined:

[tex]\{\omega\in\Omega \mid \lim_{n\rightarrow\infty} X_{n}(\omega)=X(\omega)\}=\Omega[/tex]

Since it is stated for every point of the sample space [tex]\Omega[/tex], the issue of sets with probability zero does not arise.

http://en.wikipedia.org/wiki/Convergence_of_random_variables
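For reference, the standard definition of convergence in probability (it is also the one given in the linked article): [tex]X_n \rightarrow X[/tex] in probability means that for every [tex]\epsilon > 0[/tex]

[tex]\lim_{n\rightarrow\infty} P\left(|X_n - X| > \epsilon\right) = 0.[/tex]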
 
  • #5
I suppose you know the definition of convergence in probability (or, in general, in measure) of a sequence [tex]f_n[/tex] of functions:

[tex]f_n[/tex] converges in measure to [tex]f[/tex] if for any [tex]\epsilon>0[/tex] and [tex]\delta>0[/tex], [tex]\mu\left(\{|f_n-f|>\delta\}\right)<\epsilon[/tex] for [tex]n[/tex] large enough.

So attention is not focused on any single point; all that matters is the measure of the set of points where [tex]f_n[/tex] differs significantly from [tex]f[/tex], even if the sequence does not converge anywhere!

For example, consider a sequence of rectangular-pulse-shaped functions whose supports have measure (probability) [tex]1/n[/tex], each pulse located at random inside [0,1]. The sequence does not converge at any single point (the randomness implies that any given point lies in the support of infinitely many pulses), yet the measure (probability) of the set where the functions differ from the zero function goes to zero. So the sequence converges in probability to 0 (see the simulation sketch below).

Keep in mind: for convergence in probability, all that matters is the measure (probability) of the set of points where the sequence differs from the limit, not the points themselves.
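To make the pulse example concrete, here is a small simulation sketch (my own illustration, not from the thread): it uses numpy, a uniform grid of points standing in for the outcomes ω of Ω = [0,1], and pulses of length 1/n placed uniformly at random and wrapped around modulo 1.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Grid of sample points standing in for outcomes omega in [0, 1).
omegas = np.linspace(0.0, 1.0, 2001, endpoint=False)
hits = np.zeros_like(omegas)        # total number of times X_n(omega) = 1
late_hits = np.zeros_like(omegas)   # hits during the second half of the run

N = 20000
for n in range(1, N + 1):
    width = 1.0 / n                  # the pulse support has measure 1/n
    start = rng.uniform(0.0, 1.0)    # pulse placed uniformly at random, wrapping mod 1
    in_pulse = ((omegas - start) % 1.0) < width   # X_n(omega) = 1 exactly on the pulse
    hits += in_pulse
    if n > N // 2:
        late_hits += in_pulse

# Convergence in probability: P(|X_N - 0| > eps) equals the pulse measure 1/N -> 0.
print("measure of the set where X_N = 1:", 1.0 / N)

# No pointwise (almost sure) convergence: even in steps N/2..N, many omegas are
# hit again, and every omega has been hit at least once overall.
print("fraction of omegas hit in steps N/2..N:", float(np.mean(late_hits > 0)))
print("minimum number of hits over all omegas:", int(hits.min()))
[/code]

The first printout shows the measure of the set where X_N differs from 0 shrinking like 1/N (convergence in probability to 0), while the other two show that, even late in the run, roughly half of the sample points get hit again and every point has been hit at least once, so there is no pointwise limit.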
 

What is convergence in measure?

Convergence in measure is a mode of convergence of a sequence of random variables (or measurable functions) in which, for every fixed positive tolerance, the probability that the random variable differs from its limit by more than that tolerance tends to zero as the index of the sequence grows. When the underlying measure is a probability measure, it is usually called convergence in probability.

What is almost sure convergence?

Almost sure convergence is a stronger mode of convergence of a sequence of random variables: with probability one, the realized sequence converges to the limit, i.e. the set of outcomes on which the sequence fails to converge has probability zero.

What is the difference between convergence in measure and almost sure convergence?

The main difference is what the probability statement refers to. Almost sure convergence says that, with probability one, the realized sequence actually converges to its limit, while convergence in measure only says that, for each fixed tolerance, the probability of deviating from the limit by more than that tolerance tends to zero; it does not require any individual realization to converge.
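Side by side (standard notation, added here for clarity):

[tex]X_n \rightarrow X \text{ almost surely:} \quad P\left(\lim_{n\rightarrow\infty} X_n = X\right) = 1[/tex]

[tex]X_n \rightarrow X \text{ in measure (probability):} \quad \lim_{n\rightarrow\infty} P\left(|X_n - X| > \epsilon\right) = 0 \text{ for every } \epsilon > 0[/tex]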

Which type of convergence is stronger?

Almost sure convergence is stronger than convergence in measure: it implies convergence in measure (with respect to a probability measure), whereas the converse fails in general, as the pulse example above shows.

When is convergence in measure preferred over almost sure convergence?

Convergence in measure may be preferred when it is all that can be established: because it is a weaker condition, it holds in a wider range of situations, including for sequences of random variables that are difficult to analyze pointwise.
