Convergence in measure vs almost sure convergence

  1. Sep 2, 2010 #1
    Hi all,
    I am struggling to see the difference between convergence in probability and almost sure convergence of a sequence of random variables.
    From what I can see, almost sure convergence of a sequence of random variables is very similar to pointwise convergence from real analysis.

    I am struggling to see why almost sure convergence is different from convergence in probability, and more so why almost sure convergence implies convergence in probability whereas the converse is not true. I have seen some counterexamples but still don't grasp the concept.

    Am I confusing myself by thinking that convergence in probability vs almost sure convergence is analogous to pointwise convergence vs uniform convergence of a sequence of functions?

    Sorry if this question is not clear; I'm kind of lost here.

    Any pointers would be much appreciated.
     
    Last edited: Sep 2, 2010
  3. Sep 3, 2010 #2
    In real analysis, convergence "almost everywhere" means that a property holds at every point except on a set of measure zero.

    In probability theory, "almost surely" plays the same role, with randomness taken into account: for example, over a long sequence of realizations of a random variable X drawn from a population P, the sample mean of X fails to converge to the population mean of P only with probability 0. This does not mean that such an event is impossible; it just happens so rarely that no non-zero probability can be assigned to its occurrence.
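
    If it helps to see this concretely, here is a minimal simulation sketch (assuming Python with NumPy, my own choice of tools) of the law-of-large-numbers picture above: along a single realization, the running sample mean settles toward the population mean, and the set of realizations for which this fails has probability zero.

    [code]
# Sketch: running sample mean of i.i.d. fair coin flips along one realization.
# The strong law of large numbers says this running mean converges to 0.5
# almost surely, i.e. the sample paths that fail to do so have probability 0.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
flips = rng.integers(0, 2, size=n)                 # one realization X_1, ..., X_n
running_mean = np.cumsum(flips) / np.arange(1, n + 1)

print(running_mean[[99, 999, 9999, 99999]])        # drifts toward 0.5 along this path
    [/code]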

    Here's a more formal treatment of the subject:

    http://www.stat.tamu.edu/~suhasini/teaching673/asymptotics.pdf
     
    Last edited: Sep 4, 2010
  4. Sep 4, 2010 #3
    Thanks
    I understand almost sure convergence; I still don't understand convergence in probability.
     
  5. Sep 5, 2010 #4
    Sure convergence (pointwise convergence at every point of the sample space), as opposed to almost sure convergence, is not used much in probability theory, although it can be defined:

    [tex]\{\omega\in\Omega \,|\, \lim_{n\rightarrow\infty} X_{n}(\omega)=X(\omega)\}=\Omega[/tex]

    Since it is defined in terms of the whole sample space [tex]\Omega[/tex], the issue of sets of probability zero does not arise.

    http://en.wikipedia.org/wiki/Convergence_of_random_variables
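
    For comparison, the two modes of convergence the original question contrasts are defined (as in the Wikipedia article above) by

    [tex]X_n \rightarrow X \text{ almost surely} \iff P\left(\{\omega\in\Omega \,|\, \lim_{n\rightarrow\infty} X_{n}(\omega)=X(\omega)\}\right)=1[/tex]

    [tex]X_n \rightarrow X \text{ in probability} \iff \lim_{n\rightarrow\infty} P\left(|X_{n}-X|>\epsilon\right)=0 \text{ for every } \epsilon>0[/tex]

    so almost sure convergence constrains whole sample paths, while convergence in probability only constrains the probability of a large deviation at each single index [tex]n[/tex].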
     
    Last edited: Sep 5, 2010
  6. Sep 6, 2010 #5
    I suppose you know the definition of convergence in probability (or, more generally, in measure) of a sequence [tex]\{f_n\}[/tex] of functions:

    [tex]\{f_n\}[/tex] converges in measure to [tex]f[/tex] if, for any [tex]\epsilon>0[/tex] and [tex]\delta>0[/tex], [tex]\mu\left\{|f_n-f|>\delta\right\}<\epsilon[/tex] for [tex]n[/tex] large enough.

    So attention is not focused on any single point; all that matters is the measure of the set of points where [tex]f_n[/tex] differs significantly from [tex]f[/tex], even if the sequence does not converge anywhere!

    For example, consider a sequence of rectangular-pulse-shaped functions whose supports have measure (probability) [tex]1/n[/tex], with each pulse located randomly inside [0,1]. The sequence does not converge at any single point (because the randomness implies that any given point lies in the support of infinitely many pulses), but the important fact is that the measure (probability) of the support on which the functions differ from the zero function goes to zero. So the sequence converges in probability to 0.

    Keep in mind: for convergence in probability, all that matters is the measure (probability) of the set of points where the sequence differs from the limit, not the points themselves.
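
    Here is a small numerical sketch (assuming Python with NumPy, my own choice of tools) of the pulse example above: the measure of the support of [tex]f_n[/tex] shrinks like [tex]1/n[/tex], so the sequence converges to 0 in measure, yet at any fixed point the values keep returning to 1, so there is no pointwise convergence.

    [code]
# Sketch of the random-pulse example: f_n is the indicator of a randomly
# placed interval of length 1/n inside [0, 1].
# Measure of {f_n != 0} is 1/n -> 0  (convergence in measure / probability),
# but at a fixed point x the values f_n(x) keep hitting 1 (no pointwise limit).
import numpy as np

rng = np.random.default_rng(1)
x = 0.5                                    # a fixed point of [0, 1]
N = 10_000
hit_indices = []
for n in range(1, N + 1):
    width = 1.0 / n
    left = rng.uniform(0.0, 1.0 - width)   # random location of the pulse
    if left <= x <= left + width:          # f_n(x) = 1, otherwise f_n(x) = 0
        hit_indices.append(n)

print("measure of support of f_N:", 1.0 / N)          # -> 0
print("some n with f_n(x) = 1:", hit_indices[-5:])    # 1's keep occurring
    [/code]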
     