
Convergence of Random Variables

  1. Apr 24, 2007 #1
    Hi,

    What is meant by "convergence of random variables"? Specifically, this statement confuses me:

    The sequence of random variables [tex] X_1, X_2, \ldots [/tex] is said to converge in probability to the constant c if for any [tex] \epsilon > 0[/tex]:
    [tex] \lim_{n \rightarrow \infty} P(\vert X_n - c \vert < \epsilon) = 1 [/tex]
     
  3. Apr 24, 2007 #2

    matt grime


    it means exactly what it says: for any e>0 the sequence of numbers

    P(|X_n - c|<e)

    converges to 1. What is confusing about it?

    So, if X_n were normal with mean c and standard deviation 1/n, that would satisfy the condition.

    Just work through a few examples, try to figure out what is going on.

    As to why this is important, I can't help you, but there ought to be no mystery as to what the definition is: the r.v.s are getting 'closer' to being the constant r.v. X with P(X=c)=1.
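
    To make this concrete, here is a quick numerical sketch in Python (numpy assumed; the choices c = 2, e = 0.1, and the sample sizes are arbitrary) that estimates P(|X_n - c| < e) by simulation for the normal example above:

    [code]
    import numpy as np

    rng = np.random.default_rng(0)
    c, eps, trials = 2.0, 0.1, 100_000  # arbitrary illustrative choices

    for n in (1, 5, 25, 125):
        # X_n ~ Normal(mean c, standard deviation 1/n), as in the example above
        x = rng.normal(loc=c, scale=1.0 / n, size=trials)
        # Monte Carlo estimate of P(|X_n - c| < eps); it climbs toward 1 with n
        print(n, np.mean(np.abs(x - c) < eps))
    [/code]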
     
  4. Apr 24, 2007 #3
    Sorry, I realize now that I didn't make myself very clear. What I don't get is: what is this sequence of random variables, and how is it approaching something? For example, consider the case of rolling, say, a 3 with a fair die. What are [tex]X_1[/tex] and [tex]X_2[/tex] in this case? I would think there is only one random variable here, so what is this sequence of random variables? And what is it approaching, and how?
     
  5. Apr 24, 2007 #4

    matt grime


    X_1 and X_2 are nothing (a priori) to do with rolling a 3 with a fair die.

    They're just random variables. That's all. They don't represent any real life experiment.

    X_1 is an r.v., X_2 is an r.v., and so on.

    If they satisfy this condition they are said to converge in probability to that constant r.v.

    It's just a definition. I gave you an example: X_n is a normal r.v. with mean c and standard deviation 1/n.

    If the word sequence bothers you, just think of a family of r.v.s indexed by the integers.

    Given such a family, an e>0, and a c, I can write down a sequence of numbers

    x_n=P(|X_n-c|<e)

    If this sequence of numbers x_n tends to 1, we are in the situation above: as n increases, X_n gets closer to being 'like a constant r.v.'

    If you want examples with dice: X_n, the score on the nth roll, isn't going to converge to anything. But if I set X_n to be the average score on n dice, then it will converge in probability to the constant r.v. with c = 3.5, by the law of large numbers.
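
    To see this numerically, here is a rough Python sketch (numpy assumed; e and the trial count are arbitrary) that estimates P(|X_n - 3.5| < e) for the dice-average example:

    [code]
    import numpy as np

    rng = np.random.default_rng(0)
    eps, trials = 0.1, 10_000  # arbitrary illustrative choices

    for n in (10, 100, 1000):
        # X_n = average score of n fair dice, sampled `trials` times
        rolls = rng.integers(1, 7, size=(trials, n))  # faces 1..6
        means = rolls.mean(axis=1)
        # Monte Carlo estimate of P(|X_n - 3.5| < eps); it climbs toward 1 with n
        print(n, np.mean(np.abs(means - 3.5) < eps))
    [/code]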
     
    Last edited: Apr 24, 2007
  6. Apr 24, 2007 #5

    EnumaElish


    Rightly confused you are: a random variable is neither. It is not a variable; it is a (probability) function. Usually a probability function is a deterministic function with known parameters and a known "shape." A sequence of random variables is a sequence of probability functions F1, ..., Fn, with P(|Xn - c| < ε) = Fn(ε + c) if x > c, and P(|Xn - c| < ε) = 1 - Fn(ε - c) if x < c (where x is the "generic" argument of Fn, evaluated "near" c).

    "A random variable converging to a constant" is, I think, a probability distribution converging to a degenerate distribution "in probability" (pointwise) although not necessarily "in distribution" (uniformly).
     
    Last edited: Apr 24, 2007
  7. Apr 29, 2007 #6
    This is not right. A real-valued random variable is a measurable function from the probability space to the reals. If the random variable is [tex]X[/tex], then its distribution (a.k.a. cumulative distribution function) is [tex]F(x) = P(X \le x)[/tex]. The objects [tex]X[/tex] and [tex]F[/tex] are different.

    Saying [tex]X_n \to X[/tex] in probability means that

    [tex]P(|X_n - X| > \varepsilon) \to 0[/tex]

    as [tex]n\to\infty[/tex] for each fixed [tex]\varepsilon>0[/tex]. Saying [tex]X_n\to X[/tex] in distribution means that

    [tex]F_n(x)\to F(x)[/tex]

    as [tex]n\to\infty[/tex] for all [tex]x[/tex] such that [tex]F[/tex] is continuous at [tex]x[/tex].

    In general, convergence in probability implies convergence in distribution, but not conversely. However, if [tex]X[/tex] is a constant (as in the case being discussed), then the two concepts are equivalent.
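
    For the constant case, here is a quick sketch of why convergence in distribution implies convergence in probability: the limiting distribution function [tex]F[/tex] jumps only at [tex]c[/tex], so for any [tex]\varepsilon > 0[/tex] both [tex]c - \varepsilon[/tex] and [tex]c + \varepsilon[/tex] are continuity points of [tex]F[/tex], and

    [tex]P(|X_n - c| > \varepsilon) \le F_n(c - \varepsilon) + 1 - F_n(c + \varepsilon) \to F(c - \varepsilon) + 1 - F(c + \varepsilon) = 0 + 1 - 1 = 0.[/tex]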
     
  8. Apr 29, 2007 #7

    EnumaElish


    Thanks for your input, Jason.

    Isn't X_n ---> X in probability the same thing as F_n(x) ---> F(x) pointwise? If so, then I don't think there is a meaningful difference, in terms of convergence, between X_n ---> X and F_n(x) ---> F(x). For all intents and purposes they are one and the same.
     
    Last edited: Apr 29, 2007
  9. Apr 29, 2007 #8
    These are not the same thing. If [tex]F_n(x) \to F(x)[/tex] pointwise, then [tex]F_n(x) \to F(x)[/tex] for all [tex]x[/tex] such that [tex]F[/tex] is continuous at [tex]x[/tex]. The latter statement is true iff [tex]X_n \to X[/tex] in distribution. Convergence in distribution does not imply convergence in probability.

    Here are some examples. Let [tex]X_n = 1/n[/tex] and [tex]X = 0[/tex]. Then [tex]X_n \to X[/tex] in probability and in distribution. But [tex]F_n(0) = P(X_n \le 0) = 0[/tex] for all [tex]n[/tex], whereas [tex]F(0) = P(X \le 0) = 1[/tex]. So [tex]F_n[/tex] does not converge to [tex]F[/tex] pointwise.

    Let [tex]X_1,X_2,\ldots[/tex] be iid with [tex]X_1\sim N(0,1)[/tex]. Let [tex]S_n=X_1+\cdots+X_n[/tex] and [tex]Y_n=S_n/\sqrt{n}[/tex]. Let [tex]Y\sim N(0,1)[/tex]. Then [tex]Y_n\to Y[/tex] in distribution. (In fact, [tex]Y_n=Y[/tex] in distribution for all [tex]n[/tex].) But, for [tex]m<n[/tex],

    [tex]Y_n - Y_m = \frac{S_n - S_m}{\sqrt{n}} - \left(\frac{1}{\sqrt{m}} - \frac{1}{\sqrt{n}}\right)S_m.[/tex]

    If my algebra is correct, this means [tex]Y_n - Y_m[/tex] is normal with mean zero and variance [tex]2(1-\sqrt{m/n})[/tex]. It follows that [tex]\{Y_n\}[/tex] is not Cauchy in probability, so it cannot converge in probability.
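
    (Verifying the algebra: [tex]S_n - S_m[/tex] is independent of [tex]S_m[/tex], so the two terms on the right are independent and the variances add:

    [tex]\operatorname{Var}(Y_n - Y_m) = \frac{n - m}{n} + m\left(\frac{1}{\sqrt{m}} - \frac{1}{\sqrt{n}}\right)^2 = \left(1 - \frac{m}{n}\right) + \left(1 - 2\sqrt{\frac{m}{n}} + \frac{m}{n}\right) = 2\left(1 - \sqrt{\frac{m}{n}}\right).[/tex]

    In particular, for fixed [tex]m[/tex] this tends to 2 as [tex]n \to \infty[/tex], so [tex]Y_n - Y_m[/tex] does not become small.)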
     
  10. Apr 29, 2007 #9

    EnumaElish


    I was going with "Convergence in probability is, indeed, the (pointwise) convergence of probabilities."

    Are you saying that "(pointwise) convergence of probability" is different from pointwise convergence of F_n(x) in the real analysis sense? I guess that's what you are saying.

    Are you also saying that convergence in distribution is different from F_n ---> F in real analysis?

    If convergence in dist. is the real-analytic F_n ---> F, and if "(pointwise) convergence of probability" is the pointwise convergence of F_n(x) to F(x), then the former should have implied the latter. Since it doesn't (except when converging to a constant; see below), either or both of these are misconceptions on my part. Thanks for pointing this out.

    Also, "If Xn converges in distribution to a constant c, then Xn converges in probability to c." Am I right to state "at least when converging to a constant, convergence in distribution implies convergence in probability"?

    I do not understand how "Cauchy" enters into this. Isn't Cauchy the ratio of two normals?
     
    Last edited: Apr 29, 2007
  11. Apr 29, 2007 #10
    I do not know what the author(s) of that Wikipedia article mean by "(pointwise) convergence of probability". But I am saying that convergence in probability is different from pointwise convergence of F_n(x).

    Convergence in distribution is equivalent to F_n(x) -> F(x) for all x such that F is continuous at x. This is, in fact, weaker than pointwise convergence.

    Yes, this is correct.

    I meant "Cauchy" in the sense of converging sequences, not in the sense of the Cauchy distribution. For the real analysis analogue, a sequence {a_n} is convergent iff it is Cauchy, i.e. for all e > 0, there exists N > 0 such that |a_n - a_m| < e whenever n > N and m > M.

    The probabilistic version is this: Let [tex]\{Y_n\}[/tex] be a sequence of random variables. There exists a random variable [tex]Y[/tex] such that [tex]Y_n\to Y[/tex] in probability if and only if [tex]\{Y_n\}[/tex] is Cauchy in probability, i.e. for all [tex]\varepsilon>0[/tex] and [tex]\delta>0[/tex], there exists [tex]N>0[/tex] such that

    [tex]P(|Y_n - Y_m| > \varepsilon) < \delta[/tex]

    whenever [tex]n,m>N[/tex].
     
  12. Apr 30, 2007 #11

    EnumaElish


    In light of the comments by Jason Swanson, I am going to update my first post under this thread:

    A random variable is not a variable; it is a function from the probability space (W) to the set of real numbers (R), and its randomness is completely described by a deterministic probability function with known parameters.

    I. Convergence of probability functions (real analysis):
    The probability function F: R ---> [0,1] associated with a random variable can be analyzed using the ordinary tools of real analysis. Thus, it can be stated that a sequence of probability functions converges to a limiting probability function, either pointwise (weaker) or uniformly (stronger).

    I.A. Pointwise convergence (real analysis):
    The sequence {Fn} converges pointwise to F iff for every x and every ε > 0, there is a natural number N (which may depend on both x and ε) such that for all n ≥ N, |Fn(x) − F(x)| < ε.

    I.B. Uniform convergence (real analysis):
    The sequence {Fn} converges uniformly to F iff for every ε > 0, there is a natural number N (independent of x) such that for all x and all n ≥ N, |Fn(x) − F(x)| < ε.

    I.C. Implication (real analysis):
    Uniform convergence implies pointwise convergence.
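
    To illustrate the gap between I.A and I.B, here is a quick Python sketch (numpy and scipy assumed; the normal family is an arbitrary illustrative choice). Take Fn to be the probability function of a normal with mean 0 and standard deviation 1/n, and F that of the constant r.v. X = 0. Then Fn(x) ---> F(x) at every x ≠ 0, but Fn(0) = 1/2 for all n, so the convergence is neither pointwise everywhere nor uniform:

    [code]
    import numpy as np
    from scipy.stats import norm

    xs = np.linspace(-2.0, 2.0, 100_001)  # grid that includes x = 0 exactly
    F = (xs >= 0).astype(float)           # probability function of the constant 0

    for n in (1, 10, 100, 1000):
        # Fn = probability function of Normal(0, (1/n)^2)
        Fn = norm.cdf(xs, loc=0.0, scale=1.0 / n)
        # sup-distance stays near 1/2 (Fn(0) = 1/2 while F(0) = 1)
        print(n, np.max(np.abs(Fn - F)))
    [/code]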

    II. Convergence of random variables (probability analysis):

    The convergence of random variables is a related but different concept.

    II.A. Convergence in distribution (probability analysis):
    The weakest concept of convergence of a r.v. is convergence in distribution. A sequence of r.v.'s {Xn} described by respective prob. functions {Fn} is said to converge in distribution to a r.v. X described by a prob. function F iff {Fn(x)} ---> F(x) (pointwise, in the real analytic sense) for all x at which F is continuous.

    II.B. Convergence in probability (probability analysis):
    Convergence in probability is stronger. {Xn} converges in probability to X iff P(|Xn - X| > ε) ---> 0 for every ε > 0. This is the concept of convergence used in the weak law of large numbers.

    II.C. Almost sure convergence (probability analysis):
    Still stronger is almost sure convergence. {Xn} converges almost surely to X iff P(Xn ---> X) = 1.

    II.D. Sure convergence or "convergence everywhere" (probability analysis):
    The strongest concept of probabilistic convergence is sure convergence (convergence everywhere). {Xn} converges surely (or converges everywhere) to X iff {Xn(w)} ---> X(w) (pointwise, in the real analytic sense) for all w in W.

    II.E. Implications (probability analysis):
    1. Sure convergence implies almost sure convergence.
    2. Almost sure convergence implies convergence in probability.
    3. Convergence in probability implies convergence in distribution.

    II.F. Concepts yet to be invented:
    1. Uniform sure convergence: {Xn} converges "uniformly surely" to X iff {Xn} ---> X uniformly in the real analytic sense.
    2. Uniform convergence in distribution: A sequence of r.v.'s {Xn} described by respective prob. functions {Fn} is said to "uniformly converge in distribution" to a r.v. X described by a prob. function F iff {Fn} ---> F uniformly in the real analytic sense.
     
    Last edited: Apr 30, 2007
  13. Oct 23, 2010 #12
    Here is an example of a sequence of random variables (which is what the original question asked about).

    Consider a telephone switchboard that can handle up to 100 simultaneous calls. Define a random variable X as the number of active calls at a given instant of time. Now this random variable has a probability mass function that describes the probability of the following:
    0 active calls,
    1 active call,
    2 active calls and so on up to
    100 active calls.

    Now consider a related random variable X1, the number of active calls at 10:00:00 am; another random variable X2, the number of active calls at 10:00:01 am; and so on, up to X900, the number of active calls at 10:14:59 am.

    X1 to X900 is a sequence of random variables. They all have the same probability mass function. And we could make the simplifying assumption that they are all independent. In that case, this sequence of random variables is independent, identically distributed (iid).
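
    A quick Python sketch of this setup (numpy assumed; the Binomial(100, 0.3) model for the number of active calls is invented purely for illustration):

    [code]
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical model: each one-second snapshot X_1, ..., X_900 is an iid
    # Binomial(100, 0.3) count of active calls (the 0.3 is made up)
    calls = rng.binomial(n=100, p=0.3, size=900)

    # Running average of the sequence; by the weak law of large numbers it
    # settles near the common mean 100 * 0.3 = 30 as more samples accumulate
    running_avg = np.cumsum(calls) / np.arange(1, 901)
    print(running_avg[[9, 99, 899]])  # after 10, 100, and 900 samples
    [/code]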
     