# Homework Help: Convergence of Mean Question

1. Feb 22, 2013

### gajohnson

1. The problem statement, all variables and given/known data

Let $X_1, X_2, \ldots$ be a sequence of independent random variables with $E(X_i)=\mu_i$ and $\frac{1}{n}\sum_{i=1}^{n}\mu_i\rightarrow\mu$.

Show that $\overline{X}\rightarrow\mu$ in probability.

2. Relevant equations

NA

3. The attempt at a solution

I feel as if this shouldn't be too hard, but I'm having some trouble getting started. Any point in the right direction would be helpful. Thanks!

2. Feb 22, 2013

### gajohnson

Or rather, the better question might be: is this as easy as it looks? Namely, is it as simple as showing that, because of the condition given, $\overline{X}=\mu$ and so they must converge?

3. Feb 22, 2013

### jbunniii

Is any information given about the variances of $X_i$? Are they known to be finite, for example?

4. Feb 22, 2013

### gajohnson

Well, this was a little confusing. The previous question also gives that
$Var(X_i)=\sigma^2$.

However, it also says that the sum of the variances converges to 0, which clearly means that $\overline{X}\rightarrow\mu$ as well (and this is the key to the first problem, which asks me to show the same thing, i.e. that $\overline{X}\rightarrow\mu$).

The second question says simply to "Let $X_i$ be as in Problem 1" and then adds the additional information I included in the OP.

So I guess I assume that $Var(X_i)=\sigma^2$ still applies, but not the bit about the sum of the variances converging to 0 (because this would make the second problem as trivial as the first), and that instead we're supposed to use the new fact about the expected values converging to $\mu$.

What do you think?

Last edited: Feb 22, 2013
5. Feb 22, 2013

### micromass

So they actually ask you to prove the WLLN in a problem??

6. Feb 22, 2013

### gajohnson

I suppose so, and this is what I found confusing, because it seems quite trivial and I'm wondering if I'm missing something here.

I apply Chebyshev's inequality with the given information and I'm done, right?

7. Feb 22, 2013

### micromass

Chebyshev requires some knowledge of the variance, though, so you need to know it's finite. In that case, the proof is indeed trivial.

However, some versions of the WLLN also hold without any knowledge of the variance. For example, I know that if the $X_n$ are i.i.d. and the first moment is finite, then you can also prove that the means converge in probability.
Needless to say, the proof is going to be more difficult, since you can't apply Chebyshev.

8. Feb 22, 2013

### gajohnson

So carrying over that $Var(X_i)=\sigma^2$ is of no use here? From that can't I just use Chebyshev?

I believe that this fact is supposed to carry over from the previous problem.

9. Feb 22, 2013

### micromass

If you can indeed carry over $Var(X_i)=\sigma^2$, then you can use Chebyshev and your proof is valid.
Whether you can carry over $Var(X_i)=\sigma^2$ is something I don't know; you should ask your professor. But I think that you probably can.
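
For reference, a minimal sketch of that Chebyshev argument (assuming independence, $Var(X_i)=\sigma^2$ for every $i$, the given $\frac{1}{n}\sum\mu_i\rightarrow\mu$, and writing $\overline{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$):
$$Var(\overline{X}_n) = \frac{1}{n^2}\sum_{i=1}^{n} Var(X_i) = \frac{\sigma^2}{n},$$
so Chebyshev's inequality gives, for any $\epsilon > 0$,
$$P\left(\left|\overline{X}_n - \frac{1}{n}\sum_{i=1}^{n}\mu_i\right| \geq \frac{\epsilon}{2}\right) \leq \frac{4\sigma^2}{n\epsilon^2} \rightarrow 0.$$
Since $\frac{1}{n}\sum\mu_i \rightarrow \mu$, eventually $\left|\frac{1}{n}\sum\mu_i - \mu\right| < \frac{\epsilon}{2}$, and then $\{|\overline{X}_n - \mu| \geq \epsilon\} \subseteq \{|\overline{X}_n - \frac{1}{n}\sum\mu_i| \geq \frac{\epsilon}{2}\}$, so $P(|\overline{X}_n - \mu| \geq \epsilon) \rightarrow 0$; that is, $\overline{X}_n \rightarrow \mu$ in probability.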

10. Feb 22, 2013

### gajohnson

I'll go with that. I envision the proof being quite challenging if this is not possible (and probably beyond the scope of the class). Thanks for the help, everyone.

11. Feb 22, 2013

### jbunniii

And the proof using characteristic functions, which does not require finite variance, can't be applied directly because we aren't in an i.i.d. situation. We will still have $\phi_{X+Y} = \phi_{X}\phi_{Y}$ for independent $X$ and $Y$, but this won't equal $\phi_X^2$.
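
For comparison, a sketch of the characteristic-function argument in the i.i.d. case with finite first moment (the standard situation, not the present problem): writing $\phi_X$ for the common characteristic function,
$$\phi_{\overline{X}_n}(t) = \left[\phi_X\left(\frac{t}{n}\right)\right]^n = \left[1 + \frac{i\mu t}{n} + o\left(\frac{1}{n}\right)\right]^n \rightarrow e^{i\mu t},$$
which is the characteristic function of the constant $\mu$; convergence in distribution to a constant then implies convergence in probability. Without identical distributions the $n$ factors are all different, which is exactly the obstruction above.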

12. Feb 22, 2013

### micromass

The proof without the variance condition is not very difficult to understand. But it wouldn't be suitable as an exercise.

13. Feb 22, 2013

### gajohnson

Got it. If you don't mind indulging me, what's the gist of it?

14. Feb 22, 2013

### Ray Vickson

The wonderful book by W. Feller, "An Introduction to Probability Theory and Its Applications", Vol. I (Wiley), has a great chapter (Chapter X) on this and other limit properties. The way he does it is to look at a method of truncation:
$$U_k = X_k, \text{ and }V_k = 0 \text{ if } |X_k| \leq \delta n\\ U_k = 0, \text{ and } V_k = X_k \text{ if } |X_k| > \delta n,$$
where $\delta > 0$ is a parameter to be determined later, and $k = 1,2,\ldots, n$. This implies $X_k = U_k + V_k$. He then proves the Law of Large Numbers by showing that for any given $\epsilon > 0$ the constant $\delta > 0$ can be chosen so that as $n \to \infty$,
$$P\{ |U_1 + \cdots + U_n| > \frac{1}{2} \epsilon n \} \to 0\\ \text{ and }\\ P\{ |V_1 + \cdots + V_n | > \frac{1}{2} \epsilon n \} \to 0.$$
The Law of Large Numbers follows from these.

The point of the truncation is to permit some statements to be made and proved even when the variance is infinite; it works as long as $E X$ is finite. For proofs of the above crucial results, consult Feller.
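
For reference, the step that combines those two bounds (a sketch, assuming the $X_k$ have been centered so that the goal is $P\{|X_1+\cdots+X_n| > \epsilon n\} \rightarrow 0$): since $X_k = U_k + V_k$, the triangle inequality gives
$$\{|X_1+\cdots+X_n| > \epsilon n\} \subseteq \{|U_1+\cdots+U_n| > \tfrac{1}{2}\epsilon n\} \cup \{|V_1+\cdots+V_n| > \tfrac{1}{2}\epsilon n\},$$
so the union bound together with the two limits above forces $P\{|X_1+\cdots+X_n| > \epsilon n\} \rightarrow 0$, i.e. $\overline{X}_n \rightarrow 0$ in probability.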

15. Feb 24, 2013

### gajohnson

This is good stuff; I'll have to take a look at the text. Thanks a lot!