Convergence of Mean in Probability - How to Prove it?

In summary: if the variances are finite, Chebyshev's inequality gives the result directly. The proof using characteristic functions, which does not require finite variance, can't be applied directly because we aren't in an iid situation: we will still have ##\phi_{X+Y} = \phi_{X}\phi_{Y}## for independent variables, but this won't equal ##\phi_X^2##. A version of the WLLN does hold without any knowledge of the variance (iid with a finite first moment), and it is proved by a truncation argument as in Feller.
  • #1
gajohnson

Homework Statement



Let [itex]X_1, X_2, \ldots[/itex] be a sequence of independent random variables with [itex]E(X_i)=\mu_i[/itex] and [itex](1/n)\sum_{i=1}^n \mu_i\rightarrow\mu[/itex].

Show that [itex]\overline{X}\rightarrow\mu[/itex] in probability.


Homework Equations



NA

The Attempt at a Solution



I feel as if this shouldn't be too hard, but I'm having some trouble getting started. Any point in the right direction would be helpful. Thanks!
 
  • #2
Or rather, the better question might be: is this as easy as it looks? Namely, is it as simple as showing that, because of the equality given, [itex]\overline{X} = \mu[/itex], so they must converge?
 
  • #3
Is any information given about the variances of ##X_i##? Are they known to be finite, for example?
 
  • #4
jbunniii said:
Is any information given about the variances of ##X_i##? Are they known to be finite, for example?

Well, this was a little confusing. The previous question also gives that
[itex]Var(X_i)=\sigma^2[/itex].

However, it also says that the sum of the variances converges to 0, which clearly means that [itex]\overline{X}\rightarrow\mu[/itex] as well (and this is the key to the first problem, which asks me to show the same thing, i.e. that [itex]\overline{X}\rightarrow\mu[/itex]).

The second question says simply to "Let [itex]X_i[/itex] be as in Problem 1" and then adds the additional information I included in the OP.

So I guess I assume that [itex]Var(X_i)=\sigma^2[/itex] still applies, but not the bit about the sum of the variances converging to 0 (because this would make the second problem as trivial as the first), and that instead we're supposed to use the new fact about the expected values converging to [itex]\mu[/itex].

What do you think?
 
  • #5
So they actually ask you to prove the WLLN in a problem??
 
  • #6
micromass said:
So they actually ask you to prove the WLLN in a problem??

I suppose so, and this is what I found confusing, because it seems quite trivial and I'm wondering if I am missing something here.

I apply Chebyshev's inequality with the given information and I'm done, right?
 
  • #7
gajohnson said:
I suppose so, and this is what I found confusing, because it seems quite trivial and I'm wondering if I am missing something here.

I apply Chebyshev's inequality with the given information and I'm done, right?

Chebyshev requires some knowledge of the variance though, so you need to know it's finite. In that case, the proof is indeed trivial.

However, some versions of the WLLN also hold without any knowledge of the variance. For example, I know that if [itex]X_n[/itex] are iid and the first moment is finite, then you can also prove that the means converge in probability.
Needless to say, the proof is going to be more difficult, since you can't apply Chebyshev.
 
  • #8
micromass said:
Chebyshev requires some knowledge of the variance though, so you need to know it's finite. In that case, the proof is indeed trivial.

However, some versions of the WLLN also hold without any knowledge of the variance. For example, I know that if [itex]X_n[/itex] are iid and the first moment is finite, then you can also prove that the means converge in probability.
Needless to say, the proof is going to be more difficult, since you can't apply Chebyshev.

So carrying over that [itex]Var(X_i)=\sigma^2[/itex] is of no use here? From that can't I just use Chebyshev?

I believe that this fact is supposed to carry over from the previous problem.
 
  • #9
gajohnson said:
So carrying over that [itex]Var(X_i)=\sigma^2[/itex] is of no use here? From that can't I just use Chebyshev?

I believe that this fact is supposed to carry over from the previous problem.

If you can indeed carry over [itex]Var(X_i)=\sigma^2[/itex], then you can use Chebyshev and your proof is valid.
Whether you can carry over [itex]Var(X_i)=\sigma^2[/itex] is something I don't know; you should ask your professor. But I think that you probably can.
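
For concreteness, here is a sketch of that Chebyshev argument, assuming the [itex]X_i[/itex] are independent with a common finite variance [itex]\sigma^2[/itex]. By independence,
[tex] Var(\overline{X}_n) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{\sigma^2}{n}, [/tex]
so Chebyshev gives, for any ##\epsilon > 0##,
[tex] P\left( \left| \overline{X}_n - E(\overline{X}_n) \right| > \epsilon \right) \leq \frac{Var(\overline{X}_n)}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2} \to 0. [/tex]
Since ##E(\overline{X}_n) = (1/n)\sum_{i=1}^n \mu_i \to \mu##, for large ##n## we have ##|E(\overline{X}_n) - \mu| < \epsilon/2##, and then ##P(|\overline{X}_n - \mu| > \epsilon) \leq P(|\overline{X}_n - E(\overline{X}_n)| > \epsilon/2) \to 0##.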
 
  • #10
micromass said:
If you can indeed carry over [itex]Var(X_i)=\sigma^2[/itex], then you can use Chebyshev and your proof is valid.
Whether you can carry over [itex]Var(X_i)=\sigma^2[/itex] is something I don't know; you should ask your professor. But I think that you probably can.

I'll go with that. I envision the proof being quite challenging if this is not possible (and probably beyond the scope of the class). Thanks for the help, everyone.
 
  • #11
micromass said:
However, some versions of the WLLN also hold without any knowledge of the variance. For example, I know that if [itex]X_n[/itex] are iid and the first moment is finite, then you can also prove that the means converge in probability.
Needless to say, the proof is going to be more difficult, since you can't apply Chebyshev.
And the proof using characteristic functions, which does not require finite variance, can't be applied directly because we aren't in an iid situation. We will still have ##\phi_{X+Y} = \phi_{X}\phi_{Y}##, but this won't equal ##\phi_X^2##.
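
For comparison, a sketch of how that characteristic-function argument goes in the iid case, assuming the ##X_i## have common characteristic function ##\phi## and finite mean ##\mu##:
[tex] \phi_{\overline{X}_n}(t) = \left[ \phi\!\left( \frac{t}{n} \right) \right]^n = \left[ 1 + \frac{i\mu t}{n} + o\!\left( \frac{1}{n} \right) \right]^n \to e^{i\mu t}, [/tex]
which is the characteristic function of the constant ##\mu##, so ##\overline{X}_n \to \mu## in distribution, and convergence in distribution to a constant implies convergence in probability. The first equality ##\phi_{\overline{X}_n}(t) = [\phi(t/n)]^n## is exactly what fails when the ##X_i## are not identically distributed.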
 
  • #12
gajohnson said:
I'll go with that. I envision the proof being quite challenging if this is not possible (and probably beyond the scope of the class). Thanks for the help, everyone.

The proof without the variance condition is not very difficult to understand. But it wouldn't be suitable as an exercise.
 
  • #13
micromass said:
The proof without the variance condition is not very difficult to understand. But it wouldn't be suitable as an exercise.

Got it. If you don't mind indulging me, what's the gist of it?
 
  • #14
gajohnson said:
Got it. If you don't mind indulging me, what's the gist of it?

The wonderful book W.Feller, "An Introduction to Probability Theory and its Applications", Vol. I, (Wiley) has a great chapter (Chapter X) on this and other limit properties. The way he does it is to look at a method of truncation:
[tex] U_k = X_k, \text{ and }V_k = 0 \text{ if } |X_k| \leq \delta n\\
U_k = 0, \text{ and } V_k = X_k \text{ if } |X_k| > \delta n,[/tex]
where ##\delta > 0## is a parameter to be determined later, and ##k = 1,2,\ldots, n##. This implies ##X_k = U_k + V_k##. He then proves the Law of Large Numbers by showing that for any given ##\epsilon > 0## the constant ##\delta > 0## can be chosen so that as ##n \to \infty##,
[tex] P\{ |U_1 + \cdots + U_n| > \frac{1}{2} \epsilon n \} \to 0\\
\text{ and }\\
P\{ |V_1 + \cdots + V_n | > \frac{1}{2} \epsilon n \} \to 0.[/tex]
The Law of Large Numbers follows from these.

The point of the truncation is to permit some statements to be made and proved even when variance is infinite---it works as long as ##E X## is finite. For proof of the above crucial results, consult Feller.
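
Roughly, and only as a sketch for the iid case (the details are in Feller): the ##V##-part vanishes because, by Markov's inequality applied to the tail,
[tex] P\{ V_k \neq 0 \text{ for some } k \leq n \} \leq n\, P\{ |X| > \delta n \} \leq \frac{1}{\delta}\, E\left[ |X| \,\mathbf{1}_{\{|X| > \delta n\}} \right] \to 0 [/tex]
whenever ##E|X| < \infty##, while each truncated variable satisfies ##Var(U_k) \leq E[U_k^2] \leq \delta n\, E|X|##, so Chebyshev applied to the centered sum gives
[tex] P\left\{ \left| \sum_{k=1}^n (U_k - EU_k) \right| > \tfrac{1}{2}\epsilon n \right\} \leq \frac{n \cdot \delta n\, E|X|}{(\epsilon n / 2)^2} = \frac{4\delta\, E|X|}{\epsilon^2}, [/tex]
which can be made arbitrarily small by choosing ##\delta## small. The truncation trades the missing variance for the finite first moment.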
 
  • #15
Ray Vickson said:
The wonderful book W.Feller, "An Introduction to Probability Theory and its Applications", Vol. I, (Wiley) has a great chapter (Chapter X) on this and other limit properties. The way he does it is to look at a method of truncation:
[tex] U_k = X_k, \text{ and }V_k = 0 \text{ if } |X_k| \leq \delta n\\
U_k = 0, \text{ and } V_k = X_k \text{ if } |X_k| > \delta n,[/tex]
where ##\delta > 0## is a parameter to be determined later, and ##k = 1,2,\ldots, n##. This implies ##X_k = U_k + V_k##. He then proves the Law of Large Numbers by showing that for any given ##\epsilon > 0## the constant ##\delta > 0## can be chosen so that as ##n \to \infty##,
[tex] P\{ |U_1 + \cdots + U_n| > \frac{1}{2} \epsilon n \} \to 0\\
\text{ and }\\
P\{ |V_1 + \cdots + V_n | > \frac{1}{2} \epsilon n \} \to 0.[/tex]
The Law of Large Numbers follows from these.

The point of the truncation is to permit some statements to be made and proved even when variance is infinite---it works as long as ##E X## is finite. For proof of the above crucial results, consult Feller.

This is good stuff; I'll have to take a look at the text. Thanks a lot!
 

1. What is the definition of convergence of mean?

Convergence of mean refers to the phenomenon where the average of a sequence of random variables approaches a constant value as the number of variables in the sequence increases; that is, the sample mean [itex]\overline{X}_n = (1/n)\sum_{i=1}^n X_i[/itex] settles down to a fixed number [itex]\mu[/itex] as [itex]n[/itex] grows.

2. How is convergence of mean different from convergence in probability?

Convergence of the mean in probability is a statement about one particular sequence, the sample means, while convergence in probability is the general mode of convergence it refers to: a sequence of random variables converges in probability to a limit when the probability of deviating from that limit by any fixed amount shrinks to zero as the number of variables increases.
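
For reference, the standard definition: ##Y_n \to Y## in probability means that for every ##\epsilon > 0##,
[tex] P\left( |Y_n - Y| > \epsilon \right) \to 0 \quad \text{as } n \to \infty. [/tex]
The weak law of large numbers is the special case ##Y_n = \overline{X}_n## and ##Y = \mu##.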

3. What are the conditions for convergence of mean to occur?

The classical sufficient condition is that the random variables are independent with finite means. If they are also identically distributed, a finite first moment is enough; if they are not identically distributed, as in this thread, finite (for example, uniformly bounded) variances together with [itex](1/n)\sum\mu_i\rightarrow\mu[/itex] suffice via Chebyshev's inequality. Identical distributions and finite variances are convenient assumptions, not necessary ones.

4. Can convergence of mean be proven mathematically?

Yes. The weak and strong laws of large numbers give precise conditions under which the sample mean converges, and their proofs use tools such as Chebyshev's inequality, characteristic functions, or truncation arguments like the one sketched above.
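
For reference, one standard sufficient form matching the setup in this thread: if ##X_1, X_2, \ldots## are independent with ##E(X_i) = \mu_i##, ##Var(X_i) \leq \sigma^2 < \infty##, and ##(1/n)\sum_{i=1}^n \mu_i \to \mu##, then for every ##\epsilon > 0##,
[tex] P\left( \left| \frac{1}{n}\sum_{i=1}^n X_i - \mu \right| > \epsilon \right) \to 0 \quad \text{as } n \to \infty. [/tex]
The iid version with only ##E|X_1| < \infty## (no variance assumption) is the one proved by truncation.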

5. How is convergence of mean used in statistics and data analysis?

Convergence of mean is an important concept in statistics and data analysis as it allows us to make inferences about a population based on a sample. By using the law of large numbers, we can estimate the true mean of a population by taking the average of a large enough sample. This is commonly used in hypothesis testing and confidence interval calculations.
