Asymptotically unbiased & consistent estimators

  • Context: Graduate
  • Thread starter: kingwinner
  • Tags: Estimators
SUMMARY

The discussion centers on the theorem stating that if $\hat{\theta}_n$ is an unbiased estimator for $\theta$ and $\operatorname{Var}(\hat{\theta}_n) \to 0$ as $n \to \infty$, then it is a consistent estimator of $\theta$. Participants explore the possibility of replacing "unbiased" with "asymptotically unbiased" in this theorem, asserting that the result still holds. The proof involves Chebyshev's Inequality and the Squeeze Theorem, with a focus on demonstrating that asymptotic unbiasedness coupled with variance approaching zero implies consistency. Participants express challenges in modifying the original proof to accommodate asymptotic unbiasedness.

PREREQUISITES
  • Understanding of Chebyshev's Inequality
  • Familiarity with the Squeeze Theorem
  • Knowledge of statistical estimators and their properties
  • Concept of asymptotic behavior in statistics
NEXT STEPS
  • Study the proof of Chebyshev's Inequality in detail
  • Explore the Squeeze Theorem and its applications in statistics
  • Research the properties of asymptotically unbiased estimators
  • Learn about consistency in statistical estimators and related proofs
USEFUL FOR

Statisticians, data scientists, and graduate students in statistics who are looking to deepen their understanding of estimator properties and proofs related to consistency and unbiasedness.

kingwinner
Theorem: If $\hat{\theta}_n$ is an unbiased estimator for $\theta$ and $\operatorname{Var}(\hat{\theta}_n) \rightarrow 0$ as $n \rightarrow \infty$, then it is a consistent estimator of $\theta$.

The textbook proved this theorem using Chebyshev's Inequality and the Squeeze Theorem, and I understand the proof.
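(For reference, writing $\theta_0$ for the true parameter value as in the replies below, consistency means $P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\rightarrow 0$ as $n\rightarrow\infty$ for every $\varepsilon>0$, and asymptotic unbiasedness means $E(\hat{\theta}_n)\rightarrow\theta_0$. The textbook's unbiased-case proof presumably runs along these lines: unbiasedness gives $E(\hat{\theta}_n)=\theta_0$, so Chebyshev's inequality yields

$$0\leq P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)=P(|\hat{\theta}_n-E(\hat{\theta}_n)|\geq \varepsilon)\leq \frac{\operatorname{Var}(\hat{\theta}_n)}{\varepsilon^2}\rightarrow 0,$$

and the Squeeze Theorem forces $P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\rightarrow 0$, i.e. consistency.)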
But then there is a remark that we can replace "unbiased" by "asymptotically unbiased" in the above theorem and the result will still hold, though the textbook provided no proof. This is where I'm having a lot of trouble: I really don't see how to prove this (i.e. that asymptotic unbiasedness together with $\operatorname{Var}(\hat{\theta}_n)\rightarrow 0$ implies consistency). I tried to modify the original proof, but I can't get it to work under the assumption of asymptotic unbiasedness.

I'm frustrated and I hope someone can explain how to prove it. Thank you!
 
micromass
Hi kingwinner! :smile:

What about the following adjustment:

$$P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\leq P(|\hat{\theta}_n-E(\hat{\theta}_n)|+|E(\hat{\theta}_n)-\theta_0|\geq \varepsilon)\leq \frac{\operatorname{Var}(\hat{\theta}_n)}{(\varepsilon-|E(\hat{\theta}_n)-\theta_0|)^2}\rightarrow 0$$
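Here the first inequality holds because the triangle inequality gives $|\hat{\theta}_n-\theta_0|\leq |\hat{\theta}_n-E(\hat{\theta}_n)|+|E(\hat{\theta}_n)-\theta_0|$, so the left-hand event is contained in the middle one; the second is Chebyshev's inequality applied to $\hat{\theta}_n-E(\hat{\theta}_n)$ after rewriting the middle event as

$$\{|\hat{\theta}_n-E(\hat{\theta}_n)|+|E(\hat{\theta}_n)-\theta_0|\geq \varepsilon\}=\{|\hat{\theta}_n-E(\hat{\theta}_n)|\geq \varepsilon-|E(\hat{\theta}_n)-\theta_0|\}.$$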
 
micromass said:
Hi kingwinner! :smile:

What about the following adjustment:

$$P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\leq P(|\hat{\theta}_n-E(\hat{\theta}_n)|+|E(\hat{\theta}_n)-\theta_0|\geq \varepsilon)\leq \frac{\operatorname{Var}(\hat{\theta}_n)}{(\varepsilon-|E(\hat{\theta}_n)-\theta_0|)^2}\rightarrow 0$$

Thanks for the help, but one of the assumptions of Chebyshev's inequality requires $\varepsilon-|E(\hat{\theta}_n)-\theta_0|>0$, which is not necessarily true here?
 
kingwinner said:
Thanks for the help, but one of the assumptions of Chebyshev's inequality requires $\varepsilon-|E(\hat{\theta}_n)-\theta_0|>0$, which is not necessarily true here?

It's not necessarily true, but it is true for large $n$. We know that

$$E(\hat{\theta}_n)\rightarrow \theta_0,$$

so from a certain $n_0$ onwards, we know that

$$|E(\hat{\theta}_n)-\theta_0|<\varepsilon,$$

and hence, from that $n_0$ onwards,

$$\varepsilon-|E(\hat{\theta}_n)-\theta_0|>0.$$
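Assembled into one argument (a sketch based on the posts above; the only tweak is choosing $n_0$ so that $|E(\hat{\theta}_n)-\theta_0|<\varepsilon/2$, which gives a bound whose denominator does not depend on $n$): fix $\varepsilon>0$; then for all $n\geq n_0$,

$$0\leq P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\leq \frac{\operatorname{Var}(\hat{\theta}_n)}{(\varepsilon-|E(\hat{\theta}_n)-\theta_0|)^2}\leq \frac{4\operatorname{Var}(\hat{\theta}_n)}{\varepsilon^2}\rightarrow 0,$$

and the Squeeze Theorem gives $P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\rightarrow 0$ for every $\varepsilon>0$, i.e. $\hat{\theta}_n$ is consistent for $\theta_0$.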
 
Thanks for the help! :) You're a legend...
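As a quick numerical illustration of the result (not from the thread), here is a minimal simulation sketch. It uses the hypothetical estimator $\hat{\theta}_n=\bar{X}_n+1/n$ of an assumed normal mean $\theta_0=2$ with $\sigma=1$: it is biased for every finite $n$, but asymptotically unbiased with $\operatorname{Var}(\hat{\theta}_n)=\sigma^2/n\rightarrow 0$, so the theorem predicts $P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\rightarrow 0$.

Code:
import numpy as np

rng = np.random.default_rng(0)

theta0, sigma = 2.0, 1.0   # assumed true mean and standard deviation for the illustration
eps = 0.1                  # the epsilon in the consistency definition
n_trials = 2_000           # Monte Carlo replications per sample size

for n in (10, 100, 1_000, 10_000):
    # Estimator theta_hat_n = sample mean + 1/n: biased for every finite n,
    # but E(theta_hat_n) -> theta0 and Var(theta_hat_n) = sigma^2 / n -> 0.
    samples = rng.normal(theta0, sigma, size=(n_trials, n))
    theta_hat = samples.mean(axis=1) + 1.0 / n
    miss_prob = np.mean(np.abs(theta_hat - theta0) >= eps)
    print(f"n={n:>6}: estimated P(|theta_hat - theta0| >= {eps}) = {miss_prob:.4f}")

The estimated probabilities should shrink toward zero as $n$ grows, matching the consistency conclusion above.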
 
