Statistics: Consistent Estimators

  • Context: Graduate
  • Thread starter: kingwinner
  • Tags: Estimators, Statistics

Discussion Overview

The discussion revolves around the concept of consistent estimators in statistics, particularly focusing on the conditions under which an asymptotically unbiased estimator is considered consistent. Participants explore the implications of variance not tending to zero and whether this can conclusively determine inconsistency.

Discussion Character

  • Debate/contested
  • Mathematical reasoning
  • Homework-related

Main Points Raised

  • One participant presents a theorem stating that an asymptotically unbiased estimator is consistent if its variance tends to zero as the sample size increases.
  • Another participant questions whether the converse is true, specifically if an estimator with a non-zero limit of variance can be definitively deemed inconsistent.
  • Some participants express difficulty in proving the converse and suggest that the theorem may not be "if and only if," indicating uncertainty about the implications of variance not tending to zero.
  • Several participants discuss the implications of variance in relation to convergence in probability, suggesting that if variance does not tend to zero, it may imply that the estimator cannot converge probabilistically.
  • There is a recurring theme of uncertainty regarding the validity of the converse statement and its proof, with some participants citing external sources and personal doubts about its truth.

Areas of Agreement / Disagreement

Participants do not reach a consensus on whether the converse of the theorem is true. There are competing views on the implications of variance not tending to zero, with some arguing it indicates inconsistency while others question this conclusion.

Contextual Notes

Participants express challenges in proving the necessary mathematical relationships and theorems, indicating a reliance on specific definitions and theorems that may not cover all scenarios. There is also mention of the complexity involved in calculating variances of certain estimators.

kingwinner
1) Theorem:
An asymptotically unbiased estimator \hat\theta for \theta is a consistent estimator of \theta IF
\lim_{n \to \infty} \operatorname{Var}(\hat\theta) = 0

Now my question is, if the limit is NOT zero, can we conclude that the estimator is NOT consistent? (i.e. is the theorem actually "if and only if", or is the theorem just one way?)
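As a sanity check on the forward direction, here is a minimal simulation (illustrative assumptions, not part of the problem: iid N(\theta, 1) data with the sample mean as the estimator, so \operatorname{Var}(\hat\theta) = 1/n \to 0):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, eps = 2.0, 0.1  # true parameter and the tolerance in the consistency definition

for n in [10, 100, 1000]:
    # 5000 replications of the sample mean of n iid N(theta, 1) draws
    xbar = rng.normal(theta, 1.0, size=(5000, n)).mean(axis=1)
    var_hat = xbar.var()                           # empirical Var(theta hat), roughly 1/n
    p_far = np.mean(np.abs(xbar - theta) > eps)    # empirical P(|theta hat - theta| > eps)
    print(f"n={n:5d}  Var~{var_hat:.5f}  P(|error|>eps)~{p_far:.4f}")
```

Both columns shrink together, which is what the theorem predicts.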




2) http://www.geocities.com/asdfasdf23135/stat9.JPG

I'm OK with part (a), but I'm badly stuck on part (b). The only theorem I have learned about consistency is the one above. Using that theorem, how can we prove consistency or inconsistency of each of the two estimators? I am having trouble computing and simplifying the variances...


Thank you for your help!
 
Please post this in the homework section. The answer is not tough; show your attempts.
 
1) I've seen the proof of the theorem as stated.
Let A = P(|\hat\theta - \theta| > \epsilon) and B = \operatorname{Var}(\hat\theta)/\epsilon^2.
At the end of the proof we have 0 \le A \le B, and if \operatorname{Var}(\hat\theta) \to 0 as n \to \infty, then B \to 0, so by the squeeze theorem A \to 0, which proves convergence in probability (i.e. proves consistency).
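Spelled out, the bound in that last step is Chebyshev's inequality; written for a general (possibly biased) estimator it reads

```latex
0 \le P\bigl(|\hat\theta_n - \theta| > \epsilon\bigr)
  \le \frac{E\bigl[(\hat\theta_n - \theta)^2\bigr]}{\epsilon^2}
  = \frac{\operatorname{Var}(\hat\theta_n) + \bigl(E[\hat\theta_n] - \theta\bigr)^2}{\epsilon^2},
```

so when the estimator is asymptotically unbiased and its variance tends to zero, both terms in the numerator vanish and the squeeze theorem applies.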

I tried to modify the proof for the converse, but failed. For the case where \lim \operatorname{Var}(\hat\theta) is not zero, it SEEMS to me (looking at the above proof and modifying the last step) that the estimator could be either consistent or inconsistent (i.e. the theorem is inconclusive), since A may or may not tend to zero, so we can't say for sure.

How can we prove rigorously that "for an unbiased estimator, if its variance does not tend to zero, then it is not a consistent estimator"? Is this even a true statement?



2) \operatorname{Var}(aX + b) = a^2 \operatorname{Var}(X)
So the variance of the first estimator is [1/(n-1)^2] \operatorname{Var}[...], where ... is the summation. I am stuck right here. How can I calculate \operatorname{Var}[...]? The terms are not even independent, and (X_i - \bar X) is squared, which creates more trouble in computing the variance.
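Since the linked image is not reproduced here, the following only illustrates the standard shortcut under an added normality assumption (which may or may not match the actual problem): if the X_i are iid N(\mu, \sigma^2), then (n-1)S^2/\sigma^2 \sim \chi^2_{n-1}, which gives \operatorname{Var}(S^2) = 2\sigma^4/(n-1) \to 0 directly, with no term-by-term variance computation. A quick simulation of that identity:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0  # true variance (assumption: iid normal data, not confirmed by the linked problem)

for n in [5, 50, 500]:
    x = rng.normal(0.0, np.sqrt(sigma2), size=(20000, n))
    s2 = x.var(axis=1, ddof=1)  # sample variance S^2, one value per replication
    print(f"n={n:4d}  empirical Var(S^2)={s2.var():.4f}  "
          f"theory 2*sigma^4/(n-1)={2 * sigma2**2 / (n - 1):.4f}")
```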

Thanks!
 
kingwinner said:
How can we prove rigorously that "for an unbiased estimator, if its variance does not tend to zero, then it is not a consistent estimator"? Is this even a true statement?

If the variance doesn't tend to zero, then how can the estimator converge in a probabilistic sense? If there is variance, it means there is a nonzero probability of getting something other than your estimated value. Also, why are you trying to prove the converse when you weren't asked to in the above questions?
http://en.wikipedia.org/wiki/Consistent_estimator
 
John Creighto said:
If the variance doesn't tend to zero, then how can the estimator converge in a probabilistic sense? If there is variance, it means there is a nonzero probability of getting something other than your estimated value. Also, why are you trying to prove the converse when you weren't asked to in the above questions?
http://en.wikipedia.org/wiki/Consistent_estimator

My textbook states the theorem in one direction only (the "if"), so if I can prove that the converse is also true (an "iff"), then I have a way of proving that some estimator is NOT consistent, but I highly doubt that the converse of the theorem is true. Note that with the theorem stated "one way", I can only prove that something is consistent; I have no way of proving that something is NOT consistent.
 
kingwinner said:
My textbook states the theorem in one direction only (the "if"), so if I can prove that the converse is also true (an "iff"), then I have a way of proving that some estimator is NOT consistent, but I highly doubt that the converse of the theorem is true. Note that with the theorem stated "one way", I can only prove that something is consistent; I have no way of proving that something is NOT consistent.

I think you're overthinking it. But anyway, if you must: show that if the variance doesn't go to zero, then the estimator cannot converge in probability. I would probably use contradiction.
 
But are you sure that the following is a true statement?
"If lim Var(theta hat) is NOT equal to zero, then 'theta hat' is NOT consistent."

I am having trouble proving it, and a search on the internet seems to turn up some evidence that the statement (i.e. the converse of the originally stated theorem) is not true. I saw somebody say so, but he/she might be wrong.
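For reference, the counterexamples that such searches typically turn up look like the following (an illustrative construction, not from the textbook): let T_n equal the sample mean \bar X_n with probability 1 - 1/n, and equal \pm n with probability 1/(2n) each. Then E[T_n] = (1 - 1/n)\theta \to \theta (asymptotically unbiased) and T_n \to \theta in probability, yet \operatorname{Var}(T_n) \to \infty. A short simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, eps, reps = 0.0, 0.1, 10000

for n in [10, 100, 1000]:
    # base estimator: sample mean of n iid N(theta, 1) draws
    xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
    # with probability 1/n, replace the estimate by +n or -n (equal chance)
    spike = rng.random(reps) < 1.0 / n
    sign = rng.choice([-1.0, 1.0], size=reps)
    t = np.where(spike, sign * n, xbar)
    print(f"n={n:5d}  Var(T_n)~{t.var():9.2f}  "
          f"P(|T_n - theta| > eps)~{np.mean(np.abs(t - theta) > eps):.4f}")
```

The hit probability still goes to zero (consistency) while the variance grows, so the converse fails as stated. A contradiction argument of the kind suggested above does go through when the estimator is uniformly bounded, which is why a counterexample has to let the rare values grow with n.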
 
kingwinner said:
But are you sure that the following is a true statement?
"If lim Var(theta hat) is NOT equal to zero, then 'theta hat' is NOT consistent."

I am having trouble proving it, and a search on the internet seems to turn up some evidence that the statement (i.e. the converse of the originally stated theorem) is not true. I saw somebody say so, but he/she might be wrong.

Okay. Let's say P((\theta - \hat\theta_n)^2 > \epsilon) goes to zero for all \epsilon but P(|\theta - \hat\theta_n| > \epsilon) doesn't.

This would imply that there exists a positive \epsilon for which P(|\theta - \hat\theta_n| > \epsilon) does not go to zero as n grows.

This is equivalent to saying that P(|\theta - \hat\theta_n|^2 > \epsilon^2) does not go to zero either, since
|\theta - \hat\theta_n|^2 > \epsilon^2 \iff |\theta - \hat\theta_n| > \epsilon.

But this contradicts the original hypothesis that P((\theta - \hat\theta_n)^2 > \epsilon) goes to zero for all \epsilon.
 
