Statistics: Consistent Estimators

In summary: the thread argues that an estimator cannot be consistent while its variance fails to go to zero, i.e. that the theorem is effectively "if and only if": if the variance of an (asymptotically unbiased) estimator does not go to zero, then the estimator is not consistent.
  • #1
kingwinner
1) Theorem:
An asymptotically unbiased estimator [tex]\hat\theta[/tex] for [tex]\theta[/tex] is a consistent estimator of [tex]\theta[/tex] IF
[tex]\lim_{n\to\infty} \operatorname{Var}(\hat\theta) = 0.[/tex]

Now my question is, if the limit is NOT zero, can we conclude that the estimator is NOT consistent? (i.e. is the theorem actually "if and only if", or is the theorem just one way?)




2) http://www.geocities.com/asdfasdf23135/stat9.JPG

I'm OK with part a, but I'm badly stuck on part b. The only theorem I've learned about consistency is the one above. Using that theorem, how can we prove the consistency or inconsistency of each of the two estimators? I'm having trouble computing and simplifying the variances...


Thank you for your help!
 
  • #2
Please post it in homework section. The answer is not tough. Show your attempts.
 
  • #3
1) I've seen the proof for the case of the theorem as stated.
Let [tex]A = P(|\hat\theta - \theta| > \epsilon)[/tex] and [tex]B = \operatorname{Var}(\hat\theta)/\epsilon^2[/tex].
At the end of the proof we have [tex]0 \le A \le B[/tex] (Chebyshev's inequality), and if [tex]\operatorname{Var}(\hat\theta) \to 0[/tex] as [tex]n \to \infty[/tex], then [tex]B \to 0[/tex], so by the squeeze theorem [tex]A \to 0[/tex], which proves convergence in probability (i.e. proves consistency).
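To see the bound numerically, here is a minimal Python sketch (added here, not from the thread) that takes theta hat to be the sample mean of n i.i.d. normal draws and compares a Monte Carlo estimate of A = P(|theta hat - theta| > epsilon) with the Chebyshev bound B = Var(theta hat)/epsilon^2; the distribution, epsilon, and repetition count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 5.0, 2.0   # true mean and standard deviation (assumed values)
eps = 0.5                 # the epsilon in A = P(|theta_hat - theta| > eps)
reps = 10_000             # Monte Carlo repetitions per sample size

for n in [10, 100, 1000]:
    # theta_hat = sample mean of n i.i.d. N(theta, sigma^2) draws, one per repetition
    theta_hat = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
    A = np.mean(np.abs(theta_hat - theta) > eps)   # Monte Carlo estimate of A
    B = (sigma**2 / n) / eps**2                    # Chebyshev bound B = Var(theta_hat)/eps^2
    print(f"n={n:5d}  A ~ {A:.4f}  B = {B:.4f}")
```

Both A and B shrink toward zero as n grows, which is the squeeze argument in action.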

I tried to modify the proof for the converse, but failed. For the case where lim Var(theta hat) is not zero, it SEEMS to me (looking at the above proof and modifying the last step) that the estimator can be either consistent or inconsistent, i.e. the bound is inconclusive: A may or may not tend to zero, so we can't say for sure.

How can we prove rigorously that "for an unbiased estimator, if its variance does not tend to zero, then it's not a consistent estimator"? Is this a true statement?



2) Var(aX+b) = a^2 Var(X)
So the variance of the first estimator is [1/(n-1)^2] Var[...], where ... is the sum [tex]\sum_i (X_i - \bar X)^2[/tex]. I am stuck right here. How can I calculate Var[...]? The terms are not even independent, and each (Xi - Xbar) is squared, which makes the variance even harder to compute.
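As a numerical sanity check (added here, not from the thread), and only assuming the first estimator is the usual S^2 = (1/(n-1)) * sum (Xi - Xbar)^2 applied to normal data (the linked image is no longer available), the following Python sketch estimates Var(S^2) by simulation and compares it to the normal-theory value 2*sigma^4/(n-1), which does go to zero as n grows:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0              # true variance (assumed)
reps = 10_000             # Monte Carlo repetitions per sample size

for n in [10, 100, 1000]:
    x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    s2 = x.var(axis=1, ddof=1)            # S^2 = (1/(n-1)) * sum (X_i - Xbar)^2
    print(f"n={n:5d}  simulated Var(S^2) ~ {s2.var():.4f}"
          f"  2*sigma^4/(n-1) = {2 * sigma2**2 / (n - 1):.4f}")
```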

Thanks!
 
  • #5
kingwinner said:
How can we prove rigorously that "for an unbiased estimator, if its variance does not tend to zero, then it's not a consistent estimator"? Is this a true statement?

If the variance doesn't tend to zero, then how can it converge in a probabilistic sense? If there is variance, it means there is a nonzero probability of getting something other than your estimated value. Also, why are you trying to prove the converse when you weren't asked to in the above questions?
http://en.wikipedia.org/wiki/Consistent_estimator
 
  • #6
John Creighto said:
If the variance doesn't tend to zero, then how can it converge in a probabilistic sense? If there is variance, it means there is a nonzero probability of getting something other than your estimated value. Also, why are you trying to prove the converse when you weren't asked to in the above questions?
http://en.wikipedia.org/wiki/Consistent_estimator

My textbook states the theorem only "one way" (if), so if I can prove that the converse is also true (iff), then I have a way of proving that an estimator is NOT consistent; but I highly doubt that the converse of the theorem is true. Note that with the theorem as stated ("one way"), I can only prove that something is consistent; I have no way of proving that something is NOT consistent.
 
  • #7
kingwinner said:
My textbook states the theorem only "one way" (if), so if I can prove that the converse is also true (iff), then I have a way of proving that an estimator is NOT consistent; but I highly doubt that the converse of the theorem is true. Note that with the theorem as stated ("one way"), I can only prove that something is consistent; I have no way of proving that something is NOT consistent.

I think you're overthinking it. But anyway, if you must: show that if the variance doesn't go to zero, then it cannot converge in probability. I would probably use contradiction.
 
  • #8
But are you sure that the following is a true statement?
"If lim Var(theta hat) is NOT equal to zero, then 'theta hat' is NOT consistent."

I am having trouble proving it, and a search on the internet seems to turn up some evidence that the statement (i.e. the converse of the originally stated theorem) is not true. I saw somebody saying that, but he/she might be wrong.
 
  • #9
kingwinner said:
But are you sure that the following is a true statement?
"If lim Var(theta hat) is NOT equal to zero, then 'theta hat' is NOT consistent."

I am having trouble proving it, and a search on the internet seems to turn up some evidence that the statement (i.e. the converse of the originally stated theorem) is not true. I saw somebody saying that, but he/she might be wrong.

Okay. Let's say [tex]P((\theta- \hat \theta_n )^2>\epsilon)[/tex] goes to zero for all [tex]\epsilon > 0[/tex], but [tex]P(|\theta- \hat \theta_n |>\epsilon)[/tex] doesn't.

This would imply that there exist a positive [tex]\epsilon[/tex] and a [tex]\delta > 0[/tex] such that
[tex]P(|\theta- \hat \theta_n |>\epsilon) > \delta[/tex] for infinitely many n.

This is equivalent to saying that there is a nonzero probability that
[tex]|\theta- \hat \theta_n |^2>\epsilon^2[/tex], since:
[tex]|\theta- \hat \theta_n |^2>\epsilon^2 \iff |\theta- \hat \theta_n |>\epsilon.[/tex]

But this contradicts the original hypothesis that
[tex]P((\theta- \hat \theta_n )^2>\epsilon)[/tex] goes to zero for all [tex]\epsilon > 0[/tex].
 

1. What is the definition of a consistent estimator in statistics?

A consistent estimator is a statistical method or technique that produces an estimate of a population parameter that approaches the true value of the parameter as the sample size increases. In other words, as more data is collected, the estimate becomes more accurate and converges to the true value.

2. How is consistency related to unbiasedness in statistics?

Consistency and unbiasedness are both important properties of an estimator. Unbiasedness means that, on average, the estimator produces an estimate equal to the true value of the parameter. Consistency, on the other hand, means that the estimate converges to the true value as the sample size increases. The two properties are distinct: a consistent estimator can be biased, and an unbiased estimator need not be consistent.

3. Can an estimator be consistent if it is biased?

Yes, an estimator can still be consistent if it is biased. As long as the bias goes to zero as the sample size increases (along with the spread of the estimator), the estimator can still converge to the true value of the parameter and be considered consistent.
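For a concrete illustration (added here, not part of the original answer): the maximum-likelihood variance estimator (1/n) * sum (Xi - Xbar)^2 is biased, with mean (n-1)*sigma^2/n, yet consistent. The Python sketch below (arbitrary distribution and simulation settings) shows both the bias and the probability of a large error shrinking as n grows.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 4.0              # true variance (assumed)
reps = 10_000             # Monte Carlo repetitions per sample size

for n in [5, 50, 500]:
    x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    mle = x.var(axis=1, ddof=0)                 # biased MLE: (1/n) * sum (X_i - Xbar)^2
    bias = mle.mean() - sigma2                  # theoretical bias is -sigma2 / n
    miss = np.mean(np.abs(mle - sigma2) > 0.5)  # P(|mle - sigma2| > 0.5), should shrink
    print(f"n={n:5d}  bias ~ {bias:+.4f}  P(|error| > 0.5) ~ {miss:.4f}")
```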

4. What are some common examples of consistent estimators in statistics?

Some common examples of consistent estimators include the sample mean, the sample variance, and maximum likelihood estimators (the latter under standard regularity conditions). These estimators produce estimates that become more accurate as more data is collected and can be used to estimate population parameters such as the population mean and variance.

5. How can one assess the consistency of an estimator?

One way to assess the consistency of an estimator is to compare its performance across different sample sizes. As the sample size increases, the estimates it produces should become more accurate and closer to the true value of the parameter. Criteria such as the mean squared error (MSE), which should shrink toward zero as the sample size grows, can also be used to evaluate consistency.
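For example, here is an added Python sketch (not part of the original answer) comparing the empirical MSE of the sample mean, which is consistent, with the MSE of an estimator that uses only the first observation, which is not; only the former shrinks toward zero as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma = 5.0, 2.0   # true mean and standard deviation (assumed)
reps = 10_000             # Monte Carlo repetitions per sample size

for n in [10, 100, 1000]:
    x = rng.normal(theta, sigma, size=(reps, n))
    mse_mean  = np.mean((x.mean(axis=1) - theta) ** 2)  # MSE of the sample mean, -> 0
    mse_first = np.mean((x[:, 0] - theta) ** 2)         # MSE of "just use X_1", stays near sigma^2
    print(f"n={n:5d}  MSE(sample mean) ~ {mse_mean:.4f}  MSE(X_1) ~ {mse_first:.4f}")
```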
