Statistics: Consistent Estimators

In summary: the variance criterion for consistency (asymptotic unbiasedness plus a variance tending to zero) is a sufficient condition, not a necessary one, so a non-vanishing variance alone does not prove an estimator inconsistent.
  • #1
kingwinner

Homework Statement


Q1) Theorem:
An asymptotically unbiased estimator 'theta hat' for 'theta' is a consistent estimator of 'theta' IF
lim_{n->inf} Var(theta hat) = 0

Now my question is, if the limit is NOT zero, can we conclude that the estimator is NOT consistent? (i.e. is the theorem actually "if and only if", or is the theorem just one way?)




Q2) http://www.geocities.com/asdfasdf23135/stat9.JPG

I'm OK with part (a), but I am badly stuck on part (b). The only theorem I have learned about consistency is the one above. Using it, how can we prove consistency or inconsistency of each of the two estimators? I am having trouble computing and simplifying the variances...


Homework Equations


N/A

The Attempt at a Solution


Q1) I've seen the proof for the case of the theorem as stated.
Let A=P(|theta hat - theta|>epsilon) and B=Var(theta hat)/epsilon^2
At the end of the proof we have 0 <= A <= B (this is Chebyshev's inequality), and if Var(theta hat) -> 0 as n -> inf, then B -> 0, so by the squeeze theorem A -> 0, which proves convergence in probability (i.e. proves consistency).
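To convince myself the bound in that last step behaves as claimed, I ran a quick numerical check (a sketch in Python; the Uniform(0,1) setup and the names are my own choices, not from the problem):

```python
import numpy as np

def chebyshev_check(n=50, eps=0.1, reps=100_000, seed=0):
    """Estimate A = P(|Xbar - mu| > eps) and compute B = Var(Xbar)/eps^2
    for the sample mean of n Uniform(0, 1) draws (mu = 1/2, sigma^2 = 1/12)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=(reps, n))
    xbar = x.mean(axis=1)
    A = np.mean(np.abs(xbar - 0.5) > eps)   # empirical tail probability
    B = (1.0 / 12.0 / n) / eps**2           # Var(Xbar) = sigma^2 / n
    return A, B

A, B = chebyshev_check()
print(f"A = {A:.4f} <= B = {B:.4f}")
```

As expected, the empirical tail probability A sits well below the Chebyshev bound B.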

I tried to modify the proof for the converse, but failed. For the case where lim Var(theta hat) is not zero, it SEEMS to me (looking at the above proof and modifying the last step) that the estimator can be consistent or inconsistent, i.e. the theorem is inconclusive, since A may tend to zero or it may not, so we can't say for sure.

How can we prove rigorously that "for an unbiased estimator, if its variance does not tend to zero, then it is not a consistent estimator"? Is this a true statement?
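To test my suspicion that the converse fails, I cooked up a contaminated estimator (my own construction, so take it with a grain of salt): theta hat = Xbar + sqrt(n)*B_n, where B_n ~ Bernoulli(1/n) is independent of the data. Its bias is sqrt(n)/n = 1/sqrt(n) -> 0, and its exact variance is sigma^2/n + n*(1/n)*(1 - 1/n), which equals 1 for every n, yet it should still be consistent because the contamination event has probability 1/n -> 0. A quick simulation:

```python
import numpy as np

def tail_prob_and_var(n, eps=0.25, reps=200_000, seed=1):
    """theta hat = Xbar + sqrt(n)*B, B ~ Bernoulli(1/n), data ~ N(0, 1).
    Returns the empirical P(|theta hat - 0| > eps) and the exact variance."""
    rng = np.random.default_rng(seed)
    xbar = rng.normal(0.0, 1.0, size=reps) / np.sqrt(n)    # Xbar ~ N(0, 1/n)
    contaminated = rng.random(reps) < 1.0 / n              # Bernoulli(1/n) events
    theta_hat = xbar + np.sqrt(n) * contaminated
    tail = np.mean(np.abs(theta_hat) > eps)                # tail probability estimate
    exact_var = 1.0 / n + n * (1.0 / n) * (1.0 - 1.0 / n)  # Var(Xbar) + Var(sqrt(n)*B)
    return tail, exact_var

for n in (10, 100, 1000):
    tail, v = tail_prob_and_var(n)
    print(f"n={n:5d}  P(|error| > 0.25) ~ {tail:.4f}  Var = {v:.3f}")
```

The tail probability shrinks toward zero while the variance stays pinned at 1, which (if my construction is right) means the statement in quotes is false.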



Q2) Var(aX+b) = a^2 Var(X)
So the variance of the first estimator is [1/(n-1)^2]Var[...] where ... is the summation stuff. I am stuck right here. How can I calculate Var[...]? The terms are not even independent...and (Xi-Xbar) is squared, which creates more trouble in computing the variance


Thanks for helping!
  • #2


It is important to approach this question with a critical and analytical mindset. Let's break down the post and address each question separately.

Q1) The theorem states that an asymptotically unbiased estimator is consistent if its variance tends to zero as the sample size n tends to infinity. The question is whether this is an "if and only if" statement: if the limit of the variance is not zero, can we conclude that the estimator is not consistent?

The answer is no: the theorem is one-directional, not an "if and only if". The squeeze argument only gives sufficiency. With A = P(|theta hat - theta| > epsilon) and B = Var(theta hat)/epsilon^2, Chebyshev's inequality gives 0 <= A <= B, so B -> 0 forces A -> 0; but A -> 0 tells us nothing about B, because the inequality runs only one way. Indeed, an estimator can be consistent while its variance stays bounded away from zero (or is even infinite): a rare event whose probability tends to zero can carry a value large enough to keep the variance up without affecting convergence in probability. So a non-vanishing variance by itself does not prove inconsistency; to show an estimator is inconsistent, one must argue directly that it fails to converge in probability to theta.

Q2) In part b the task is to decide the consistency or inconsistency of two estimators. The theorem above can be used, but only in one direction: after checking asymptotic unbiasedness, a variance that tends to zero establishes consistency. If the variance does not tend to zero, the theorem is silent and a direct argument is needed.

To calculate the variance of the first estimator, the property Var(aX + b) = a^2 Var(X) does apply: it pulls the factor 1/(n-1)^2 out front. The remaining difficulty is the variance of the sum of the dependent terms (Xi - Xbar)^2. Rather than expanding everything by brute force, it helps to recognise that sum as (n-1)S^2, where S^2 is the sample variance, and to use known results about the moments of S^2 (for i.i.d. data with a finite fourth moment, Var(S^2) has a closed form in terms of the central moments).
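For i.i.d. data with a finite fourth moment, a known closed form is Var(S^2) = (1/n)(mu_4 - ((n-3)/(n-1)) sigma^4), which for normal data reduces to 2 sigma^4/(n-1) and tends to zero. A quick simulation (a sketch in Python, assuming standard normal data; the function names are mine) checks the normal case:

```python
import numpy as np

def var_of_sample_variance(n, reps=100_000, seed=2):
    """Empirical Var(S^2) for S^2 = (1/(n-1)) * sum((Xi - Xbar)^2),
    with Xi ~ N(0, 1), versus the normal-theory value 2/(n-1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=(reps, n))
    s2 = x.var(axis=1, ddof=1)          # unbiased sample variance, one per replicate
    return s2.var(), 2.0 / (n - 1)      # empirical vs. theoretical

for n in (5, 50, 500):
    emp, theo = var_of_sample_variance(n)
    print(f"n={n:4d}  empirical {emp:.4f}  theoretical {theo:.4f}")
```

The empirical and theoretical values agree and both shrink like 1/n, which is exactly what the consistency theorem needs.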

Once the variance is in hand, the theorem settles the question in one direction: if the variance (together with the bias) tends to zero, the estimator is consistent. If the variance does not tend to zero, the theorem alone does not let us conclude inconsistency; that would have to be shown directly.

In conclusion, the key steps are to check asymptotic unbiasedness, examine the limiting variance, and keep in mind that the variance criterion is sufficient for consistency but not necessary.
 

What is the definition of a consistent estimator in statistics?

A consistent estimator in statistics is one whose estimates converge in probability to the true population parameter as the sample size increases. In other words, as more data are collected, a consistent estimator becomes arbitrarily accurate with probability approaching one.
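This convergence is easy to see numerically. A minimal sketch (assuming i.i.d. Exponential(1) data, so the true mean is 1; the setup is illustrative, not from the thread):

```python
import numpy as np

def mean_errors(ns=(10, 1_000, 100_000), seed=3):
    """Absolute error of the sample mean as an estimator of the
    population mean (= 1 for Exponential(1)), at several sample sizes."""
    rng = np.random.default_rng(seed)
    return {n: abs(rng.exponential(1.0, size=n).mean() - 1.0) for n in ns}

for n, err in mean_errors().items():
    print(f"n={n:6d}  |estimate - 1| = {err:.4f}")
```

The error at n = 100,000 is tiny, illustrating consistency of the sample mean; any single run at small n can be lucky or unlucky, which is why consistency is a statement about probabilities, not about one sample path.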

What are the key properties of a consistent estimator?

Consistency itself is a single property: convergence in probability to the true parameter. A widely used sufficient condition combines two ingredients: asymptotic unbiasedness (the bias tends to zero) and a variance that tends to zero. Unbiasedness and efficiency, which are often listed alongside consistency, are separate finite-sample properties; neither is necessary for consistency.
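The sufficient condition above can be written out via the usual mean-squared-error decomposition (a sketch of the standard argument, applying Markov's inequality to the squared error):

```latex
\operatorname{MSE}(\hat\theta_n)
  = \mathbb{E}\!\left[(\hat\theta_n - \theta)^2\right]
  = \operatorname{Var}(\hat\theta_n) + \operatorname{bias}(\hat\theta_n)^2,
\qquad
P\!\left(|\hat\theta_n - \theta| > \varepsilon\right)
  \le \frac{\operatorname{MSE}(\hat\theta_n)}{\varepsilon^2}.
```

If both the variance and the bias tend to zero, the right-hand side vanishes for every epsilon > 0, which is exactly convergence in probability.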

How can we determine if an estimator is consistent?

One common route is the Law of Large Numbers, which states that the sample mean of i.i.d. observations with a finite mean converges in probability to the population mean. Any estimator built as a continuous function of sample means therefore inherits consistency. More generally, one can verify the sufficient condition above: show that both the bias and the variance of the estimator tend to zero.

What are some examples of consistent estimators in statistics?

Some common examples of consistent estimators include the sample mean and the sample variance; both become more accurate as the sample size increases. Other examples include method of moments estimators and, under standard regularity conditions, maximum likelihood estimators.

Why is it important to use consistent estimators in statistical analysis?

Consistent estimators are important because they provide reliable and accurate estimates of population parameters. This is especially crucial in situations where the sample size is small or the population is unknown. Using consistent estimators ensures that the results of statistical analysis are valid and can be applied to the larger population.
