Why is/was consistency of estimators desired?

In summary, the article discusses Fisher's criteria for a good statistic: it should be consistent, sufficient, and efficient. SD and MD meet the first two criteria to the same extent; according to Fisher, SD is superior on the third, since it estimates the population parameter with the smaller probable error.
  • #1
Stephen Tashi
Science Advisor

In an article I found while researching another thread ("Revisiting a 90-year-old debate: the advantages of the mean deviation", http://www.leeds.ac.uk/educol/documents/00003759.htm ), the author states this bit of statistics history:

Fisher had proposed that the quality of any statistic could be judged in terms of three characteristics. The statistic, and the population parameter that it represents, should be consistent (i.e. calculated in the same way for both sample and population). The statistic should be sufficient in the sense of summarising all of the relevant information to be gleaned from the sample about the population parameter. In addition, the statistic should be efficient in the sense of having the smallest probable error as an estimate of the population parameter. Both SD and MD meet the first two criteria (to the same extent). According to Fisher, it was in meeting the last criteria that SD proves superior.

I recognize the descriptions of "sufficient" and "efficient" as modern criteria. But the description of "consistent" seems rather simple-minded. Was the idea of "consistent" that if the estimator and the population parameter were calculated "in the same way", then the probability of the estimate being near the true value of the parameter would approach 1.0 as the sample size approached infinity?
 
  • #2
Number Nine

Was the idea of "consistent" that if the estimator and the population parameter were calculated "in the same way", then the probability of the estimate being near the true value of the parameter would approach 1.0 as the sample size approached infinity?

I glanced at my old mathematical statistics textbook, and it defines a sequence of estimators (in most cases, taken as sample size increases) as consistent if it converges in probability to the true value of the parameter, which is the only way I remember ever seeing it defined anywhere. I assume there are some other equivalent definitions.
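For anyone who wants to see that definition in action, here is a minimal Monte Carlo sketch in Python (the exponential distribution, tolerance eps, and sample sizes are arbitrary choices for illustration): it estimates P(|sample mean − μ| < eps) at increasing sample sizes, and the probability climbs toward 1, which is exactly convergence in probability of the sample mean.

```python
import numpy as np

# Monte Carlo illustration of convergence in probability:
# estimate P(|sample mean - mu| < eps) for increasing n.
rng = np.random.default_rng(0)
mu, eps, trials = 2.0, 0.1, 1000  # arbitrary illustration choices

for n in [10, 100, 1000, 10000]:
    samples = rng.exponential(scale=mu, size=(trials, n))  # true mean = mu
    means = samples.mean(axis=1)
    prob_close = np.mean(np.abs(means - mu) < eps)
    print(f"n = {n:6d}:  P(|mean - mu| < {eps}) ≈ {prob_close:.3f}")
```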
 
  • #3


My understanding is pretty much the same as Number Nine's, except that I quantify it in terms of the variance of the estimator converging to 0 as the number of samples approaches the size of the population: for something like a census that size is finite, but for a theoretical distribution it is infinite.

In terms of things being calculated "the same way", it would seem there should be some similarity between the population parameter and the distribution of the estimator, since both are based on the same underlying PDF, but I'd be interested to hear any further comments on this.

The only other thing I see as important is the actual nature of the convergence, as opposed to merely the fact that convergence exists.

Typically this is looked at in terms of how the variance changes with increasing sample size, but I would think it's equally important to see how the whole distribution P(X = x) changes as n → ∞, rather than just the variance.
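As a sketch of that point, here is a small Python experiment (the standard Cauchy example and the tolerance are my own illustrative choices): for Cauchy data the sample mean has no finite variance and its distribution never concentrates, while the sample median does converge in probability to the true median, so looking only at variance would miss the story entirely.

```python
import numpy as np

# Compare how much probability mass lands near the true center (0)
# for two estimators of the center of a standard Cauchy distribution.
# The sample mean of Cauchy data is itself standard Cauchy for every n
# (it never concentrates); the sample median is consistent.
rng = np.random.default_rng(0)
eps, trials = 0.1, 1000  # illustrative choices

for n in [10, 100, 1000, 10000]:
    samples = rng.standard_cauchy(size=(trials, n))
    p_mean = np.mean(np.abs(samples.mean(axis=1)) < eps)
    p_median = np.mean(np.abs(np.median(samples, axis=1)) < eps)
    print(f"n = {n:6d}:  P(|mean| < {eps}) ≈ {p_mean:.3f}   "
          f"P(|median| < {eps}) ≈ {p_median:.3f}")
```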
 
  • #4
Stephen Tashi

I understand (or can understand, if I read carefully) the modern definition of "consistency" for an estimator. My original post is mainly about the old-fashioned definition of consistency, which says the estimator must be computed "in the same way" as the parameter it estimates.

(An interesting historical question: when did the modern definition of "consistency" supersede the old one?)

I think the condition "in the same way" can be made precise by saying that we compute an (old-fashioned) consistent estimator for the parameter P by treating the sample as a population (i.e. as defining a distribution) and defining the estimate by the same formula that defines the parameter P.

If that's what was meant in olden times, then technically the unbiased estimator for the variance of a Gaussian distribution was not consistent, since it is not computed "in the same way" as the population parameter: it divides by n − 1, while the population variance formula, applied to the sample treated as a population, divides by n.
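To make the plug-in reading concrete, here is a minimal sketch in Python (the Gaussian sample and its size are arbitrary illustration choices): the plug-in estimator applies the population variance formula verbatim to the sample, while the familiar unbiased estimator divides by n − 1 and so is not computed "in the same way".

```python
import numpy as np

# "Old-fashioned" (Fisher) consistency as a plug-in rule: treat the
# sample as a population and apply the population formula verbatim.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=3.0, size=50)  # true variance = 9

plug_in = np.mean((x - x.mean()) ** 2)                 # divide by n
unbiased = np.sum((x - x.mean()) ** 2) / (len(x) - 1)  # divide by n - 1

print(f"plug-in  (population formula, /n): {plug_in:.4f}")
print(f"unbiased (/(n - 1)):               {unbiased:.4f}")
```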
 
  • #5


The concept of consistency in estimators is a fundamental principle in statistics that has long been valued. It refers to the idea that as the sample size increases, the estimate of a population parameter should approach the true value of that parameter. In other words, the estimate should be reliable and accurate, and not heavily influenced by random variations in the sample.

Consistency is important because it allows us to have confidence in the results of our statistical analyses. If an estimator is consistent, it means that with larger sample sizes, we can be more confident that our estimate is close to the true value of the population parameter. This is especially important in scientific research, where we want to draw accurate conclusions about a population based on a sample.

Furthermore, consistency is essential for making valid comparisons and drawing meaningful conclusions from statistical analyses. If the estimator is not consistent, then the results may not accurately reflect the population being studied. This can lead to incorrect conclusions and potentially misleading findings.

In the context of the article you mentioned, Fisher's proposal of consistency as one of the three characteristics of a good statistic highlights its importance in the evaluation of statistical methods. It is considered a necessary condition for a good estimator, along with sufficiency and efficiency.

In conclusion, consistency of estimators is desired because it ensures the accuracy and reliability of statistical analyses, allowing for valid comparisons and meaningful conclusions to be drawn from the data.
 

1. Why is consistency of estimators important in scientific research?

The consistency of estimators is important because it ensures that, with enough data, the estimated values are close to the true values of the population parameter. This allows for more accurate and reliable conclusions to be drawn from the data. Without consistency, the estimates need not approach the true values no matter how much data is collected, which can lead to incorrect conclusions.

2. How does consistency of estimators affect the validity of statistical analysis?

The consistency of estimators is crucial for the validity of statistical analysis. If the estimators are not consistent, the results of the analysis may not accurately reflect the true population. This can lead to incorrect conclusions and potentially misleading findings.

3. What are the consequences of using inconsistent estimators in scientific studies?

The consequences of using inconsistent estimators in scientific studies can be severe. It can lead to incorrect conclusions and undermine the validity of the research. This can also have a negative impact on future studies and the overall progress of scientific knowledge in a particular field.

4. How does the sample size affect the consistency of estimators?

Sample size is central to consistency. As the sample size increases, a consistent estimator tends to fall closer and closer to the true population parameter. This is why larger sample sizes are often preferred in statistical analysis.

5. Is consistency of estimators always guaranteed in scientific research?

No, consistency of estimators is not always guaranteed in scientific research. It depends on factors such as the sampling method and the underlying assumptions of the statistical model; for example, an estimator can fail to be consistent if the model is misspecified. It is important for researchers to check for consistency and take appropriate measures to address any issues that may arise.
