Why is/was consistency of estimators desired?

  1. Jul 30, 2012 #1

    Stephen Tashi

    User Avatar
    Science Advisor

    Why is/was "consistency" of estimators desired?

    In an article I found while researching another thread ("Revisiting a 90-year-old debate: the advantages of the mean deviation", http://www.leeds.ac.uk/educol/documents/00003759.htm ), the author states this bit of statistics history:

    I recognize the description of "sufficient" and "efficient" as modern criteria. But the description of "consistent" seems rather simple-minded. Was the idea of "consistent" that, if the estimator and the population parameter were calculated "in the same way", the probability of the estimate being near the true value of the parameter would approach 1.0 as the sample size approached infinity?
     
  2. Jul 30, 2012 #2

    Number Nine

    Re: Why is/was "consistency" of estimators desired?

    I glanced at my old mathematical statistics textbook, and it defines a sequence of estimators (in most cases indexed by increasing sample size) as consistent if it converges in probability to the true value of the parameter, which is the only way I remember ever seeing it defined anywhere. I assume there are some other equivalent definitions.
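
    In symbols, that textbook definition is the standard convergence-in-probability statement (my own rendering, not a quote from the book):

    ```latex
    % \hat{\theta}_n is the estimator from a sample of size n, \theta the true parameter
    \hat{\theta}_n \xrightarrow{p} \theta
    \quad \Longleftrightarrow \quad
    \lim_{n \to \infty} P\bigl( \lvert \hat{\theta}_n - \theta \rvert > \epsilon \bigr) = 0
    \quad \text{for every } \epsilon > 0 .
    ```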
     
  3. Jul 31, 2012 #3

    chiro

    User Avatar
    Science Advisor

    Re: Why is/was "consistency" of estimators desired?

    My understanding is pretty much the same as Number Nine's, except that I think of it as quantified in terms of the estimator's variance converging to 0 as the number of samples approaches the size of the population: in something like a census that size is finite, but for a theoretical distribution it's infinite.
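
    (The variance criterion can be tightened into a standard sufficient condition via Chebyshev's inequality, which I'll add for context: if the estimator's bias and variance both tend to 0, convergence in probability follows, because

    ```latex
    P\bigl( \lvert \hat{\theta}_n - \theta \rvert \ge \epsilon \bigr)
    \;\le\; \frac{E\bigl[ (\hat{\theta}_n - \theta)^2 \bigr]}{\epsilon^2}
    \;=\; \frac{\operatorname{Var}(\hat{\theta}_n) + \bigl( E[\hat{\theta}_n] - \theta \bigr)^2}{\epsilon^2} ,
    ```

    and the right-hand side goes to 0.)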

    In terms of things being calculated "the same way", it would seem that there would be some similarity between the population parameter and the distribution of the estimated parameter, since both are based on the same underlying PDF, but I'd be interested to hear any further comments on this.

    I guess the only other thing I see as important is the actual nature of the convergence, as opposed to the mere fact that convergence exists.

    Typically this is looked at in terms of how the variance changes with increasing sample size, but I would think it's equally important to see how the whole distribution P(X = x) changes as n -> infinity, rather than just the variance.
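
    A minimal Monte Carlo sketch of that point (my own illustration, assuming a standard normal population and NumPy; the sample sizes and tolerance are arbitrary): instead of tracking only the variance, estimate the tail probability P(|sample mean - mu| > eps) directly for growing n.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu, eps, reps = 0.0, 0.1, 2000  # true mean, tolerance, Monte Carlo replications

    # For each sample size n, estimate P(|sample mean - mu| > eps) by simulation.
    # Consistency of the sample mean says this probability -> 0 as n -> infinity,
    # which tells us more than the shrinking variance alone does.
    for n in (10, 100, 1000, 10000):
        means = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
        tail_prob = np.mean(np.abs(means - mu) > eps)
        print(f"n = {n:5d}   P(|mean - mu| > {eps}) ~ {tail_prob:.3f}")
    ```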
     
  4. Jul 31, 2012 #4

    Stephen Tashi

    User Avatar
    Science Advisor

    Re: Why is/was "consistency" of estimators desired?

    I understand (or can understand if I read carefully) the modern definition of "consistency" for an estimator. My original post is mainly about the old-fashioned definition of consistency that says the estimator must be computed "in the same way" as the parameter that it estimates.

    (An interesting historical question: when did the modern definition of "consistency" supersede the old one?)

    I think the condition "in the same way" can be made precise by saying that we compute an (old-fashioned) consistent estimator for the parameter P by treating the sample as a population (i.e. as defining a distribution) and defining the estimate by the same formula that defines the parameter P.

    If that's what was meant in olden times, then technically the unbiased estimator for the variance of a Gaussian distribution was not consistent, since it is not computed "in the same way" as the population parameter.
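
    To spell out that example with the standard formulas (added for illustration):

    ```latex
    \sigma^2       = \frac{1}{N}   \sum_{i=1}^{N} (x_i - \mu)^2     % population parameter
    \hat{\sigma}^2 = \frac{1}{n}   \sum_{i=1}^{n} (x_i - \bar{x})^2 % plug-in ("same way") estimator
    s^2            = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 % unbiased estimator
    ```

    The plug-in version applies the population formula to the sample; the unbiased version changes the divisor to n - 1, so under the old reading it is not computed "in the same way".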
     