
Statistics: Consistent Estimators

  1. Feb 18, 2009 #1
    1) Theorem:
    An asymptotically unbiased estimator [tex]\hat\theta[/tex] for [tex]\theta[/tex] is a consistent estimator of [tex]\theta[/tex] IF
    [tex]\lim_{n\to\infty} \mathrm{Var}(\hat\theta) = 0[/tex]
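    For a concrete instance of the theorem (a standard example, not from the linked problem): estimate the mean [tex]\mu[/tex] of an i.i.d. sample with finite variance [tex]\sigma^2[/tex] by the sample mean [tex]\bar X_n[/tex]. Then
    [tex]E[\bar X_n]=\mu, \qquad \mathrm{Var}(\bar X_n)=\frac{\sigma^2}{n}\to 0 \ \text{as } n\to\infty,[/tex]
    so the theorem gives consistency of [tex]\bar X_n[/tex] for [tex]\mu[/tex].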

    Now my question is, if the limit is NOT zero, can we conclude that the estimator is NOT consistent? (i.e. is the theorem actually "if and only if", or is the theorem just one way?)




    2) http://www.geocities.com/asdfasdf23135/stat9.JPG

    I'm OK with part a, but I'm badly stuck on part b. The only theorem I have learned about consistency is the one above. Using that theorem, how can we prove the consistency or inconsistency of each of the two estimators? I am having trouble computing and simplifying the variances...


    Thank you for your help!
     
  3. Feb 19, 2009 #2

    ssd


    Please post this in the homework section. The answer is not tough. Show your attempts.
     
  4. Feb 19, 2009 #3
    1) I've seen the proof for the case of the theorem as stated.
    Let A = P(|theta hat - theta| > epsilon) and B = Var(theta hat)/epsilon^2.
    At the end of the proof we have 0 <= A <= B, and if Var(theta hat) -> 0 as n -> inf, then B -> 0, so by the squeeze theorem A -> 0, which proves convergence in probability (i.e. proves consistency).
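    For reference, the bound behind 0 <= A <= B is Chebyshev's inequality applied to the estimator when [tex]E[\hat\theta]=\theta[/tex]:
    [tex]P(|\hat\theta-\theta|>\epsilon)\;\le\;\frac{\mathrm{Var}(\hat\theta)}{\epsilon^2}.[/tex]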

    I tried to modify the proof for the converse, but failed. In the case where lim Var(theta hat) is not zero, it SEEMS to me (from looking at the above proof and modifying the last step) that the estimator could be either consistent or inconsistent (i.e. the theorem is inconclusive), since A may or may not tend to zero, so we can't say for sure.

    How can we prove rigorously that "for an unbiased estimator, if its variance does not tend to zero, then it's not a consistent estimator"? Is this even a true statement?



    2) Var(aX+b) = a^2 Var(X)
    So the variance of the first estimator is [1/(n-1)^2] Var[...] where ... is the sum of the (Xi - Xbar)^2 terms. I am stuck right here. How can I calculate that variance? The terms are not even independent... and (Xi - Xbar) is squared, which makes the variance even harder to compute.

    Thanks!
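    Regarding 2): if the Xi in the linked problem are normal (the image gives the actual setup, so normality is only an assumption here), one standard route avoids expanding the sum term by term: use the fact that [tex](n-1)S^2/\sigma^2 \sim \chi^2_{n-1}[/tex], whose variance is [tex]2(n-1)[/tex]. Then
    [tex]\mathrm{Var}(S^2)=\left(\frac{\sigma^2}{n-1}\right)^2\cdot 2(n-1)=\frac{2\sigma^4}{n-1}\to 0,[/tex]
    and together with unbiasedness the theorem in 1) gives consistency. Without normality you would need the fourth moment of the Xi instead.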
     
  5. Feb 19, 2009 #4
    Last edited: Feb 19, 2009
  6. Feb 19, 2009 #5
    If the variance doesn't tend to zero, then how can it converge in a probabilistic sense? If there is variance, it means there is a nonzero probability of getting something other than your estimated value. Also, why are you trying to prove the converse when you weren't asked to in the above questions?
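    A minimal numerical sketch of that intuition (assuming standard normal data; nothing here comes from the linked problem): take the first observation X_1 as an estimator of the mean. Its variance stays at 1 no matter how large n gets, and the tail probability P(|theta hat - theta| > epsilon) does not shrink, so this particular estimator is not consistent.
[code]
import numpy as np

# theta_hat = X_1: estimate the mean theta of a N(theta, 1) sample by the
# first observation alone.  Var(theta_hat) = 1 for every n, and the
# Monte Carlo estimate of P(|theta_hat - theta| > eps) does not shrink.
rng = np.random.default_rng(0)
theta, eps, reps = 0.0, 0.1, 5000

for n in (10, 100, 1000):
    samples = rng.normal(theta, 1.0, size=(reps, n))
    theta_hat = samples[:, 0]                        # ignores all but X_1
    tail = np.mean(np.abs(theta_hat - theta) > eps)  # ~ P(|theta_hat - theta| > eps)
    print(f"n={n:5d}  Var(theta_hat)~{theta_hat.var():.3f}  P(|err|>eps)~{tail:.3f}")
[/code]
    Of course this only exhibits one inconsistent estimator with non-vanishing variance; it doesn't by itself settle whether the converse of the theorem holds in general.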
    http://en.wikipedia.org/wiki/Consistent_estimator
     
  7. Feb 19, 2009 #6
    My textbook states the theorem only "one way" (if), so if I could prove that the converse is also true (iff), then I would have a way of proving that some estimator is NOT consistent. But I highly doubt that the converse of the theorem is true. Note that with the theorem as stated ("one way"), I can only prove that something is consistent; I have no way of proving that something is NOT consistent.
     
  8. Feb 19, 2009 #7
    I think you're overthinking it. But anyway, if you must: show that if the variance doesn't go to zero then the estimator cannot converge in probability. I would probably use contradiction.
     
  9. Feb 19, 2009 #8
    But are you sure that the following is a true statement?
    "If lim Var(theta hat) is NOT equal to zero, then 'theta hat' is NOT consistent."

    I am having trouble proving it, and a search on the internet seems to turn up some evidence that the statement (i.e. the converse of the originally stated theorem) is not true. I saw somebody saying that, but he/she might be wrong.
     
  10. Feb 19, 2009 #9
    Okay. Let's say [tex]P((\theta- \hat \theta_n )^2>\epsilon)[/tex] goes to zero for all [tex]\epsilon>0[/tex] but [tex]P(|\theta- \hat \theta_n |>\epsilon)[/tex] doesn't.

    This would imply that there exists a positive [tex]\epsilon[/tex] for which [tex]P(|\theta- \hat \theta_n |>\epsilon)[/tex] does not go to zero.

    This is equivalent to saying that [tex]P(|\theta- \hat \theta_n |^2>\epsilon^2)[/tex] does not go to zero, since
    [tex]|\theta- \hat \theta_n |>\epsilon \iff |\theta- \hat \theta_n |^2>\epsilon^2.[/tex]

    But this contradicts the original hypothesis that [tex]P((\theta- \hat \theta_n )^2>\epsilon')[/tex] goes to zero for every [tex]\epsilon'>0[/tex] (take [tex]\epsilon'=\epsilon^2[/tex]).
     
    Last edited: Feb 19, 2009