
Basics of estimation theory

  1. Apr 21, 2010 #1
    Hello everyone,

    I just started reading "Fundamentals of Statistical Signal Processing" by Steven Kay and just finished the first chapter, which describes the estimation problem, PDFs, etc.

    It has some very interesting problems at the end and some of them have me a bit confused. I think they are quite central to understanding the content of this chapter and I was hoping to see if you guys can shed some light on this.

    An unknown parameter [tex]\theta[/tex] influences the outcome of an experiment which is modeled by a random variable x. The PDF of x is:
    [tex]p(x;\theta) = \frac{1}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(x-\theta\right)^{2}\right][/tex]
    A series of experiments is performed and x is found to be always in the range [97, 103]. The experimenter concludes x must be 100. Comment on this.

    I was thinking about this for a while, but I am not sure about my reasoning. I have a hunch that the experimenter is wrong to come to this conclusion, but I am unable to explain why. I have a feeling that I need to know the shape of the PDF and perhaps something about the particular estimator being used.

    I would be really grateful if someone can help me with this. This seems quite a fundamental question and I have a feeling that I have not understood the subject matter well.

    This is always a hassle with self-study!

    Many thanks,

  2. Apr 22, 2010 #2


    Science Advisor

    In your description you are given that the pdf is normal with standard deviation 1 and an unknown mean. By performing the experiments, the experimenter is estimating the mean to be 100.

    You should have said θ, not x: the parameter being estimated is the mean θ, while x is the random outcome, which varies from trial to trial.
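    A quick simulation can make this concrete. The sketch below (my own illustration, not from Kay's book; it assumes the true mean really is θ = 100) draws samples from N(θ, 1) and shows that the outcomes do cluster in roughly [97, 103], the ±3σ interval, and that the sample mean is the natural estimate of θ. It does not show that any individual x "must be" 100:

    ```python
    # Hypothetical sketch: simulate the experiment under the model
    # p(x; theta) = N(theta, 1), assuming the true theta is 100.
    import random

    random.seed(0)
    theta = 100.0  # assumed true parameter for this illustration
    samples = [random.gauss(theta, 1.0) for _ in range(10000)]

    # With sigma = 1, [97, 103] is the +/- 3-sigma interval, so
    # nearly all outcomes (about 99.7%) should land inside it.
    in_range = sum(97 <= x <= 103 for x in samples) / len(samples)

    # The sample mean estimates theta, even though each x scatters around it.
    theta_hat = sum(samples) / len(samples)

    print(f"fraction in [97, 103]: {in_range:.3f}")
    print(f"estimate of theta:     {theta_hat:.2f}")
    ```

    So the data are consistent with θ ≈ 100, but each observed x is still a random draw spread around that mean.
    
    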