Basics of estimation theory

Hello everyone,

I've just started reading "Fundamentals of Statistical Signal Processing" by Steven Kay and have finished the first chapter, which introduces the estimation problem, PDFs, etc.

It has some very interesting problems at the end, and some of them have me a bit confused. They seem quite central to understanding the content of this chapter, so I was hoping you could shed some light on one of them.

Question:
An unknown parameter $$\theta$$ influences the outcome of an experiment, which is modeled by a random variable $$x$$. The PDF of $$x$$ is:
$$p(x;\theta) = \frac{1}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(x-\theta\right)^{2}\right]$$
A series of experiments is performed, and $$x$$ is found to be always in the range [97, 103]. The experimenter concludes that $$\theta$$ must be 100. Comment on this.

I have been thinking about this for a while, but I am not sure about my reasoning. My hunch is that the experimenter is wrong to draw this conclusion, but I am unable to explain why. I have a feeling I need to use the shape of the PDF, and perhaps know something about the particular estimator being used.
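To build some intuition, I tried a quick simulation of the model as stated (a rough sketch only; `frac_in_range` is just a helper I made up, and I'm assuming the unit variance from the PDF in the problem):

```python
import random

def frac_in_range(theta, n=100_000, lo=97.0, hi=103.0, seed=0):
    """Fraction of N(theta, 1) samples that land in [lo, hi]."""
    rng = random.Random(seed)
    hits = sum(lo <= rng.gauss(theta, 1.0) <= hi for _ in range(n))
    return hits / n

# With theta = 100, the interval [97, 103] is theta +/- 3 sigma,
# so nearly all samples land inside it (about 99.7%).
print(frac_in_range(100.0))

# But nearby values of theta also put almost all samples in the
# same range, so the data alone don't single out theta = 100.
print(frac_in_range(100.5))
print(frac_in_range(99.5))
```

If I'm reading this right, several different values of $$\theta$$ are consistent with seeing every observation in [97, 103], which is what makes me doubt the experimenter's certainty, though I'd appreciate confirmation that this is the intended point.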

I would be really grateful if someone could help me with this. It seems like quite a fundamental question, and I have a feeling I have not understood the subject matter well.

This is always a hassle with self-study!

Many thanks,

Luc