Hi,

I'm taking a basic course in statistical methods, and we recently learned about maximum likelihood estimation. We defined the likelihood as a function of some parameter 'a', and found the estimator of 'a' by maximizing the likelihood with respect to it.

As an example, we took the results of some fictional experiment in which the measurements 't' are distributed according to exp(-at), with a statistical error that is normally distributed with a known variance. We defined the likelihood function and maximized it numerically in order to find the estimated value of 'a'.
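To make the procedure concrete, here is a minimal sketch of this kind of numerical maximization. All the specifics (the model y = exp(-a·t) + Gaussian noise, the true parameter values, the sample size) are my own hypothetical choices, not taken from the course; with known sigma, maximizing the Gaussian likelihood reduces to minimizing the negative log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: y_i = exp(-a_true * t_i) + Gaussian noise, sigma known
rng = np.random.default_rng(0)
a_true, sigma = 2.0, 0.05
t = np.linspace(0.1, 2.0, 50)
y = np.exp(-a_true * t) + rng.normal(0.0, sigma, size=t.size)

def neg_log_likelihood(a):
    # Gaussian errors with known sigma: -log L, up to an additive constant
    return np.sum((y - np.exp(-a * t)) ** 2) / (2 * sigma ** 2)

# Maximize the likelihood by minimizing -log L over a bounded interval
res = minimize_scalar(neg_log_likelihood, bounds=(0.1, 10.0), method="bounded")
a_hat = res.x
print(a_hat)  # should land close to a_true
```

Since sigma is a known constant here, this is equivalent to ordinary least squares; the likelihood only starts depending on sigma in a nontrivial way once sigma itself is unknown.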

However, in this example we assumed that we know the variance of the statistical error. If this value of 'sigma' is not known, then we must maximize the likelihood function with respect to both 'a' and 'sigma'. Now comes my question - isn't it possible that we will get more than one solution to this extremum problem? Couldn't two different sets of 'a' and 'sigma' give rise to the same distribution that we saw in the experiment? If that happens, which are the "real" estimators? And besides, even if we deal with only one parameter 'a', couldn't the likelihood function have maxima at two or more values of it?
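One practical way to probe this worry is to do the joint maximization from several starting points and see whether the optimizer always lands on the same (a, sigma). The sketch below extends the same hypothetical exponential-decay model (my own assumed setup, not the course's); note that once sigma is free, the normalization term n·log(sigma) in -log L matters:

```python
import numpy as np
from scipy.optimize import minimize

# Same hypothetical model as before, but now sigma is treated as unknown
rng = np.random.default_rng(1)
a_true, sigma_true = 2.0, 0.05
t = np.linspace(0.1, 2.0, 50)
y = np.exp(-a_true * t) + rng.normal(0.0, sigma_true, size=t.size)

def neg_log_likelihood(params):
    a, sigma = params
    if sigma <= 0:
        return np.inf  # sigma must be positive
    resid = y - np.exp(-a * t)
    # Full Gaussian -log L (up to a constant): the n*log(sigma) term
    # penalizes inflating sigma to hide a bad fit in 'a'
    return t.size * np.log(sigma) + np.sum(resid ** 2) / (2 * sigma ** 2)

# Restart from several initial guesses to check for multiple maxima
starts = [(0.5, 0.2), (2.0, 0.1), (5.0, 0.5)]
fits = [minimize(neg_log_likelihood, s, method="Nelder-Mead") for s in starts]
best = min(fits, key=lambda f: f.fun)
a_hat, sigma_hat = best.x
print(a_hat, sigma_hat)
```

If all starts converge to the same point, that is evidence (not proof) that the maximum is unique for this data set; genuinely multimodal likelihoods do occur for some models, which is exactly why multi-start checks like this are common practice.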

Sorry about the lengthy post...

Thanks,

Chen

**Physics Forums | Science Articles, Homework Help, Discussion**


# Maximum likelihood
