Suppose you have a distribution ##p(x, \mu)## parameterized by its mean ##\mu##.
You take a sample of n points ##(x_{1}, \dots, x_{n})## drawn independently and identically (i.i.d.) from ##p(x, \mu)##.
The maximum likelihood estimator (MLE) of the mean ##\mu## is the value of ##\mu## that maximizes the joint likelihood ##\prod_{i=1}^{n} p(x_{i}, \mu)##. It is easy to find using calculus, by setting the derivative of the log-likelihood to zero.
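For instance, in the Gaussian case with known variance ##\sigma^{2}## (the standard textbook computation), taking logs turns the product into a sum:
$$\ell(\mu) = \ln \prod_{i=1}^{n} p(x_{i}, \mu) = -\frac{n}{2}\ln(2\pi\sigma^{2}) - \sum_{i=1}^{n}\frac{(x_{i}-\mu)^{2}}{2\sigma^{2}},$$
$$\frac{d\ell}{d\mu} = \sum_{i=1}^{n}\frac{x_{i}-\mu}{\sigma^{2}} = 0 \quad\Longrightarrow\quad \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_{i}.$$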
The sample mean is simply ##\frac{x_{1}+x_{2}+\dots+x_{n}}{n}##.
It turns out that for Gaussian, Poisson, and Bernoulli distributions, the MLE of the mean equals the sample mean. Is this the case for ALL distributions? If so, how would I prove it? If not, what is one distribution for which it fails?
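For concreteness, here is the kind of numerical check I have in mind, for the Poisson case (a minimal sketch in Python, assuming NumPy and SciPy are available; the helper name neg_log_likelihood is just illustrative):

[CODE=python]
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(0)
x = rng.poisson(lam=3.7, size=1000)  # i.i.d. sample from a Poisson with mean 3.7

def neg_log_likelihood(mu):
    # Negative log of the joint likelihood prod_i p(x_i, mu)
    return -poisson.logpmf(x, mu).sum()

# Maximize the likelihood numerically over a bracket that surely contains the MLE
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, x.max() + 1.0), method="bounded")

print("numerical MLE:", res.x)
print("sample mean: ", x.mean())  # agrees with the MLE up to optimizer tolerance
[/CODE]

Swapping in another family's log-density (e.g. scipy.stats.norm.logpdf or bernoulli.logpmf) tests the other cases the same way.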
Thanks!
BiP