MLE is biased: are there other estimation methods?

  • Thread starter: omg!
  • Tags: Estimation, MLE
AI Thread Summary
The discussion revolves around estimating the coordinates x_1 and x_2 in a 2D plane given their distance d and variances. The maximum likelihood estimator (MLE) is initially thought to be biased when variances are comparable to d, but further analysis reveals that bias primarily occurs when variances differ significantly. The MLE is generally asymptotically unbiased, meaning it performs better with larger sample sizes, which is a concern since the data consists of only one pair of coordinates. Alternatives to MLE, such as minimum variance unbiased estimators (MVUEs), are suggested for providing estimates of parameter errors. The conversation also emphasizes the importance of calculating the expected value of the estimator to assess bias accurately.
omg!
hi all,
i would appreciate any help you can offer for the following problem.

consider coordinates x_1, x_2 in the plane for which ||x_1-x_2||=d.
suppose that this pair of coordinates can be measured independently, and that the measurements are 2D normally distributed with means x_1, x_2 and variances \sigma^2_1, \sigma^2_2. given the value of d and the known variances, how do i estimate the real positions x_1, x_2 from a pair of measured positions?

the parameters to be estimated can be reduced to the coordinates of the center between x_1 and x_2 (2 parameters, the x and y coordinates), plus an angle parameter describing the direction of the vector x_2 - x_1.

with MLE, i have written down the probability density function, which takes the form of a 4-variate normal distribution with 3 unknown parameters. the extremal point of the log-likelihood in terms of these parameters can be written down explicitly.
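
for reference, the log-likelihood being maximized has the form below (up to additive constants; \vec{m}_1, \vec{m}_2 denote the measured positions, \vec{x}_c the center, and \hat{u}(\alpha)=(\cos\alpha,\sin\alpha); the \pm d/2 offsets assume the two points sit symmetrically about the center, which is one possible convention):

\ell(\vec{x}_c,\alpha)=-\frac{||\vec{m}_1-\vec{x}_c+\frac{d}{2}\hat{u}(\alpha)||^2}{2\sigma_1^2}-\frac{||\vec{m}_2-\vec{x}_c-\frac{d}{2}\hat{u}(\alpha)||^2}{2\sigma_2^2}+\mathrm{const}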

the problem is that the solution seems to be biased if the standard deviations (the square roots of the variances) are on the order of d. for very small variances it seems to work just fine.

do you know why this could be the case? and are there any alternatives to the MLE approach that also provide estimates for the errors of the estimated parameters?

thank you very much.
cheers
 
omg! said:
the problem is that the solution seems to be biased if the standard deviations (the square roots of the variances) are on the order of d.

Are you basing that statement on a Monte-Carlo simulation, or on a numerical solution of the equations satisfied by the MLE? I assume you wouldn't say "seems" if you could compute the expectation of the estimator explicitly.
 
I'd be interested to hear if you have done a Monte-Carlo simulation as Stephen Tashi has suggested, since this should demonstrate how the empirical bias relates to the theoretical results.
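
Something along these lines would do it (a Python/NumPy/SciPy sketch rather than MATLAB; the symmetric \pm d/2 parametrization and all the specific numbers are illustrative assumptions, not anything taken from this thread):

Code:
# Monte-Carlo bias check for the MLE: simulate many experiments of ONE
# measured pair each, maximize the likelihood numerically, and average.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, s1, s2 = 1.0, 0.3, 0.3               # known distance and noise std devs
theta_true = np.array([0.0, 0.0, 0.7])  # true (x_c, y_c, alpha)

def points(theta):
    """True pair positions for parameters (x_c, y_c, alpha)."""
    c, a = theta[:2], theta[2]
    u = np.array([np.cos(a), np.sin(a)])
    return c - 0.5 * d * u, c + 0.5 * d * u

def neg_log_lik(theta, m1, m2):
    """Negative log-likelihood of one measured pair, up to constants."""
    p1, p2 = points(theta)
    return (np.sum((m1 - p1) ** 2) / (2 * s1 ** 2)
            + np.sum((m2 - p2) ** 2) / (2 * s2 ** 2))

p1t, p2t = points(theta_true)
estimates = []
for _ in range(2000):
    m1 = p1t + s1 * rng.standard_normal(2)
    m2 = p2t + s2 * rng.standard_normal(2)
    # start at the truth purely for convergence convenience
    estimates.append(minimize(neg_log_lik, x0=theta_true, args=(m1, m2)).x)

print("estimated bias of (x_c, y_c, alpha):",
      np.mean(estimates, axis=0) - theta_true)

The sample mean of the estimates approximates their expectation, so its difference from the true parameters is a direct estimate of the bias.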
 
thank you very much for your interest.

the maximum likelihood estimator of the aforementioned center coordinate \vec{x}_c=(\vec{x}_1+\vec{x}_2)/2 is given by

x_c=(\frac{x_1+d\cos\alpha}{\sigma_1^2}+\frac{x_2-d\cos\alpha}{\sigma_2^2})/(1/\sigma_1^2+1/\sigma_2^2)

y_c=(\frac{y_1+d\sin\alpha}{\sigma_1^2}+\frac{y_2-d\sin\alpha}{\sigma_2^2})/(1/\sigma_1^2+1/\sigma_2^2)
(x_i, y_i for i=1,2 now denote the measured coordinate components)
while the maximum likelihood estimator of the angle \alpha (the angle of \vec{x}_2-\vec{x}_1 with the x-axis) is
\alpha=\arccos\frac{x_2-x_1}{||\vec{x}_2-\vec{x}_1||}

judging by these formulas, i would say that the estimators are not biased.
contrary to my first post, i have found that a bias occurs only when the variances differ from each other, even when they are an order of magnitude smaller than d. the test was done with MATLAB's normal random number generator (a Python version of the same test is sketched after the observations below).
additional observations:
1. the theoretical variance of \alpha (from the MLE covariance matrix) is much greater than the actual variance of the estimated \alpha, presumably because \alpha is confined to [0,2\pi].
2. the estimates of x_c and y_c are independent of each other, but not of \alpha.
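
here is the kind of test i ran, redone as a Python sketch (the convention below puts the true points at \pm d/2 from the center, which is one possible choice, and the specific numbers are purely illustrative):

Code:
# Bias test for the closed-form center estimate: equal vs unequal variances.
# alpha is estimated as the angle of the measured difference vector m2 - m1.
import numpy as np

rng = np.random.default_rng(1)
d, a = 1.0, 0.7
u = np.array([np.cos(a), np.sin(a)])
p1, p2 = -0.5 * d * u, 0.5 * d * u       # true points, center at the origin

def center_bias(s1, s2, n=200_000):
    m1 = p1 + s1 * rng.standard_normal((n, 2))
    m2 = p2 + s2 * rng.standard_normal((n, 2))
    diff = m2 - m1
    a_hat = np.arctan2(diff[:, 1], diff[:, 0])
    w1, w2 = 1 / s1**2, 1 / s2**2
    # weighted-mean MLE of x_c, with d/2 offsets under this convention
    xc = (w1 * (m1[:, 0] + 0.5 * d * np.cos(a_hat)) +
          w2 * (m2[:, 0] - 0.5 * d * np.cos(a_hat))) / (w1 + w2)
    return xc.mean()                     # true x_c is 0

print("equal variances:  ", center_bias(0.1, 0.1))   # ~ 0
print("unequal variances:", center_bias(0.05, 0.5))  # clearly nonzero

with equal variances the d-terms cancel exactly and the bias vanishes; with unequal variances they do not cancel, and the shrinkage of the mean of cos(alpha_hat) towards zero shows up as a bias in x_c.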

do you think that i have chosen the right way to parameterize the distribution? is there a set of parameters that are independent and unbounded?

thank you again for your advice
 
Ordinarily, MLE is an efficient estimator in the sense that it drives down the mean square error (MSE) more rapidly with sample size than other estimators do. The MLE of the mean of a normal distribution is unbiased, but the MLE of the variance is biased. The bias of the MLE matters most for small samples and near the boundary of the parameter space. However, there are corrections for these cases, such as Bessel's correction for the variance and the Cox-Snell correction for extreme values, which the paper below applies to the Pareto distribution. You can look these up to see how they are used.
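
As a quick illustration of the variance statement, here is a small Python sketch (the numbers are arbitrary):

Code:
# The MLE of a normal variance divides by n and is biased low by (n-1)/n;
# Bessel's correction (divide by n-1) removes the bias.
import numpy as np

rng = np.random.default_rng(2)
n, true_var, trials = 5, 4.0, 200_000

x = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
print(x.var(axis=1, ddof=0).mean())   # ~ true_var*(n-1)/n = 3.2 (MLE)
print(x.var(axis=1, ddof=1).mean())   # ~ 4.0 (Bessel-corrected)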

http://web.uvic.ca/~dgiles/downloads/working_papers/ewp0902_revised.pdf
 
omg! said:
the maximum likelihood estimator of the aforementioned center coordinate \vec{x}_c=(\vec{x}_1+\vec{x}_2)/2 is given by

x_c=(\frac{x_1+d\cos\alpha}{\sigma_1^2}+\frac{x_2-d\cos\alpha}{\sigma_2^2})/(1/\sigma_1^2+1/\sigma_2^2)

y_c=(\frac{y_1+d\sin\alpha}{\sigma_1^2}+\frac{y_2-d\sin\alpha}{\sigma_2^2})/(1/\sigma_1^2+1/\sigma_2^2)
(x_i, y_i for i=1,2 now denote the measured coordinate components)

I can understand those formulae if your sample consists of one pair of 2-D points. Is your sample size that small? If not, then your notation needs some sort of summation in it.

The proper statement about maximum likelihood estimators is that they are "asymptotically unbiased", meaning they are approximately unbiased for large sample sizes. What sample size are you simulating?
 
omg! said:
are there any alternatives to the MLE approach that also provide estimates for the errors of the estimated parameters?

There is a whole alphabet soup of acronyms here. MVUEs (minimum variance unbiased estimators) would be a good place to start.

Once you have devised a specific estimator, you can usually investigate its statistical properties. Obviously you need to find its expected value; otherwise you don't know whether it is biased or not!
 
omg! said:
are there any alternatives to the MLE approach that also provide estimates for the errors of the estimated parameters?

What do you mean by an estimate of "the errors of the estimated parameters"? Does this refer to a probability distribution for the errors, i.e. the distribution of the true values relative to the estimated values (or vice versa)?

It wouldn't make sense to estimate the error between an estimator and the true value of a parameter as a single numerical value.
 
Stephen Tashi said:
I can understand those formulae if your sample consists of one pair of 2-D points. Is your sample size that small? If not, then your notation needs some sort of summation in it.

The proper statement about maximum likelihood estimators is that they are "asymptotically unbiased", meaning they are approximately unbiased for large sample sizes. What sample size are you simulating?

the data is only ONE pair of 2D coordinates, that's right! I suppose that could be the reason for the bias, since, as was pointed out, the MLE is only asymptotically unbiased. And I would like to quantify the uncertainty of the estimates of x_c, y_c, and alpha.

Obviously you need to find its expected value; otherwise you don't know whether it is biased or not!

i'm afraid i don't know how to compute the expectation value of the estimator described in my previous post. as you can see, cos(alpha) and sin(alpha) involve a random distance in the denominator.

I will try to calculate the next order bias correction, as suggested by SW VandeCarr. thank you!
 
omg! said:
i'm afraid i don't know how to compute the expectation value of the estimator described in my previous post. as you can see, cos(alpha) and sin(alpha) involve a random distance in the denominator.

Have you tried doing a Monte-Carlo simulation and then just using sample statistics to get your expectation?
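
For example (a sketch with a made-up geometry), the expectation of cos(alpha_hat), which is awkward to get analytically, drops straight out of a simulation:

Code:
# Estimate E[cos(alpha_hat)] by sampling; alpha_hat is the angle of the
# measured difference vector, as in the closed-form estimator above.
import numpy as np

rng = np.random.default_rng(3)
d, s1, s2, a = 1.0, 0.2, 0.4, 0.7
p2 = d * np.array([np.cos(a), np.sin(a)])   # p1 at the origin

m1 = s1 * rng.standard_normal((500_000, 2))
m2 = p2 + s2 * rng.standard_normal((500_000, 2))
diff = m2 - m1
cos_a_hat = diff[:, 0] / np.linalg.norm(diff, axis=1)

print("E[cos(alpha_hat)] ~", cos_a_hat.mean(), "vs cos(alpha) =", np.cos(a))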
 