
MLE is biased: are there other estimation methods?

  1. Nov 21, 2011 #1
    hi all,
    i would appreciate any help you can offer for the following problem.

    consider coordinates [itex]x_1, x_2[/itex] in the plane for which [itex]||x_1-x_2||=d[/itex].
    suppose that this pair of coordinates is measured independently, and that the measurements are 2D normally distributed with means [itex]x_1, x_2[/itex] and variances [itex]\sigma^2_1, \sigma^2_2[/itex]. given the value of [itex]d[/itex] and the known variances, how do i estimate the true positions [itex]x_1, x_2[/itex] from a pair of measured positions?

    the parameters to be estimated can be reduced to the coordinates of the center between [itex]x_1[/itex] and [itex]x_2[/itex] (2 parameters, the x and y coordinates) plus an angle parameter describing the direction of the vector [itex]x_2 - x_1[/itex], so 3 parameters in total.

    with MLE, i have written down the probability density function, which takes the form of a 4-variate normal distribution with 3 unknown parameters. the extremal point of the log-likelihood with respect to these parameters can be written down explicitly.
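    for concreteness, a minimal sketch of this log-likelihood (written in Python/NumPy rather than my actual code; the function and variable names such as log_likelihood, m1, m2 are just illustrative):

    [code]
import numpy as np

def log_likelihood(params, m1, m2, s1, s2, d):
    """log-likelihood of one measured pair (m1, m2) in the (xc, yc, alpha)
    parameterization: center of the two true points plus the angle of x2 - x1."""
    xc, yc, alpha = params
    half = 0.5 * d * np.array([np.cos(alpha), np.sin(alpha)])
    x1 = np.array([xc, yc]) - half          # true position of the first point
    x2 = np.array([xc, yc]) + half          # true position of the second point
    # two independent isotropic 2D normals with variances s1^2 and s2^2
    return (-np.sum((np.asarray(m1) - x1) ** 2) / (2 * s1 ** 2)
            - np.sum((np.asarray(m2) - x2) ** 2) / (2 * s2 ** 2)
            - np.log(2 * np.pi * s1 ** 2) - np.log(2 * np.pi * s2 ** 2))
    [/code]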

    the problem is that the solution seems to be biased when the standard deviations are on the order of [itex]d[/itex]; for very small variances it seems to work just fine.

    do you know why this could be the case? and are there any alternatives to the MLE approach that also provide estimates for the errors of the estimated parameters?

    thank you very much.
    cheers
     
  3. Nov 21, 2011 #2

    Stephen Tashi

    Science Advisor

    Are you basing that statement on a Monte Carlo simulation, or on a numerical solution of the equations satisfied by the MLE? I assume you wouldn't say "seems" if you could compute the expectation of the estimator explicitly.
     
  4. Nov 21, 2011 #3

    chiro

    Science Advisor

    I'd be interested to know whether you have done a Monte Carlo simulation, as Stephen Tashi suggested, since that should show how the simulation relates to the theoretical results.
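    Something along these lines would do it, a rough Python/NumPy/SciPy sketch (the true values, noise levels and names below are made up for illustration), maximizing the likelihood numerically and comparing the average estimate with the truth:

    [code]
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# made-up "true" configuration and noise levels, for illustration only
xc, yc, alpha, d = 1.0, -2.0, 0.7, 1.0
s1, s2 = 0.3, 0.6
half = 0.5 * d * np.array([np.cos(alpha), np.sin(alpha)])
x1_true = np.array([xc, yc]) - half
x2_true = np.array([xc, yc]) + half

def neg_log_lik(p, m1, m2):
    # constants dropped: only the weighted sum of squares matters for the argmax
    c = np.array([p[0], p[1]])
    h = 0.5 * d * np.array([np.cos(p[2]), np.sin(p[2])])
    return (np.sum((m1 - (c - h)) ** 2) / s1 ** 2
            + np.sum((m2 - (c + h)) ** 2) / s2 ** 2)

estimates = []
for _ in range(5000):
    m1 = x1_true + s1 * rng.standard_normal(2)   # one measured pair per trial
    m2 = x2_true + s2 * rng.standard_normal(2)
    c0 = 0.5 * (m1 + m2)                         # crude starting values
    a0 = np.arctan2(m2[1] - m1[1], m2[0] - m1[0])
    res = minimize(neg_log_lik, x0=[c0[0], c0[1], a0], args=(m1, m2))
    estimates.append(res.x)

estimates = np.array(estimates)
print("mean of MLEs:", estimates.mean(axis=0))
print("true values :", [xc, yc, alpha])
    [/code]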
     
  5. Nov 22, 2011 #4
    thank you very much for your interest.

    the maximum likelihood estimator of the aforementioned center coordinate [itex]\vec{x}_c=(\vec{x}_1+\vec{x}_2)/2[/itex] is given by

    [itex]x_c=\left(\frac{x_1+\frac{d}{2}\cos\alpha}{\sigma_1^2}+\frac{x_2-\frac{d}{2}\cos\alpha}{\sigma_2^2}\right)/(1/\sigma_1^2+1/\sigma_2^2)[/itex]

    [itex]y_c=\left(\frac{y_1+\frac{d}{2}\sin\alpha}{\sigma_1^2}+\frac{y_2-\frac{d}{2}\sin\alpha}{\sigma_2^2}\right)/(1/\sigma_1^2+1/\sigma_2^2)[/itex]
    ([itex]x_i,y_i, i=1,2[/itex] are the measured coordinate components now)
    while the maximum likelihood estimator of the angle [itex]\alpha[/itex] (the angle of [itex]\vec{x}_2-\vec{x}_1[/itex] with the x-axis) is
    [itex]\alpha=\arccos\frac{x_2-x_1}{||\vec{x}_2-\vec{x}_1||}[/itex]

    judging by these formulas, i would say that the estimators are not biased.
    contrary to my first post, i have found that a bias occurs only when the variances differ from each other, even when they are an order of magnitude smaller than d. the test was done with MATLAB's normal random number generator.
    additional observations:
    1. the theoretical variance of [itex]\alpha[/itex] (from the MLE covariance matrix) is much greater than the actual variance of the estimated [itex]\alpha[/itex], which is due to [itex]\alpha[/itex] being confined to [itex][0,2\pi][/itex].
    2. [itex]x_c,y_c[/itex] are independent of each other, but not of [itex]\alpha[/itex].

    do you think that i have chosen the right way to parameterize the distribution? is there a set of parameters that are independent and unbounded?
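    for reference, this is roughly the kind of check i ran (sketched here in Python rather than my MATLAB code, with made-up numbers; atan2 is used instead of arccos so the angle covers the full circle):

    [code]
import numpy as np

rng = np.random.default_rng(1)
d = 1.0
xc, yc, alpha = 0.0, 0.0, 0.4                    # made-up true parameters
half = 0.5 * d * np.array([np.cos(alpha), np.sin(alpha)])
x1_true, x2_true = np.array([xc, yc]) - half, np.array([xc, yc]) + half

def estimate(m1, m2, s1, s2):
    # closed-form estimators from above; atan2 instead of arccos
    a = np.arctan2(m2[1] - m1[1], m2[0] - m1[0])
    w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    h = 0.5 * d * np.array([np.cos(a), np.sin(a)])
    c = (w1 * (m1 + h) + w2 * (m2 - h)) / (w1 + w2)
    return c[0], c[1], a

for s1, s2 in [(0.1, 0.1), (0.1, 0.5)]:          # equal vs. unequal variances
    est = np.array([estimate(x1_true + s1 * rng.standard_normal(2),
                             x2_true + s2 * rng.standard_normal(2), s1, s2)
                    for _ in range(100000)])
    print(s1, s2, "mean estimates:", est.mean(axis=0), "true:", (xc, yc, alpha))
    [/code]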

    thank you again for your advice
     
    Last edited: Nov 22, 2011
  6. Nov 23, 2011 #5
    Ordinarily, the MLE is asymptotically efficient: as the sample size grows, its mean square error (MSE) shrinks at least as fast as that of any other consistent estimator. For a normal sample, the MLE of the mean is unbiased, but the MLE of the variance is biased. The bias of the MLE matters most with small samples and near the boundary of the parameter space. However, there are corrections for this, such as the Bessel correction for the variance and the Cox and Snell correction, a general first-order bias correction for maximum likelihood estimators that is often applied to extreme-value models such as the Pareto distribution. You can look these up to see how they are used.

    http://web.uvic.ca/~dgiles/downloads/working_papers/ewp0902_revised.pdf
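    For example, the effect of the Bessel correction is easy to see numerically (a small Python sketch with illustrative numbers only):

    [code]
import numpy as np

rng = np.random.default_rng(2)
n, true_var = 5, 4.0                      # small sample from N(0, 4)
mle, bessel = [], []
for _ in range(200000):
    x = rng.normal(0.0, np.sqrt(true_var), size=n)
    mle.append(np.var(x))                 # divides by n   (the MLE, biased)
    bessel.append(np.var(x, ddof=1))      # divides by n-1 (Bessel-corrected)
print(np.mean(mle))      # close to true_var * (n - 1) / n = 3.2
print(np.mean(bessel))   # close to 4.0
    [/code]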
     
  7. Nov 23, 2011 #6

    Stephen Tashi

    Science Advisor

    I can understand those formulae if your sample consists of one pair of 2-D points. Is your sample size that small? If not, then your notation needs some sort of summations in it.

    The proper statement about maximum likelihood estimators is that they are "asymptotically unbiased", meaning they are approximately unbiased for large sample sizes. What sample size are you simulating?
     
    Last edited: Nov 23, 2011
  8. Nov 23, 2011 #7

    AlephZero

    Science Advisor
    Homework Helper

    There is a whole alphabet soup of acronyms here. MVUEs (minimum variance unbiased estimators) would be a good place to start.

    Once you have devised a specific estimator, you can usually investigate its statistical properties. Obviously you need to find its expected value, otherwise you don't know whether it is biased or not!
     
  9. Nov 23, 2011 #8

    Stephen Tashi

    Science Advisor

    What do you mean by an estimate of "the errors of the estimated parameters"? Does this refer to a probability distribution for the errors, i.e. the distribution of the true values relative to the estimated values (or vice versa)?

    It wouldn't make sense to estimate the error between an estimator and the true value of a parameter as a single numerical value.
     
  10. Nov 23, 2011 #9
    the data is only ONE pair of 2D coordinates, that's right! I suppose that could be the reason for the bias, since, as was pointed out, the MLE is only asymptotically unbiased. And I would like to quantify the uncertainty of the estimates of x_c, y_c, alpha.

    i'm afraid i don't know how to compute the expectation value of the estimator described in my previous post. as you can see, cos(alpha) and sin(alpha) involve the random distance ||x_2 - x_1|| in the denominator.

    I will try to calculate the next order bias correction, as suggested by SW VandeCarr. thank you!
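    in the meantime, a parametric bootstrap looks like a practical way to get error bars (and a rough bias estimate) from the single measured pair: plug the estimates back into the model, simulate many synthetic pairs, and re-estimate. a sketch in Python (the measured pair and noise levels below are made up; this is a simulation-based alternative, not the analytic correction):

    [code]
import numpy as np

rng = np.random.default_rng(3)
d, s1, s2 = 1.0, 0.3, 0.6                     # known separation and std deviations
m1 = np.array([-0.45, 0.10])                  # the single measured pair (made up)
m2 = np.array([0.52, 0.38])

def estimate(p1, p2):
    # closed-form estimators of (xc, yc, alpha); atan2 for the full angle range
    a = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
    h = 0.5 * d * np.array([np.cos(a), np.sin(a)])
    w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    c = (w1 * (p1 + h) + w2 * (p2 - h)) / (w1 + w2)
    return np.array([c[0], c[1], a])

theta_hat = estimate(m1, m2)
# parametric bootstrap: treat theta_hat as if it were the truth, resimulate, re-estimate
c_hat, a_hat = theta_hat[:2], theta_hat[2]
h_hat = 0.5 * d * np.array([np.cos(a_hat), np.sin(a_hat)])
boot = np.array([estimate(c_hat - h_hat + s1 * rng.standard_normal(2),
                          c_hat + h_hat + s2 * rng.standard_normal(2))
                 for _ in range(20000)])
print("estimate      :", theta_hat)
print("bootstrap std :", boot.std(axis=0))               # error bars for xc, yc, alpha
print("bootstrap bias:", boot.mean(axis=0) - theta_hat)  # rough bias estimate
    [/code]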
     
  11. Nov 23, 2011 #10

    chiro

    Science Advisor

    Have you tried doing a Monte Carlo simulation and then just using sample statistics to get your expectation?
     