MLE, Uniform Distribution, missing data

  Aug 17, 2010 #1
    I would like to determine the MLE for k in U(0,k), where U is the uniform pdf, constant on the interval [0,k] and zero elsewhere. I would like this estimate in the case of missing data. To be specific: what is the MLE for k given the three draws X = {1, 3, *}, where * is unknown?
  Aug 18, 2010 #2
    The only thing we can say for certain is that [tex]k\geq 3[/tex]. So what do you think the MLE of k would be?
  Aug 18, 2010 #3
    Yes. I think it should be the largest measured value, in this case three. Thank you for the verification.

    I had tried to look at it by doing Expectation-Maximization: assume the missing value x* is large, estimate k as the sample maximum, and then replace x* by its expectation under U(0,k), which is k/2. If this is greater than 3, iterate with this as my new k; if it is less than 3, then k = 3. This will always converge to the largest recorded value (= 3). Is this a valid argument?
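
    A minimal Python sketch of that iteration (just an illustration, with the observed draws 1 and 3 hard-coded):

    [code]
    # Iteration described above: replace the missing draw by its
    # expectation k/2 under U(0, k), then re-estimate k as the maximum.
    observed_max = 3.0

    def iterate_k(k_start, tol=1e-9):
        k = k_start
        while True:
            x_missing = k / 2.0                   # E[x*] for x* ~ U(0, k)
            k_new = max(observed_max, x_missing)  # new estimate of k
            if abs(k_new - k) < tol:
                return k_new
            k = k_new

    print(iterate_k(100.0))  # 3.0 -- converges to the largest recorded value
    [/code]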

    I was troubled by the fact that I have information (the additional measurement(s)) that is being ignored. I guess that means that the MLE with missing information is even more biased, because this information is ignored.
  Aug 18, 2010 #4
    It should be, as long as your likelihood function ranges over the distribution parameters for the data you actually have. I believe the MLE is biased toward underestimating k.
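
    (A quick simulation, not from the thread, illustrating the bias: for n i.i.d. draws from U(0, k) the sample maximum has expectation nk/(n+1), so the MLE underestimates k on average.)

    [code]
    import random

    # Simulate the MLE (sample maximum) for n = 3 draws from U(0, 5).
    # Theory: E[max] = n*k/(n+1) = 3.75, below the true k = 5.
    k_true, n, trials = 5.0, 3, 100_000
    total = 0.0
    for _ in range(trials):
        total += max(random.uniform(0.0, k_true) for _ in range(n))
    print(total / trials)  # about 3.75
    [/code]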

    What information are you ignoring? If a datum is missing, the only alternative is to interpolate or simulate it. For this you might use the sample mean, which is 2. I think the sample would be too small for a method-of-moments estimate (MME), but you could try it with n = 3 if in fact the missing datum was included in the sample but lost.
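
    A small sketch of that method-of-moments idea (my own illustration: for U(0, k) the mean is k/2, so the MME is twice the sample mean):

    [code]
    # Fill in the missing datum with the sample mean of {1, 3}, then
    # apply the method of moments: E[X] = k/2, so k_hat = 2 * mean.
    data = [1.0, 3.0, 2.0]           # missing value replaced by the mean, 2
    k_mme = 2.0 * sum(data) / len(data)
    print(k_mme)                     # 4.0
    [/code]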
    Last edited: Aug 18, 2010
  Aug 18, 2010 #5
    I think I am ignoring the fact that the missing datum could be greater than 3 if the true value of K is greater than three. Perhaps the likelihood looks something like

    If[x > 3, (3/k) (1/x)^3, 0]+If[x > k, ((k - 3)/k) (1/x)^3, 0]

    (where k is my assumed K > 3), which is maximized at x = k = 3, implying K = 3 as above, but I am not sure of my likelihood. I am trying to stay with MLE, since I got sidetracked into this while looking at the Expectation-Maximization algorithm.
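
    For comparison, a sketch of the observed-data likelihood with the missing draw integrated out (my own illustration, not a verified correction): since x* contributes a factor of 1 once integrated over [0, k], the likelihood is (1/k)^2 for k >= 3 and 0 otherwise, again maximized at k = 3.

    [code]
    # Observed-data likelihood: each observed point contributes 1/k,
    # and the missing draw integrates out to 1, so L(k) = k**-2 for k >= 3.
    def likelihood(k, observed=(1.0, 3.0)):
        if k < max(observed):
            return 0.0
        return 1.0 / k ** len(observed)

    for k in (2.5, 3.0, 3.5, 4.0):
        print(k, likelihood(k))      # peaks at k = 3
    [/code]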
  Aug 18, 2010 #6
    Well, of course, k > 3 is possible, but I assume we have a good random sample, even if it is very small. Given the small sample with a missing data point, I agree that this MLE is about as good as you are going to be able to achieve.

    Choosing to interpolate the missing data point as I described will not change your estimate. It will simply increase its power a bit. Of course, this technique is usually used on much larger data sets (which are expensive to develop) where a few data points get "lost" somehow, and you want to maximize the power of your estimate. If you were to do this on this set, it would only be as an experiment, not for statistical inference. You really can't make any inferences from this tiny data set.
    Last edited: Aug 18, 2010
  Aug 19, 2010 #7
    I agree with all that you are saying. I am not really trying to "improve" the estimate. What I am interested in is a functional form for the likelihood, and I thought that if I understood the MLE for my toy n = 3, U(0,K) example, I would be one step closer to this real goal. Given the functional form, I wanted to formally apply the EM algorithm. I thought the toy example would be insightful for learning about the EM algorithm, but the piecewise nature of U(0,K) has led me rather astray. I am back to using the exponential family of distributions to investigate the EM algorithm, which makes the expectation step more straightforward.

    I want to thank you very much for your kind help.
  Aug 19, 2010 #8
    You're welcome.
  Aug 21, 2010 #9
    As far as I understand it, the EM algorithm works like this: for the E step, since x* is U(0, k) with [tex]k \geq 3[/tex], the complete-data log-likelihood given x* and the two other data points is [tex]\log(1/k^3) = -3\log k[/tex], which does not depend on x*, so the expected log-likelihood is also [tex]-3\log k[/tex]. For the M step, this is maximized, subject to [tex]k \geq 3[/tex], at k = 3, so EM stops after one iteration.
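
    A minimal sketch of that single EM pass (my own illustration of the argument above):

    [code]
    import math

    # Toy problem X = {1, 3, x*} with x* missing.
    observed_max = 3.0

    def em_step(k_current):
        # E step: with x* ~ U(0, k_current), the complete-data
        # log-likelihood -3*log(k) (for k >= max(3, x*)) does not
        # depend on x*, so Q(k) = -3*log(k) for all feasible k.
        # M step: Q is decreasing in k, so take the smallest feasible k.
        return max(observed_max, k_current)

    k = em_step(3.0)                 # start at the observed maximum
    print(k, -3.0 * math.log(k))     # k stays at 3: EM stops after one pass
    [/code]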

    Also, for uniform estimation it is worth taking a look at minimum-variance unbiased estimators. Wikipedia has a good article on how the German tank problem was solved this way.
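
    For example (my addition, using the standard result for the continuous case): for n complete draws from U(0, k), the minimum-variance unbiased estimator scales the sample maximum up by (n + 1)/n, which removes the MLE's downward bias.

    [code]
    import random

    # UMVUE for k in U(0, k) from n complete draws: (1 + 1/n) * max(sample).
    def umvue(sample):
        return (1.0 + 1.0 / len(sample)) * max(sample)

    k_true, n, trials = 5.0, 3, 100_000
    total = 0.0
    for _ in range(trials):
        total += umvue([random.uniform(0.0, k_true) for _ in range(n)])
    print(total / trials)   # close to k_true = 5
    [/code]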