Is Maximum Likelihood Estimation Valid with Limited Data?

senmeis
Hello,

I have a question about Maximum Likelihood Estimation. The typical form of the MLE looks like this:

X = H\theta + W, where W is Gaussian with W \sim N(0, C).
\hat{\theta}_{ML} = (H^T C^{-1} H)^{-1} H^T C^{-1} X
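For concreteness, here is a minimal numerical sketch of that formula; the values of H, C, and θ below are made up purely for illustration:

```python
import numpy as np

# Minimal sketch of the closed-form Gaussian MLE (illustrative values only).
rng = np.random.default_rng(0)

H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # known model matrix: 4 measurements, 2 parameters
C = 0.5 * np.eye(4)                 # known covariance of the noise W
theta_true = np.array([2.0, -1.0])  # the parameter we try to recover

# Simulate one measurement vector X = H @ theta + W, with W ~ N(0, C)
W = rng.multivariate_normal(np.zeros(4), C)
X = H @ theta_true + W

# theta_ml = (H^T C^-1 H)^-1 H^T C^-1 X
Cinv = np.linalg.inv(C)
theta_ml = np.linalg.solve(H.T @ Cinv @ H, H.T @ Cinv @ X)
print(theta_ml)  # close to theta_true; gets closer as more measurement rows are added
```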

I think \hat{\theta}_{ML} can only be calculated after many measurements have been made, that is, when there are plenty of samples of H and X. Put differently, it is impossible to get \hat{\theta}_{ML} when only the model for \theta is known and no measurements are available. Do I understand this correctly?

Senmeis
 
The typical form of MLE is this: you have some random variable X that depends on a parameter \theta and has a density p(x, \theta). Then, given some samples x_1, ..., x_n of X, you find the value of \theta maximizing
\prod_{j=1}^{n} p(x_j, \theta)

or something to that effect (it might be different if your samples are dependent, or you have samples from different random variables etc.). You seem to have a very specific application of this to a Gaussian model. You can do the calculation with any number of samples, but the more samples you have the better odds you have that your MLE estimate is a good estimate of the real value of the parameter.
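As a toy sketch of that recipe (assuming, for illustration, a Gaussian density with unknown mean \theta and known unit variance), one can maximize the summed log-density over a grid of candidate \theta values:

```python
import numpy as np

# Sketch: MLE by maximizing prod_j p(x_j, theta), done via the log-likelihood.
# Here p(x, theta) is a Gaussian density with unknown mean theta and sigma = 1.
rng = np.random.default_rng(1)
samples = rng.normal(loc=3.0, scale=1.0, size=20)  # x_1, ..., x_n

def log_likelihood(theta):
    # log prod_j p(x_j, theta) = sum_j log p(x_j, theta)
    return np.sum(-0.5 * (samples - theta) ** 2 - 0.5 * np.log(2 * np.pi))

grid = np.linspace(0.0, 6.0, 601)  # candidate values of theta
theta_ml = grid[np.argmax([log_likelihood(t) for t in grid])]
print(theta_ml, samples.mean())  # both near 3.0; here the MLE is the sample mean
```

The calculation runs for any sample size, even size=1, but the estimate concentrates around the true mean as the number of samples grows.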
 
A measurement vector can be written as:

φ = Du + ε, where ε is a zero-mean Gaussian random vector.

The MLE is D_ml, the value for which P(ε) is maximum, but why maximum? I think the probability of ε should be as small as possible. I know I must be making a mistake in my understanding. Can anyone point it out?

Senmeis
 
In words, by picking D to maximize P(ε), you are saying "My choice of D indicates that the events I just witnessed were not unusual in any way", whereas if you try to minimize P(ε), you are saying "My choice of D indicates the events I just witnessed will never happen again in the history of the universe".

To give a simple example, let's say I flip one hundred coins and all of them come up heads. I then ask you for an MLE of the probability that the coin lands on heads. If you want to maximize the probability that one hundred heads and no tails come up, you'll end up saying "the coin has a probability of 1 of landing on heads", because if that is the case, then the probability that I get 100 heads in a row is 1. If you wanted to minimize the probability that the coin comes up heads 100 times in a row, you would tell me "the coin has a probability of 0 of landing on heads", and 100 heads coming up in a row would have a probability of 0. Which sounds more reasonable?
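A quick numerical check of this, just a sketch of the coin example with p^100 evaluated on a grid of candidate head probabilities:

```python
import numpy as np

# P(100 heads, no tails | p) = p**100 as a function of the head probability p.
p = np.linspace(0.0, 1.0, 101)
likelihood = p ** 100

print(p[np.argmax(likelihood)])  # 1.0 -- maximizing the likelihood picks p = 1
print(p[np.argmin(likelihood)])  # 0.0 -- minimizing it picks p = 0, which is absurd
```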
 
senmeis said:
A measurement vector can be written as:

φ = Du + ε, where ε is a zero-mean Gaussian random vector.

The MLE is D_ml, the value for which P(ε) is maximum, but why maximum? I think the probability of ε should be as small as possible. I know I must be making a mistake in my understanding. Can anyone point it out?

Senmeis

You want to find the model that most likely could have produced the data that you have. That is the goal of the MLE. If you have to choose between a model that is very unlikely to produce your data and one that is likely to give those results, you pick the more likely one. If you have tossed a coin 50 times and got 50 heads, you would pick the model that says the coin is rigged for heads, not the model that says the coin is fair.
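To put numbers on the 50-heads example, here is a sketch comparing just the two candidate models mentioned above:

```python
# Likelihood of 50 heads in 50 tosses under each candidate model.
fair = 0.5 ** 50      # P(data | fair coin), about 8.9e-16
rigged = 1.0 ** 50    # P(data | coin rigged for heads) = 1.0
print(rigged / fair)  # the rigged model is ~1.1e15 times more likely to produce the data
```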
 