What does the log likelihood say?

In summary: It is common statistical practice to use likelihood functions to compare models for the same data, so interpretations of likelihoods are about the plausibility of one model relative to another. If you do not intend to compare models for a given data set, do not worry about the actual value of the likelihood function. Comparing likelihoods from different data sets is pointless, like comparing apples and oranges.
  • #1
Lobotomy
Hi, here's some information after fitting measurements to a lognormal distribution.

What exactly does it mean that the log likelihood is -67.175? As I understand it, the log likelihood is the natural logarithm of the likelihood function, which is the probability that these measurements come from this distribution?

1. Is that correct?
2. Why is it negative?
3. How can I decide whether this is a good distribution fit? (Graphically it looks satisfying, and I don't need extremely exact numbers for what I'm going to use them for.)
Distribution: Lognormal
Log likelihood: -67.175
Domain: 0 < y < Inf
Mean: 9.04552
Variance: 1.51012

Parameter   Estimate    Std. Err.
mu          2.19313     0.0208669
sigma       0.135233    0.0150259

Estimated covariance of parameter estimates:
            mu              sigma
mu          0.000435428     -2.49709e-018
sigma       -2.49709e-018   0.000225778
Edit: taking the exp of the log likelihood, the likelihood would be
exp(-67.175) = something × 10^-30, an incredibly small number! This can't be correct? The likelihood of that distribution can't be that small? The fit is not THAT bad =)

I would expect a likelihood of about 0.7-0.95, right? Does anyone know what this number means?
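A minimal sketch of where a number like exp(-67.175) comes from, in Python with SciPy rather than the poster's MATLAB, and with simulated data standing in for the unshown measurements: the log likelihood is a sum of log densities, one per observation, so exponentiating it gives a product of many values, which is almost always astronomically small. Nothing about the fit has to be bad for this to happen.

```python
import numpy as np
from scipy import stats

# Made-up sample; the thread's actual measurements aren't shown.
rng = np.random.default_rng(0)
y = rng.lognormal(mean=2.19, sigma=0.135, size=50)

# Maximum-likelihood lognormal fit (floc=0 gives the usual 2-parameter lognormal).
shape, loc, scale = stats.lognorm.fit(y, floc=0)
# In the thread's mu/sigma parameterization: mu = log(scale), sigma = shape.

log_lik = np.sum(stats.lognorm.logpdf(y, shape, loc, scale))
print(log_lik)          # sum of 50 log-densities: a sizeable negative number
print(np.exp(log_lik))  # the likelihood itself: a product of 50 densities, astronomically small
```

Note that the log likelihood grows more negative as the sample size grows, which is another reason its absolute value says little on its own.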
 
  • #2
You're looking at the maximum log-likelihood estimate of the fitted curve. It's small because it's the result of a highly iterative procedure. In fact you don't really need to know much of the theory behind MLE. Your program will give Goodness of Fit and other stats. Basically, if it looks good, it probably is good.

Here's some information which isn't too technical. It treats MLE in the second part of the discussion. If you're interested in a more mathematical treatment, see 'maximum likelihood estimation' on Wikipedia.

http://www.appstate.edu/~whiteheadjc/service/logit/intro.htm#maxlike
 
  • #3
SW VandeCarr said:
You're looking at the maximum log-likelihood estimate of the fitted curve. It's small because it's the result of a highly iterative procedure. In fact you don't really need to know much of the theory behind MLE. Your program will give Goodness of Fit and other stats. Basically, if it looks good, it probably is good.

Here's some information which isn't too technical. It treats MLE in the second part of the discussion. If you're interested in a more mathematical treatment, see 'maximum likelihood estimation' on Wikipedia.

http://www.appstate.edu/~whiteheadjc/service/logit/intro.htm#maxlike

Well, this isn't curve fitting or regression fitting; it's a fitting of a distribution. And by the way, when I've been fitting curves I've had much larger likelihoods, like 80%, but this time we're talking 10^-30! That would mean the fit is incredibly, well, astronomically unlikely! Which I doubt is the correct interpretation, so there must be something else.
 
  • #4
Lobotomy said:
Well, this isn't curve fitting or regression fitting; it's a fitting of a distribution. And by the way, when I've been fitting curves I've had much larger likelihoods, like 80%, but this time we're talking 10^-30! That would mean the fit is incredibly, well, astronomically unlikely! Which I doubt is the correct interpretation, so there must be something else.

The number you're referencing has nothing to do with percentages. It is negative because numbers in the interval (0,1) have negative logarithms. I'll not be responding to any more of your posts. Perhaps someone else will, but, so far, no one else has. Perhaps there is a reason.
 
  • #5
SW VandeCarr said:
The number you're referencing has nothing to do with percentages. It is negative because numbers in the interval (0,1) have negative logarithms. I'll not be responding to any more of your posts. Perhaps someone else will, but, so far, no one else has. Perhaps there is a reason.

Yeah, I guess no one is able to give a straight answer about the meaning of this measurement. Even if you Google it you don't find very much info, and there's nothing in the MATLAB help. But thanks for trying.
 
  • #6
You cannot calculate the probability of observing a range of parameters from the likelihood without proper normalization. Your interpretation is fallacious. Frequentists regard parameter values as fixed, unknown quantities. If you really want to calculate probabilities of parameters, consider fiducial inference or a Bayesian model.

It is common statistical practice to use likelihood functions to compare models for the same data. Thus, interpretations of likelihoods are with respect to the plausibility of one model relative to another. If you do not intend to compare models for a given data set, do not worry about the actual value of the likelihood function. Comparing likelihoods from different data sets is like comparing apples and oranges. The best way to assess model fit is to visualize the model: a histogram of the data with the proposed model's density curve overlaid, or a residual plot from a least-squares regression line, are examples.
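A minimal sketch of the suggested visual check, assuming Python with SciPy and matplotlib and simulated data in place of the thread's measurements:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Simulated stand-in for the thread's measurements.
rng = np.random.default_rng(1)
y = rng.lognormal(mean=2.19, sigma=0.135, size=100)

# Fit the proposed model and overlay its density on a histogram of the data.
shape, loc, scale = stats.lognorm.fit(y, floc=0)
x = np.linspace(y.min(), y.max(), 200)

plt.hist(y, bins=15, density=True, alpha=0.5, label="data")
plt.plot(x, stats.lognorm.pdf(x, shape, loc, scale), label="fitted lognormal")
plt.legend()
plt.show()
```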
 
  • #7
The likelihood function is the same as the joint pdf or pmf of your data. The only difference is the order of conditioning. In the likelihood, you condition on the observed data. The joint pdf/pmf conditions on the parameter(s).
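A small sketch of this point, under the same Python/SciPy assumptions as above: one and the same expression is a joint density when the parameters are fixed and the data varies, and a likelihood when the observed data is fixed and the parameters vary.

```python
import numpy as np
from scipy import stats

y = np.array([7.1, 8.3, 9.0, 9.8, 10.4])   # made-up observations

# For fixed (mu, sigma) this is the joint log-density of the data;
# with y held at the observed values it is the log-likelihood as a
# function of (mu, sigma).
def log_likelihood(mu, sigma, data=y):
    return np.sum(stats.lognorm.logpdf(data, s=sigma, scale=np.exp(mu)))

for mu in (2.0, 2.2, 2.4):
    print(mu, log_likelihood(mu, 0.135))
```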
 
  • #8
d3t3rt said:
You cannot calculate the probability of observing a range of parameters from the likelihood without proper normalization. Your interpretation is fallacious. Frequentists regard parameter values as fixed, unknown quantities. If you really want to calculate probabilities of parameters, consider fiducial inference or a Bayesian model.

It is common statistical practice to use likelihood functions to compare models for the same data. Thus, interpretations of likelihoods are with respect to the plausibility of one model relative to another. If you do not intend to compare models for a given data set, do not worry about the actual value of the likelihood function. Comparing likelihoods from different data sets is like comparing apples and oranges. The best way to assess model fit is to visualize the model: a histogram of the data with the proposed model's density curve overlaid, or a residual plot from a least-squares regression line, are examples.

I get your point, but I am actually comparing which distribution has the best fit to my measurements. I'm not comparing measurements with other measurements. So just assume I have 100 values that I want to fit to a distribution, and I only want to know which distribution has the best fit to these values.

So I try to fit normal, lognormal, logistic, etc. distributions to this set of measurements. Thus the question is simple: if I have one set of measurements and the log likelihood value is, for instance, -65 for a normal distribution fitted to these measurements and -49 for a lognormal distribution fitted to these measurements:
- which distribution has the best fit?
 
  • #9
As your models don't seem to be nested, you had best look at Akaike's information criterion (AIC) or the Bayesian information criterion (BIC).
 
  • #10
DrDu said:
As your models don't seem to be nested, you had best look at Akaike's information criterion (AIC) or the Bayesian information criterion (BIC).
OK, so there's not a straight answer to this question? (I know there's a debate between frequentist and Bayesian statistics, but I am not interested in the details of metaphysics here; I just want a good enough straight answer.)

What you're saying is that by merely comparing the log likelihood value there is no indication whatsoever which distribution has the best fit to a certain set of measurements? Please answer yes or no.

I don't remember statistics being this complicated when I studied it =)
OK, ALL I want to do is to know which distribution has the best fit to a certain set of measurements. Is this really that complicated? Isn't there just a statistical test value, such as a log likelihood, a chi-square test, or something else, that you can look at to compare fits to the same set of measurements? I'm not doing any rocket science here, so the value just has to be an indication; it doesn't have to be 100% foolproof from any philosophy-of-science kind of aspect...

So is there any such value or test that can indicate whether one distribution has a better fit than another to a certain set of measurements? Please answer yes or no.
If yes, which one?

For instance, I have a vague memory that a chi-square goodness-of-fit test can be run on a set of measurements and it will or won't reject the null hypothesis at a certain level of significance. However, this only works with the normal distribution and not the lognormal, as far as I remember. What I am looking for is something similar, or of similar use, for the lognormal or distributions in general...
 
  • #11
Yes, you can use the chi-square distribution to compare models, but only if they are nested, that is, one of the two models being compared is the other one with one or several parameters fixed at some predefined value. When you want to compare two models which are not nested, e.g. a Gaussian model vs. a lognormal model, you cannot use this chi-square approach. The AIC or BIC is basically what you want. It compares the models by essentially comparing the deviance corrected for the number of free parameters in the respective models. In practice it is very simple to apply.
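A minimal sketch of the nested case, assuming Python/SciPy and simulated data: a normal model with both parameters free versus the same model with the mean fixed at a predefined value, compared with the usual likelihood-ratio chi-square statistic.

```python
import numpy as np
from scipy import stats

# Simulated stand-in data (the thread's measurements aren't given).
rng = np.random.default_rng(2)
y = rng.normal(10.0, 2.0, size=80)

# Full model: Normal(mu, sigma), both parameters free (2 free parameters).
ll_full = np.sum(stats.norm.logpdf(y, y.mean(), y.std()))

# Nested model: mu fixed at a predefined value, only sigma free (1 free parameter).
mu0 = 9.0
sigma0 = np.sqrt(np.mean((y - mu0) ** 2))   # MLE of sigma with mu fixed
ll_nested = np.sum(stats.norm.logpdf(y, mu0, sigma0))

# Likelihood-ratio statistic: asymptotically chi-square with df equal to the
# difference in the number of free parameters (here 2 - 1 = 1).
lr = 2 * (ll_full - ll_nested)
p_value = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.3f}, p = {p_value:.4f}")
```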
 
  • #12
DrDu is right. If you want to see which model best describes the data, use AIC or BIC. These use the log-likelihood value, but they also take into account the number of estimated parameters. You cannot fairly compare raw log-likelihoods between models with different numbers of parameters.

This is the straight answer you wanted. Good luck!
 
  • #13
DrDu said:
Yes, you can use the chi-square distribution to compare models, but only if they are nested, that is, one of the two models being compared is the other one with one or several parameters fixed at some predefined value. When you want to compare two models which are not nested, e.g. a Gaussian model vs. a lognormal model, you cannot use this chi-square approach. The AIC or BIC is basically what you want. It compares the models by essentially comparing the deviance corrected for the number of free parameters in the respective models. In practice it is very simple to apply.

OK, so AIC according to Wikipedia is:

AIC = 2k - 2 ln(L)

where k is the number of parameters in the model. For both normal and lognormal I guess that mu and sigma are the parameters, hence k is 2 in both cases.
ln(L) is the log likelihood, I assume.

For the same set of measurements fitted to two different distributions, or models as you call them (normal and lognormal), we get this:

lognormal model log likelihood: -62.6777
normal model log likelihood: -70.8926

k = 2 in both cases, right?
This gives us:

AIC for the lognormal model = 2*2 - 2*(-62.6777) ≈ 129.36
AIC for the normal model = 2*2 - 2*(-70.8926) ≈ 145.79

So the lognormal is the best model of the two, since a lower AIC is better. Is that a correct conclusion?

PS: It seems like AIC and BIC are more commonly used when doing regression fitting, and then I understand the use of k... but I'm not doing regression fits here, I am finding a distribution (k is basically always 1 or 2). Maybe it works anyway? Just wanted to clarify that... d.s.
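For reference, a minimal sketch of this exact AIC comparison, assuming Python/SciPy and simulated data in place of the actual measurements:

```python
import numpy as np
from scipy import stats

# Simulated stand-in data (the thread's actual measurements aren't given).
rng = np.random.default_rng(3)
y = rng.lognormal(mean=2.19, sigma=0.135, size=100)

def aic(loglik, k):
    # AIC = 2k - 2 ln(L)
    return 2 * k - 2 * loglik

# Lognormal fit (k = 2: mu and sigma).
s, _, scale = stats.lognorm.fit(y, floc=0)
ll_lognormal = np.sum(stats.lognorm.logpdf(y, s, 0, scale))

# Normal fit (k = 2: mu and sigma).
ll_normal = np.sum(stats.norm.logpdf(y, y.mean(), y.std()))

print("AIC lognormal:", aic(ll_lognormal, 2))
print("AIC normal:   ", aic(ll_normal, 2))
# The model with the LOWER AIC is preferred.
```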
 
  • #14
Yes, my k are also mostly 1 or 2 :-) It should be OK anyhow.
To your other questions: the absolute value of the likelihood ratio is of little relevance. Note especially that it mostly refers to a probability density, which explains why the absolute likelihood is often so extremely small. A somewhat more useful quantity is the deviance, where one considers the difference between the log likelihood and the log likelihood of a "saturated" model in which there is one parameter for each measurement.
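(In symbols, under the standard definition: deviance D = 2[ln L(saturated) - ln L(model)]. The saturated model matches the data exactly, so a smaller deviance means the fitted model loses less relative to a perfect fit.)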
 
  • #15
DrDu said:
Yes, my k are also mostly 1 or 2 :-) It should be OK anyhow.
To your other questions: the absolute value of the likelihood ratio is of little relevance. Note especially that it mostly refers to a probability density, which explains why the absolute likelihood is often so extremely small. A somewhat more useful quantity is the deviance, where one considers the difference between the log likelihood and the log likelihood of a "saturated" model in which there is one parameter for each measurement.

When you say likelihood ratio, do you mean what I refer to as the likelihood value, i.e. the -62.6777 above?

In the case of AIC it doesn't mean anything unless you compare it to another one, as I understand it.

So what is a saturated model? My model is, for instance, a lognormal distribution with a value of sigma and mu; what's the saturated model to this?
 
  • #16
I meant your likelihood function, not likelihood ratio, sorry.
 

What does the log likelihood say?

The log likelihood is the natural logarithm of the probability (or, for continuous data, the probability density) of obtaining the observed data under a specific statistical model. Essentially, it measures how well the model fits the data: when models are compared on the same data, a higher log likelihood value indicates a better fit.

Why is the log likelihood important?

The log likelihood is an important tool in statistical analysis because it allows us to compare different models for the same data and choose the one that best fits. It also helps us to conduct hypothesis testing and make predictions based on the chosen model.

How is the log likelihood calculated?

The log likelihood is calculated by taking the natural logarithm of the likelihood function, which is the probability (or probability density) of the observed data given a specific model and its parameters. Taking the logarithm turns a product of many small density values into a sum, which is easier to work with both mathematically and numerically, and it makes comparisons between models on the same data easier.
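A tiny numeric illustration of that relationship, assuming Python with SciPy and made-up numbers:

```python
import numpy as np
from scipy import stats

y = np.array([8.5, 9.2, 10.1])   # tiny made-up sample

likelihood = np.prod(stats.norm.pdf(y, loc=9.0, scale=1.0))        # product of densities
log_likelihood = np.sum(stats.norm.logpdf(y, loc=9.0, scale=1.0))  # sum of log-densities

print(likelihood, log_likelihood)
print(np.isclose(np.log(likelihood), log_likelihood))  # True: the log turns the product into a sum
```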

What is the difference between likelihood and log likelihood?

Likelihood and log likelihood are both measures of how well a statistical model fits the data. The likelihood is the probability (or probability density) of obtaining the observed data given the model, while the log likelihood is its natural logarithm. The log likelihood is often preferred because it is easier to work with mathematically and numerically and allows for easier comparison between models.

Can the log likelihood be negative?

Yes, the log likelihood is very often negative. This occurs whenever the likelihood is less than 1, which is typical, since the likelihood is a product of probabilities or probability densities. A negative value does not indicate a bad fit: models fitted to the same data are compared by which has the higher (less negative) log likelihood, after accounting for the number of parameters.
