
What does the log likelihood say?

  1. Apr 21, 2010 #1
    Hi, here's some information after fitting measurements to a lognormal distribution.

    What exactly does it mean that the log likelihood is -67.175? As I understand it, the log likelihood is the natural logarithm of the likelihood function, which is the probability that these measurements come from this distribution?

    1. Is that correct?
    2. Why is it negative?
    3. How can I decide if this is a good distribution fit? (Graphically it looks satisfying; I'm not in need of extremely exact numbers for what I'm going to use them for.)

    Distribution: Lognormal
    Log likelihood: -67.175
    Domain: 0 < y < Inf
    Mean: 9.04552
    Variance: 1.51012

    Parameter Estimate Std. Err.
    mu 2.19313 0.0208669
    sigma 0.135233 0.0150259

    Estimated covariance of parameter estimates:
    mu sigma
    mu 0.000435428 -2.49709e-018
    sigma -2.49709e-018 0.000225778

    Edit: taking the exp of the log likelihood, the likelihood would be
    exp(-67.175) = something * 10^-30, an incredibly small number! This can't be correct? The likelihood of that distribution can't be that small? The fit is not THAT bad =)

    I would expect a likelihood of about 0.7-0.95, right? Does anyone know what this number means?
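    The smallness is in fact expected: for a continuous distribution, the likelihood is a product of probability *densities* evaluated at each data point, not a probability. A minimal sketch in Python illustrates how such a number arises (the data values here are made up; mu and sigma are the fitted values quoted above):

```python
import math

def lognorm_logpdf(y, mu, sigma):
    # Log of the lognormal density at y
    return (-math.log(y * sigma * math.sqrt(2 * math.pi))
            - (math.log(y) - mu) ** 2 / (2 * sigma ** 2))

# Hypothetical measurements, just to illustrate the scale of the number
data = [7.5, 8.2, 9.1, 9.8, 10.4, 8.9, 9.3, 10.1]
mu, sigma = 2.19313, 0.135233  # the fitted parameters from the post

# Log-likelihood = sum of log-densities over the sample
loglik = sum(lognorm_logpdf(y, mu, sigma) for y in data)
print(loglik)  # negative; exp(loglik) is a product of densities,
               # so it shrinks rapidly with the sample size
```

    With 100 measurements, each contributing a density typically well below e, the product easily reaches 10^-30 even for a perfectly good fit.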
    Last edited: Apr 22, 2010
  3. Apr 23, 2010 #2
    You're looking at the maximum log-likelihood estimate of the fitted curve. It's small because it's the result of a highly iterative procedure. In fact you don't really need to know much of the theory behind MLE. Your program will give Goodness of Fit and other stats. Basically, if it looks good, it probably is good.

    Here's some information which isn't too technical. It treats MLE in the second part of the discussion. If you're interested in a more mathematical treatment, look up 'maximum likelihood estimation' on Wikipedia.

    Last edited: Apr 24, 2010
  4. Apr 24, 2010 #3
    Well, this isn't curve fitting or regression fitting; it's a fitting of a distribution. And by the way, when I've been fitting curves I've had much larger likelihoods, like 80%, but this time we're talking 10^-30! That would mean that the fit is incredibly, well, astronomically unlikely, which I doubt is the correct interpretation of it, so there must be something else.
  5. Apr 24, 2010 #4
    The number you're referencing has nothing to do with percents. It is negative because numbers in the interval (0, 1) have negative logarithms. I'll not be responding to any more of your posts. Perhaps someone else will, but, so far, no one else has. Perhaps there is a reason.
  6. Apr 24, 2010 #5
    Yeah, I guess no one is able to give a straight answer to the meaning of this measurement. Even if you google it you don't find very much info, and there's nothing in the MATLAB help. But thanks for trying.
  7. May 2, 2010 #6
    You cannot calculate the probability of observing a range of parameters from the likelihood without proper normalization. Your interpretation is fallacious. Frequentists believe that parameter values are fixed, unknown quantities. If you really want to calculate probabilities of parameters, consider fiducial inference or a Bayesian model.

    In common statistical practice, likelihood functions are used to compare models for the same data. Thus, interpretations of likelihoods are with respect to the plausibility of one model relative to that of another. If you do not intend to compare models for a given data set, do not worry about the actual value of the likelihood function. Comparing likelihoods from different data sets is like comparing apples and oranges. The best approach to assessing model fit is to visualize the model. A histogram of the data with the proposed model curve, or a residual plot from a least-squares regression line, are examples.
  8. May 2, 2010 #7
    The likelihood function is the same as the joint pdf or pmf of your data. The only difference is the order of conditioning. In the likelihood, you condition on the observed data. The joint pdf/pmf conditions on the parameter(s).
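    That view can be made concrete with a short sketch (Python, with made-up data): hold the observed data fixed, treat the log-likelihood as a function of (mu, sigma), and maximize it. For the lognormal the maximizers even have a closed form.

```python
import math

def lognormal_mle(data):
    # With the data held fixed, the log-likelihood is a function of
    # (mu, sigma). For the lognormal, the maximizers are known in
    # closed form: mu is the mean of log(y), and sigma^2 is the
    # (biased, i.e. divide-by-n) variance of log(y).
    logs = [math.log(y) for y in data]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / n)
    return mu, sigma

data = [7.5, 8.2, 9.1, 9.8, 10.4]  # made-up measurements for illustration
mu, sigma = lognormal_mle(data)
print(mu, sigma)
```

    Fitting routines report the log-likelihood evaluated at exactly this kind of maximizing parameter pair.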
  9. May 3, 2010 #8
    I get your point, but I'm actually comparing which distribution has the best fit to my measurements. I'm not comparing measurements with other measurements. So just assume I have 100 values that I want to fit to a distribution, and I only want to know which distribution has the best fit to these values.

    So I try to fit normal, lognormal, logistic, etc. distributions to this set of measurements. Thus the question is simple: if I have one set of measurements and the log likelihood value is, for instance, -65 for a normal distribution fitted to these measurements and -49 for a lognormal distribution fitted to these measurements:
    - which distribution has the best fit?
  10. May 3, 2010 #9



    As your models don't seem to be nested, best look at Akaike's information criterion (AIC) or the Bayesian information criterion (BIC).
  11. May 3, 2010 #10

    OK, so there's not a straight answer to this question? (I know there's a debate between frequentist and Bayesian statistics, but I'm not interested in the metaphysical details here; I just want a good enough straight answer.)

    What you're saying is that merely comparing the log likelihood values gives no indication whatsoever of which distribution has the best fit to a certain set of measurements? Please answer yes or no.

    I don't remember statistics being this complicated when I studied it =)
    OK, ALL I want to do is to find out which distribution has the best fit to a certain set of measurements. Is this really that complicated? Isn't there just a statistical test value, such as a log likelihood, a chi-square test, or something else, that you can look at to compare fits to the same set of measurements? I'm not doing any rocket science here, so the value just has to be an indication and doesn't have to be 100% foolproof from any philosophy-of-science aspect...

    So is there any such value or test that can indicate whether one distribution has a better fit than another to a certain set of measurements? Please answer yes or no.
    If yes, which one?

    For instance, I have a vague memory that a chi-square goodness-of-fit test can be run on a set of measurements and will or won't reject the null hypothesis at a certain level of significance. However, this only works with the normal distribution and not the lognormal, as far as I remember. What I am looking for is something similar, or of similar use, for the lognormal or distributions in general...
    Last edited: May 3, 2010
  12. May 3, 2010 #11



    Yes, you can use the chi-square distribution to compare models, but only if they are nested, that is, one of the two models being compared is the other one with one or several parameters fixed at some predefined value. When you want to compare two models which are not nested, e.g. a Gaussian model vs. a lognormal model, you cannot use this chi-square approach. The AIC or BIC is basically what you want: it compares the models essentially by comparing the deviance corrected for the number of degrees of freedom in the respective models. In practice it is very simple to apply.
  13. May 3, 2010 #12
    DrDu is right. If you want to see which model best describes the data, use AIC or BIC. These will use the log-likelihood value, but they will also take into account the number of estimated parameters. You cannot compare raw log-likelihoods between models.

    This is the straight answer you wanted. Good luck!
  14. May 4, 2010 #13
    OK, so AIC according to Wikipedia is:

    AIC = 2k - 2 ln(L)

    where k is the number of parameters in the model. For both the normal and the lognormal I guess that mu and sigma are the parameters, hence k is 2 in both cases.
    ln(L) is the log likelihood, I assume.

    For the same set of measurements fitted to two different distributions, or models as you call them (normal and lognormal), we get this:

    lognormal model log likelihood: -62.6777
    normal model log likelihood: -70.8926

    k = 2 in both cases, right?
    This gives us:

    AIC for the lognormal model = 2*2 - 2*(-62.6777) = 129.3554
    AIC for the normal model = 2*2 - 2*(-70.8926) = 145.7852

    So the lognormal is the better model of the two, since a lower AIC is better. Is that a correct conclusion?

    P.S. It seems like AIC and BIC are more commonly used when doing regression fitting, and then I understand the use of k... but I'm not doing regression fits here, I'm finding a distribution (k is basically always 1 or 2), but maybe it works anyway? Just wanted to clarify that...
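    For what it's worth, the arithmetic above can be checked with a short Python sketch (the log-likelihood values are copied from this post; `aic` is just an illustrative helper name, not a library function):

```python
def aic(log_likelihood, k):
    # Akaike information criterion: AIC = 2k - 2*ln(L); lower is better
    return 2 * k - 2 * log_likelihood

# Log-likelihoods quoted in this post; both models have k = 2 (mu, sigma)
fits = {"lognormal": -62.6777, "normal": -70.8926}
scores = {name: aic(ll, k=2) for name, ll in fits.items()}
best = min(scores, key=scores.get)  # the model with the lowest AIC
print(scores, best)
```

    With these numbers the lognormal indeed comes out with the lower AIC.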
    Last edited: May 4, 2010
  15. May 4, 2010 #14



    Yes, my k are also mostly 1 or 2 :-) It should be OK anyhow.
    To your other questions: the absolute value of the likelihood ratio is of little relevance. Note especially that it mostly refers to a probability density, which explains why the absolute likelihood is often so extremely small. A somewhat more useful quantity is the deviance, where one considers the difference between the log likelihood and the log likelihood of a "saturated" model in which there is one parameter for each measurement.
  16. May 4, 2010 #15
    When you say likelihood ratio, do you mean what I refer to as the likelihood value, i.e. -62.6777 above?

    In the case of AIC it doesn't mean anything unless you compare it to another one, as I understand it.

    So what is a saturated model? My model is, for instance, a lognormal distribution with values of sigma and mu; what's the saturated model for this?
  17. May 4, 2010 #16



    I meant your likelihood function, not likelihood ratio, sorry.