
Maximum likelihood estimator (MLE)

  1. Feb 16, 2005 #1
    so by definition, the likelihood function L(w, theta) is the product of the pdf fw(w, theta) evaluated at n data points.

    but I don't know how they do those calculations...

    so for example:

    fy(y, theta) = [tex]1/\theta^2 ye^{-y/\theta}[/tex]

    L(theta) = [tex] \theta^{-2n}\left(\prod y_{i}\right)e^{-\frac{1}{\theta}\sum y_{i}} [/tex]

    so first of all, I'm looking at this but I don't know how they went from this to that... I looked at another problem:

    how did they go from [tex] e^{-(y-\theta)} [/tex] to [tex] \prod e^{-(y_{i}-\theta)} [/tex]?

    I don't see a pattern... I compared it with the definition, but I just don't get it.
    I mean, when they wrote L(theta) it seems they added some n's and i's somewhere, and I don't know where those came from.
  3. Feb 16, 2005 #2
    It looks like there are some typos in your expression.

    You are trying to estimate [tex]\theta[/tex] using n data points, labelled [tex]y_i[/tex]. The single likelihood for [tex]\theta[/tex] given one data point [tex]y_i[/tex] is:

    [tex]1/\theta^2 y_{i}e^{-y_{i}/\theta}[/tex]

    To get the likelihood of [tex]\theta[/tex] for all n data points, you multiply the single likelihoods together. That's just a matter of multiplying the terms and summing up what's in the exponential.
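    To see the "multiply the single likelihoods" step concretely, here is a minimal numerical sketch; the data values and theta are made up purely for illustration:

    ```python
    import math

    def single_likelihood(y, theta):
        # f(y; theta) = (1/theta^2) * y * exp(-y/theta)
        return (1.0 / theta**2) * y * math.exp(-y / theta)

    # hypothetical data points, purely for illustration
    data = [1.2, 0.7, 2.5, 1.9]
    theta = 1.5

    # the full likelihood L(theta) is the product of the single likelihoods
    L = 1.0
    for y in data:
        L *= single_likelihood(y, theta)

    print(L)
    ```

    Each factor in the loop is one [tex]f(y_i;\theta)[/tex]; the loop is exactly the product in the definition of L.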
  4. Feb 16, 2005 #3
    The indices are a shorthand for indeterminately long products. For example, given your definition of L, we evaluate the pdf at n values of y, {y1, y2, ..., yn} and take the product. So if
    [tex]f(y) = \frac{1}{\theta^2} ye^{-\frac{y}{\theta}}[/tex]
    we get
    [tex]f(y_1)f(y_2)\cdots f(y_n) = \prod_{i=1}^n f(y_i)[/tex]

    [tex] = \prod_{i=1}^n \frac{1}{\theta^2} y_i e^{-\frac{y_i}{\theta}}[/tex]

    [tex] = \frac{1}{\theta^{2n}} \prod_{i=1}^n y_i e^{-\frac{y_i}{\theta}}[/tex]

    [tex] = \frac{1}{\theta^{2n}} \left(y_1 e^{-\frac{y_1}{\theta}}...y_n e^{-\frac{y_n}{\theta}}\right)[/tex]

    [tex] = \frac{1}{\theta^{2n}} \left(y_1...y_n e^{-\frac{y_1+y_2+...+y_n}{\theta}}\right) [/tex]

    [tex] = \frac{1}{\theta^{2n}} \left(\prod_{i=1}^n y_i\right) \left(e^{-\frac{1}{\theta}\sum_{i=1}^n y_i}\right) [/tex]
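    You can also sanity-check that algebra numerically: the brute-force product of the pdf values agrees with the factored form at the end. A sketch with made-up data:

    ```python
    import math

    def f(y, theta):
        # pdf: f(y; theta) = (1/theta^2) * y * exp(-y/theta)
        return (1.0 / theta**2) * y * math.exp(-y / theta)

    data = [0.8, 1.4, 2.1]   # hypothetical sample
    theta = 1.2
    n = len(data)

    # left-hand side: product of the pdf evaluated at each data point
    lhs = math.prod(f(y, theta) for y in data)

    # right-hand side: factored form theta^(-2n) * (prod y_i) * exp(-(1/theta) * sum y_i)
    rhs = theta**(-2 * n) * math.prod(data) * math.exp(-sum(data) / theta)

    print(math.isclose(lhs, rhs, rel_tol=1e-9))
    ```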
  5. Feb 19, 2005 #4
    ok, thanks, now that makes a little more sense, but I'll think about it some more... still a bit confusing.

    what about getting ln L(theta)? the book does some weird steps and I don't know what it did.

    ln L(theta) = [tex]-2n \ln \theta + \ln \prod y_{i} - \frac{1}{\theta} \sum y_{i} [/tex]

    how did that happen?

    and also, I'm not understanding where they put the n's. maybe I'm having trouble with the definition. like, in the first problem, why did it become [tex]\theta^{-2n} [/tex]?

    and for this problem,

    we have fy(y, theta) = [tex] e^{-(y-\theta)}, [/tex] so L(theta) = [tex] e^{-\sum y_{i} + n\theta} [/tex]
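    Taking the log turns the product into a sum, which is where each of those three terms comes from: [tex]\ln(\theta^{-2n}) = -2n\ln\theta[/tex], the product of the [tex]y_i[/tex] becomes [tex]\ln\prod y_i[/tex], and the exponential drops to [tex]-\frac{1}{\theta}\sum y_i[/tex]. A quick numerical check, again with made-up values:

    ```python
    import math

    data = [1.1, 0.6, 1.8, 2.4]   # hypothetical sample
    theta = 1.3
    n = len(data)

    # likelihood via the factored form from the earlier posts
    L = theta**(-2 * n) * math.prod(data) * math.exp(-sum(data) / theta)

    # log-likelihood expanded term by term:
    # ln L = -2n*ln(theta) + sum(ln y_i) - (1/theta) * sum(y_i)
    log_L = -2 * n * math.log(theta) + sum(math.log(y) for y in data) - sum(data) / theta

    print(math.isclose(math.log(L), log_L, rel_tol=1e-9))
    ```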
  6. Feb 19, 2005 #5
    Because [tex]\theta^{-2}[/tex] multiplied by itself n times is [tex]\theta^{-2n}[/tex]. They just took it out of the n-product by associativity.

    What happens when you multiply [tex]e^{a}[/tex] with [tex]e^{b}[/tex]? That's all that's going on here. :)
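    Both facts being used here, that [tex](\theta^{-2})^n = \theta^{-2n}[/tex] and that [tex]e^{a}e^{b} = e^{a+b}[/tex], can be checked directly (the numbers below are arbitrary):

    ```python
    import math

    theta, n = 1.7, 5
    a, b = 0.3, 1.2

    # multiplying theta^(-2) by itself n times gives theta^(-2n)
    lhs_pow = (theta ** -2) ** n
    rhs_pow = theta ** (-2 * n)

    # exponents add under multiplication: e^a * e^b = e^(a+b)
    lhs_exp = math.exp(a) * math.exp(b)
    rhs_exp = math.exp(a + b)

    print(math.isclose(lhs_pow, rhs_pow), math.isclose(lhs_exp, rhs_exp))
    ```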