
Homework Help: Maximum likelihood of Poisson distribution

  1. Aug 20, 2008 #1
    1. The problem statement, all variables and given/known data
    Suppose X has a Poisson distribution with parameter lambda. Given a random sample of n observations:
    (a) Find the MLE of lambda, denoted [tex] \hat \lambda [/tex].
    (b) Find the expected value and variance of [tex] \hat \lambda [/tex].
    (c) Show that [tex] \hat \lambda [/tex] is a consistent estimator of lambda.

    2. Relevant equations
    [tex] P_X(x) = \frac{e^{-\lambda} \lambda^x}{x!} [/tex]
    E(X) = lambda (the mean?)
    Var(X) = lambda (also the mean?)

    3. The attempt at a solution
    I am really struggling with stats; I hope someone can help.
    I try to find the likelihood function:

    [tex] L(x_1, x_2, \ldots, x_n \mid \lambda) = \prod_{i=1}^n \frac{e^{-\lambda} \lambda^{x_i}}{x_i!} [/tex]

    then I'm stuck.
    Differentiating one term with respect to lambda, I get

    [tex] -\frac{e^{-\lambda}\, x\, \lambda^{x-1}}{x!} [/tex]

    Is that the answer?
    Sorry about the notation; is there somewhere on this site to learn how to enter the symbols?

    Any help greatly appreciated.
     
  3. Aug 20, 2008 #2
    OK, I think I've got the first part, but if my work could be checked it would be great (at least I'll get a reply for that if not for anything else).

    [tex] L(\lambda \mid x_1, x_2, \ldots, x_n) = \frac{e^{-\lambda} \lambda^{x_1}}{x_1!} \cdots \frac{e^{-\lambda} \lambda^{x_n}}{x_n!} = \frac{e^{-n\lambda}\, \lambda^{\sum_i x_i}}{x_1! \cdots x_n!} [/tex]

    [tex] \log L = -n\lambda + (\ln \lambda) \sum_{i=1}^n x_i - \ln\left(\prod_{i=1}^n x_i!\right) [/tex]

    [tex] \frac{d \log L(\lambda)}{d\lambda} = -n + \frac{\sum_{i=1}^n x_i}{\lambda} = 0 [/tex]

    which gives [tex] \hat\lambda = \frac{1}{n} \sum_{i=1}^n x_i [/tex], the MLE.

    For part b, a Poisson distribution has lambda = mean = variance, so the mean and variance equal the result above.
    For part c, the sample mean is a consistent estimator of lambda when the X_i are Poisson distributed, and the sample mean equals the MLE, so the MLE is a consistent estimator.

    Corrections are most welcome.
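    (Not part of the original posts, but the algebra above can be sanity-checked numerically. A pure-Python sketch: the `sample_poisson` helper and the parameter values 3.0 and 500 are illustrative choices, not from the thread. It draws Poisson data and checks that the log-likelihood peaks at the sample mean.)

```python
import math
import random

def poisson_log_likelihood(lam, xs):
    # log L = -n*lambda + ln(lambda) * sum(x_i) - sum(ln(x_i!))
    n = len(xs)
    return -n * lam + math.log(lam) * sum(xs) - sum(math.lgamma(x + 1) for x in xs)

def sample_poisson(lam, rng):
    # Knuth's method: count uniform draws until their running product falls below e^{-lambda}
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(42)
xs = [sample_poisson(3.0, rng) for _ in range(500)]
mle = sum(xs) / len(xs)  # hat lambda = the sample mean

# The sample mean should beat nearby candidate values of lambda
for other in (mle * 0.9, mle * 1.1):
    assert poisson_log_likelihood(mle, xs) >= poisson_log_likelihood(other, xs)
print(mle)  # should land near the true lambda of 3.0
```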
     
  4. Aug 20, 2008 #3

    statdad

    Homework Helper

    You are correct in saying

    [tex]
    \hat \lambda = \frac 1 n \sum_{i=1}^n x_i
    [/tex]

    is the MLE of [tex] \lambda [/tex]. To obtain the mean and variance of [tex] \hat \lambda [/tex], use the general properties (which hold since the [tex] x_i [/tex] are independent):

    [tex]
    E \hat \lambda = \frac 1 n \sum_{i=1}^n Ex_i
    [/tex]

    and

    [tex]
    \text{Var} \left( \hat \lambda \right) = \frac 1 {n^2} \sum_{i=1}^n \text{Var}(x_i)
    [/tex]

    Simplifying the variance of [tex] \hat \lambda [/tex] should also help you show that it is a consistent estimator of [tex] \lambda [/tex].
     
  5. Aug 21, 2008 #4
    Thank you for your reply, but I'm still stuck. By general properties, do you mean
    [tex] E(X_i) = \sum_{x_i} \frac{x_i e^{-\lambda} \lambda^{x_i}}{x_i!} [/tex], that is, [tex] E(X) = \sum_x x\, p_X(x) [/tex] for a discrete rv?
    If so, I just seem to go round in circles.
     
  6. Aug 21, 2008 #5

    statdad


    No; rather, these two facts.
    First,

    [tex]
    \text{E} \left(\sum_{i=1}^n \frac{X_i} n \right) = \sum_{i=1}^n \frac{\text{E}X_i}{n}
    [/tex]

    and

    [tex]
    \text{Var} \left(\sum_{i=1}^n \frac{X_i} n\right) = \sum_{i=1}^n \frac{\text{Var}(X_i)}{n^2}
    [/tex]

    You already know the mean and variance of a Poisson random variable; calculating the mean and variance of the MLE for [tex] \lambda [/tex] will allow you to conclude that the variance goes to zero, so ...
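    (Not part of the original posts, but these two properties can be checked by simulation. A pure-Python sketch: the parameter values lam = 4.0, n = 50, and the Knuth-style sampler are illustrative assumptions. It draws many samples of size n, records the MLE each time, and compares the empirical mean and variance of the MLE against lambda and lambda/n.)

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method: count uniform draws until their running product falls below e^{-lambda}
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
lam, n, reps = 4.0, 50, 4000

# Record the MLE (sample mean) over many independent samples of size n
mles = []
for _ in range(reps):
    xs = [sample_poisson(lam, rng) for _ in range(n)]
    mles.append(sum(xs) / n)

mean_of_mle = sum(mles) / reps
var_of_mle = sum((m - mean_of_mle) ** 2 for m in mles) / reps

print(mean_of_mle)  # theory: E(hat lambda) = lambda = 4.0
print(var_of_mle)   # theory: Var(hat lambda) = lambda/n = 0.08
```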
     
  7. Aug 21, 2008 #6
    Please excuse my ignorance, but is E(X_i) = lambda or lambda_i?
    If the latter, then the sample mean is (sum from i=1 to n of lambda_i)/n, and the variance has n^2 in the denominator.
    The variance approaches 0 as n approaches infinity, and a variance that goes to 0 means the estimator is consistent.
    Have I got it?
     
  8. Aug 21, 2008 #7

    statdad


    Since all of the [tex] X_i [/tex] values come from the same distribution, they all have the same mean ([tex] \lambda [/tex]) and they all have the same variance. You should be able to show these two points (where [tex] \hat \lambda [/tex] is the MLE):

    1. [tex] \text{E}\left(\hat \lambda\right) = \lambda [/tex]

    2. [tex] \text{Var}\left(\hat \lambda\right) [/tex] converges to zero as [tex] n \to \infty [/tex]

    With these things done you've shown (by an appeal to the result you seem to mention in your most recent post) that the MLE is consistent.
     
  9. Aug 21, 2008 #8
    You are patient!
    So summing the E(X_i) gives n*lambda, since each E(X_i) = lambda; dividing by n gives E(hat lambda) = lambda.
    Similarly, Var(hat lambda) = (n*lambda)/n^2 = lambda/n, which converges to 0.
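    Written out in full (adding the Chebyshev step that turns "variance goes to 0" into consistency):

    [tex] \text{E}\left(\hat\lambda\right) = \frac{1}{n} \sum_{i=1}^n \text{E}(X_i) = \frac{n\lambda}{n} = \lambda [/tex]

    [tex] \text{Var}\left(\hat\lambda\right) = \frac{1}{n^2} \sum_{i=1}^n \text{Var}(X_i) = \frac{n\lambda}{n^2} = \frac{\lambda}{n} \to 0 \text{ as } n \to \infty [/tex]

    and then for any [tex] \epsilon > 0 [/tex], by Chebyshev's inequality,

    [tex] P\left(\left|\hat\lambda - \lambda\right| \ge \epsilon\right) \le \frac{\text{Var}\left(\hat\lambda\right)}{\epsilon^2} = \frac{\lambda}{n\epsilon^2} \to 0 [/tex]

    so [tex] \hat\lambda [/tex] is a consistent estimator of [tex] \lambda [/tex].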
    Please tell me I'm done.
     
  10. Aug 21, 2008 #9

    statdad


    Okay: You are done. :-)
    Write it up neatly.
     
  11. Aug 21, 2008 #10
    Thank you very much for your help
     
  12. Nov 24, 2009 #11
    Thanks for your help and patience with this, statdad! I googled a similar question and found this thread extremely helpful.
     