Maximum likelihood of Poisson distribution

SUMMARY

The discussion focuses on finding the maximum likelihood estimator (MLE) of the Poisson parameter $\lambda$ from a random sample of $n$ observations. The MLE is derived as $\hat\lambda = \frac{1}{n}\sum_{i=1}^n x_i$, the sample mean. The expected value and variance of $\hat\lambda$ are established as $\lambda$ and $\lambda/n$, respectively; since $\hat\lambda$ is unbiased and its variance vanishes as $n \to \infty$, it is a consistent estimator of $\lambda$.

PREREQUISITES
  • Understanding of Poisson distribution and its properties
  • Familiarity with Maximum Likelihood Estimation (MLE)
  • Knowledge of statistical concepts such as expected value and variance
  • Ability to differentiate and manipulate likelihood functions
NEXT STEPS
  • Study the derivation of MLE for different distributions, including Normal and Binomial
  • Learn about the properties of consistent estimators in statistical inference
  • Explore the implications of the Law of Large Numbers on estimator consistency
  • Investigate the use of R or Python for implementing MLE calculations
USEFUL FOR

Statisticians, data analysts, and students studying statistical inference, particularly those focusing on estimation techniques and the properties of estimators in the context of Poisson distributions.

dim&dimmer

Homework Statement


Suppose $X$ has a Poisson distribution with parameter $\lambda$. Given a random sample of $n$ observations:
Find the MLE of $\lambda$, denoted $\hat\lambda$.
Find the expected value and variance of $\hat\lambda$.
Show that $\hat\lambda$ is a consistent estimator of $\lambda$.

Homework Equations


$$P_X(x) = \frac{e^{-\lambda}\lambda^x}{x!}$$
$E(X) = \lambda$? (or the mean?)
$\text{Var}(X) = \lambda$?? (or the mean?)

The Attempt at a Solution


I am really struggling with stats; I hope someone can help.
I try to find the likelihood function:

$$L(x_1, x_2, \ldots, x_n \mid \lambda) = \prod_{i=1}^{n} \frac{e^{-\lambda}\lambda^{x_i}}{x_i!}$$

then I'm stuck.
Differentiating with respect to $\lambda$,

$$-\frac{e^{-\lambda}\, x \lambda^{x-1}}{x!}$$

is that the answer?
Sorry about the notation; is there somewhere on this site to get the symbols?

Any help greatly appreciated.
 
OK, I think I've got the first part, but if my work could be checked it would be great (at least I'll get a reply for that if not for anything else).

$$L(\lambda \mid x_1, x_2, \ldots, x_n) = \frac{e^{-\lambda}\lambda^{x_1}}{x_1!} \cdots \frac{e^{-\lambda}\lambda^{x_n}}{x_n!} = \frac{e^{-n\lambda}\,\lambda^{\sum x_i}}{x_1! \cdots x_n!}$$

$$\log L = -n\lambda + (\ln \lambda)\sum x_i - \ln\!\left(\prod x_i!\right)$$

$$\frac{d \log L(\lambda)}{d\lambda} = -n + \frac{\sum x_i}{\lambda} = 0 \quad\Rightarrow\quad \hat\lambda = \frac{\sum x_i}{n}\,, \text{ the MLE.}$$

For part (b), Poisson distributions have $\lambda = \text{mean} = \text{variance}$, so the mean and variance equal the result above.
For part (c), the sample mean is a consistent estimator of $\lambda$ when the $X_i$ are Poisson distributed, and the sample mean is equal to the MLE; therefore the MLE is a consistent estimator.

Corrections are most welcome.
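As an extra sanity check (my own addition, not part of the derivation), here is a minimal Python sketch, assuming NumPy and SciPy are available, that maximizes the Poisson log-likelihood numerically and compares the result with the sample mean; the value of lambda and the sample size are made up for illustration:

```python
# Minimal sketch (assumes NumPy and SciPy): numerically maximize the
# Poisson log-likelihood and compare with the closed-form MLE, the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(0)
true_lambda = 3.5                        # hypothetical value, for illustration
x = rng.poisson(true_lambda, size=1000)  # a random sample of n observations

# Negative log-likelihood of lambda given the whole sample
def neg_log_lik(lam):
    return -poisson.logpmf(x, lam).sum()

# Maximizing the likelihood = minimizing its negative on a bounded interval
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 20.0), method="bounded")

print("numerical MLE:", res.x)     # should agree with...
print("sample mean  :", x.mean())  # ...the closed-form MLE, sum(x_i)/n
```

The two printed numbers should match to several decimal places, which is a reassuring check on the algebra above.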
 
You are correct in saying

$$\hat \lambda = \frac 1 n \sum_{i=1}^n x_i$$

is the MLE of $\lambda$. To obtain the mean and variance of $\hat \lambda$, think of the general properties that (since the $x_i$ are independent)

$$E \hat \lambda = \frac 1 n \sum_{i=1}^n E x_i$$

and

$$\text{Var}\left( \hat \lambda \right) = \frac 1 {n^2} \sum_{i=1}^n \text{Var}(x_i).$$

Simplifying the variance of $\hat \lambda$ should also help you show that it is a consistent estimator of $\lambda$.
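If it helps to see these two properties numerically, here is a short simulation sketch, assuming NumPy, with made-up values of $\lambda$ and $n$: it computes the MLE for many repeated samples and compares the empirical mean and variance of $\hat\lambda$ with $\lambda$ and $\lambda/n$.

```python
# Sketch (assumes NumPy): simulate many samples of size n, compute the MLE
# (the sample mean) for each, and compare the empirical mean/variance of the
# MLE with the theoretical values lambda and lambda/n.
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 3.5, 50, 200_000  # hypothetical lambda, sample size, replications

samples = rng.poisson(lam, size=(reps, n))  # each row is one sample of size n
mle = samples.mean(axis=1)                  # MLE of lambda for each sample

print("mean of MLE:", mle.mean(), " (theory: lambda     =", lam, ")")
print("var of MLE :", mle.var(),  " (theory: lambda / n =", lam / n, ")")
```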
 
Thank you for your reply, but I'm still stuck. By "general properties" do you mean that

$$E(x_i) = \sum_{i=1}^{n} \frac{x_i \, e^{-\lambda} \lambda^{x_i}}{x_i!},$$

that is, $E(X) = \sum_x x \, p_X(x)$ for a discrete r.v.? If so, I just seem to go round in circles.
 
No, rather these two notions.
First,

$$\text{E}\left(\sum_{i=1}^n \frac{X_i}{n}\right) = \sum_{i=1}^n \frac{\text{E}X_i}{n}$$

and

$$\text{Var}\left(\sum_{i=1}^n \frac{X_i}{n}\right) = \sum_{i=1}^n \frac{\text{Var}(X_i)}{n^2}.$$

You already know the mean and variance of a Poisson random variable; calculating the mean and variance of the MLE for $\lambda$ will allow you to conclude that the variance goes to zero, so ...
 
Please excuse my ignorance, but is $E(X_i) = \lambda$ or $\lambda_i$?
If the latter, then the sample mean is $\sum_{i=1}^n \lambda_i / n$, and the variance has $n^2$ as its denominator.
The variance approaches 0 as $n$ approaches infinity, and a variance that vanishes means that the estimator is consistent.
Have I got it?
 
Since all of the $X_i$ values come from the same distribution, they all have the same mean ($\lambda$) and they all have the same variance. You should be able to show these points ($\hat \lambda$ is the MLE):

1. $\text{E}\left(\hat \lambda\right) = \lambda$

2. $\text{Var}\left(\hat \lambda\right)$ converges to zero as $n \to \infty$

With these things done you've shown (by an appeal to the result you seem to mention in your most recent post) that the MLE is consistent.
 
You are patient!
So summing $E X_i$ gives $n\lambda$, as each $E X_i = \lambda$; then dividing by $n$ gives $E\hat\lambda = \lambda$.
Similarly, $\text{Var}(\hat\lambda) = \lambda/n$, which converges to 0.
Please tell me I'm done.
 
Okay: You are done. :-)
Write it up neatly.
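For the write-up, a tiny Python sketch (assuming NumPy; the $\lambda$ value is made up) can illustrate consistency: as $n$ grows, the MLE concentrates around the true $\lambda$.

```python
# Sketch (assumes NumPy): as n grows, the MLE sum(x_i)/n concentrates around
# the true lambda, since E(mle) = lambda and Var(mle) = lambda/n -> 0.
import numpy as np

rng = np.random.default_rng(2)
lam = 3.5  # hypothetical true lambda

for n in (10, 100, 1_000, 10_000, 100_000):
    mle = rng.poisson(lam, size=n).mean()
    print(f"n = {n:>6}: mle = {mle:.4f}, error = {abs(mle - lam):.4f}")
```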
 
Thank you very much for your help.
 
Thanks for your help and patience with this, statdad! I googled a similar question and found this thread extremely helpful.
 
