Likelihood function of the gamma distribution

AI Thread Summary
The discussion focuses on formulating the likelihood function for a gamma distribution given a random sample of size n with a known shape parameter r. The likelihood function is the product of the individual density functions, L(underline{y}; r, λ) = ∏ f(y_i; r, λ). After taking the logarithm of the likelihood function, the derivative with respect to λ is set to zero, leading to the equation nr/λ = Σy. The maximum likelihood estimator (MLE) of λ is found to be λ̂ = rn/Σy, which is confirmed as the correct stopping point. The conversation also touches on whether the MLE is unbiased, with the reminder that the sample average cannot be equated with the distribution mean.
safina
There is a random sample of size n from a gamma distribution, with known r. Please help me formulate the likelihood function of the gamma distribution.

I understand that the density function is the following:
f\left(y;r,\lambda\right)=\frac{\lambda}{\Gamma\left(r\right)}\left(\lambda y\right)^{r-1}e^{-\lambda y}

I also understand that the likelihood function is the product of the individual density functions.
Assuming independence, I write it as:
L\left(\underline{y};r, \lambda\right)=\left[f\left(y;r,\lambda\right)\right]^{n}
=\left[\frac{\lambda^{r}y^{r-1}e^{-\lambda y}}{\Gamma\left(r\right)}\right]^{n}

I am now stuck on handling the product of the y^{r-1} terms and the \Gamma\left(r\right) factors.

Please help me with what to do next, since I need this form to find the maximum likelihood estimator of \lambda.
 
Take the log of both sides. The log function is monotonic so "\lambda maximizes log L" iff "\lambda maximizes L."
 
EnumaElish said:
Take the log of both sides. The log function is monotonic so "\lambda maximizes log L" iff "\lambda maximizes L."

Okay, thanks for that. Can you help me further with the exact form of the likelihood function, so that I can then take the log of both sides?
 
safina said:
I also understand that the likelihood function is the product of the individual density functions.
Assuming independence, I write it as:
L\left(\underline{y};r, \lambda\right)=\left[f\left(y;r,\lambda\right)\right]^{n}

Not quite - the likelihood function is
L\left(\underline{y};r, \lambda\right)=\prod_{i=1}^n f\left(y_i;r,\lambda\right)
since it's for a sample of size n. After taking the log and differentiating with respect to \lambda you'll find that terms like \Gamma(r) disappear.
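
For reference, a quick sketch of what that product looks like when written out with the density above:
L\left(\underline{y};r,\lambda\right)=\prod_{i=1}^n \frac{\lambda^{r}y_i^{r-1}e^{-\lambda y_i}}{\Gamma\left(r\right)}=\frac{\lambda^{nr}\left(\prod_{i=1}^n y_i\right)^{r-1}e^{-\lambda \sum_{i=1}^n y_i}}{\Gamma\left(r\right)^{n}}
so that
\log L\left(\underline{y};r,\lambda\right)=nr\log\lambda+\left(r-1\right)\sum_{i=1}^n \log y_i-\lambda\sum_{i=1}^n y_i-n\log\Gamma\left(r\right)
Only the first and third terms involve \lambda, which is why the \Gamma(r) term drops out after differentiating.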
 
bpet said:
Not quite - the likelihood function is
L\left(\underline{y};r, \lambda\right)=\prod_{i=1}^n f\left(y_i;r,\lambda\right)
since it's for a sample of size n. After taking the log and differentiating with respect to \lambda you'll find that terms like \Gamma(r) disappear.

Alright, thank you for all your replies. I've tried figuring them out. Here are the outcomes. Kindly check if these are right.

\frac{d}{d\lambda} log L\left(\underline{y}; r, \lambda\right) = \frac{nr}{\lambda} - \sum y
Equating the derivative above to zero results in:
\frac{nr}{\hat{\lambda}}= \sum y
Solving for \hat{\lambda}, I replaced \sum y with n\bar{y} and ended up with the equation \hat{\lambda} = \lambda.

Is this the result I am supposed to have?

If this really is it, is this MLE unbiased?
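
As a check on the derivative above, starting from the log-likelihood sketched earlier:
\frac{d}{d\lambda}\log L\left(\underline{y};r,\lambda\right)=\frac{d}{d\lambda}\left[nr\log\lambda+\left(r-1\right)\sum_{i=1}^n \log y_i-\lambda\sum_{i=1}^n y_i-n\log\Gamma\left(r\right)\right]=\frac{nr}{\lambda}-\sum_{i=1}^n y_i
and the second derivative is -nr/\lambda^{2}<0, so the stationary point is indeed a maximum.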
 
You may not assume sample average = distribution mean. The sample average is just a random variable, like y itself; it does not have a constant value.
 
EnumaElish said:
You may not assume sample average = distribution mean. The sample average is just a random variable, like y itself; it does not have a constant value.

Oh, here's what I've done.
\frac{nr}{\hat{\lambda}}= \sum y
Solving for \hat{\lambda}:
\hat{\lambda} = \frac{rn}{\sum y} = \frac{rn}{n \bar{y}} = \frac{rn}{n \frac{r}{\lambda}} = \lambda

Is this not right?
 
I haven't checked your math, but assuming that you haven't made a mistake, you should stop at \hat{\lambda} = \frac{rn}{\sum y}. That's your MLE of \lambda.
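
As a quick numerical sanity check of that closed-form answer, here is a minimal sketch that simulates gamma data and compares \hat{\lambda} = rn/\sum y with a direct maximization of the log-likelihood (assuming NumPy and SciPy are available; the parameter values are illustrative):

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(0)
r, lam, n = 3.0, 2.0, 10000                       # known shape, true rate, sample size
y = rng.gamma(shape=r, scale=1.0 / lam, size=n)   # NumPy parameterizes the gamma by scale = 1/rate

def neg_log_lik(lam_):
    # -log L(lambda) = -[ n r log(lambda) + (r-1) sum(log y) - lambda sum(y) - n log Gamma(r) ]
    return -(n * r * np.log(lam_) + (r - 1) * np.log(y).sum()
             - lam_ * y.sum() - n * gammaln(r))

closed_form = r * n / y.sum()                     # the MLE derived in this thread
numeric = minimize_scalar(neg_log_lik, bounds=(1e-6, 50), method="bounded").x
print(closed_form, numeric)

The two values should agree, and for a large sample both land near the true rate of 2, though for any finite sample the estimate need not equal the true \lambda exactly.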
 