Finding maximum likelihood estimator


Homework Help Overview

The discussion revolves around finding the maximum likelihood estimators for the parameters α and β of a given probability density function involving independent random variables. The original poster presents their attempts at deriving the log-likelihood function and setting its derivatives to zero to find the estimators.

Discussion Character

  • Exploratory, Mathematical reasoning, Assumption checking

Approaches and Questions Raised

  • Participants discuss the derivation of the log-likelihood function and the subsequent partial derivatives with respect to α and β. There are questions regarding the correctness of the log-likelihood function and the implications of setting derivatives equal to zero, particularly in the context of constraints on β.

Discussion Status

Some participants have offered hints regarding potential mistakes in the computation of the log-likelihood function. There is an exploration of the implications of constraints on the parameters, particularly how they affect the derivatives and the conditions for maximizing the likelihood function.

Contextual Notes

There are constraints discussed regarding the values of β, specifically that it must be greater than or equal to the maximum of the observed data. This introduces complexity into the optimization problem that is being analyzed.

ptolema

Homework Statement

The independent random variables ##X_1, \ldots, X_n## have the common probability density function ##f(x|\alpha, \beta)=\frac{\alpha}{\beta^{\alpha}}x^{\alpha-1}## for ##0 \leq x \leq \beta##. Find the maximum likelihood estimators of ##\alpha## and ##\beta##.

Homework Equations

log likelihood (LL) = n ln(α) - nα ln(β) + (α-1) ∑ ln(x_i)
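
For reference, the expression above comes from the product of the individual densities (valid only when every ##x_i \leq \beta##):

$$L(\alpha,\beta) = \prod_{i=1}^{n} \frac{\alpha}{\beta^{\alpha}}\, x_i^{\alpha-1} = \frac{\alpha^{n}}{\beta^{n\alpha}} \left(\prod_{i=1}^{n} x_i\right)^{\alpha-1},$$

and taking logarithms gives ##LL = n\ln\alpha - n\alpha\ln\beta + (\alpha-1)\sum_i \ln x_i##.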

The Attempt at a Solution


When I take the partial derivatives of the log-likelihood (LL) with respect to α and β and set them equal to zero, I get:
(1) d(LL)/dα = n/α -n ln(β) + ∑(ln xi) = 0 and
(2) d(LL)/dβ = -nα/β = 0

I am unable to solve for α and β from this point, because I get α=0 from equation (2), but this clearly does not work when you substitute α=0 into equation (1). Can someone please help me figure out what I should be doing?
 
So there might be some mistakes in the way you computed the log-likelihood (LL) function. The term premultiplying log(β) should probably be reworked. Hint: log(β^y) = y log(β). But what is y? It is not αn.
 
ptolema said:

Your expression for LL is correct, but condition (2) is wrong. Your problem is
$$\max_{a,b}\; LL = n \ln(a) - n a \ln(b) + (a-1) \sum \ln(x_i), \qquad \text{subject to } b \geq m \equiv \max(x_1, x_2, \ldots, x_n).$$
Here, I have written ##a,b## instead of ##\alpha, \beta##. The constraint on ##b## comes from your requirement ##0 \leq x_i \leq b \; \forall i##. When you have a bound constraint, you cannot necessarily set the derivative to zero; in fact, what replaces (2) is:
$$\frac{\partial LL}{\partial b} \leq 0, \quad \text{and either}\quad \frac{\partial LL}{\partial b} = 0 \ \text{ or }\ b = m.$$

For more on this type of condition, see, e.g.,
http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions


In the notation of the above link, you want to maximize a function ##f = LL##, subject to no equalities, and an inequality of the form ##g \equiv m - b \leq 0##. The conditions stated in the above link are that
$$\frac{\partial LL}{\partial a} = \mu\, \frac{\partial g}{\partial a} \equiv 0, \qquad \frac{\partial LL}{\partial b} = \mu\, \frac{\partial g}{\partial b} \equiv -\mu.$$
Here, ##\mu \geq 0## is a Lagrange multiplier associated with the inequality constraint, and the b-condition above reads as ##\partial LL / \partial b \leq 0##, as I already stated. Furthermore, the so-called "complementary slackness" condition is that either ##\mu = 0## or ##g = 0##, as already stated.

Note that if ##a/b \geq 0##, you have already satisfied the b-condition, and if ##a/b > 0## you cannot have ##\partial LL / \partial b = 0##, so you must have ##b = m##.
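
With ##\hat b = m## from the argument above, equation (1) of the original post then determines ##\hat a##:

$$\frac{n}{\hat a} - n\ln m + \sum_{i=1}^{n}\ln x_i = 0 \quad\Longrightarrow\quad \hat a = \frac{n}{n\ln m - \sum_i \ln x_i} = \frac{n}{\sum_{i=1}^{n}\ln(m/x_i)}.$$

The denominator is nonnegative because ##x_i \leq m## for every ##i## (and positive unless every observation equals ##m##).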
 
Ray, shouldn't it be log((b^a)^N) = a^N log(b), not aN log(b)?
 
Mugged said:
Ray, shouldn't it be log((b^a)^N) = a^N log(b), not aN log(b)?

We have ## (b^a)^2 = b^a \cdot b^a = b^{2a},## etc.
 
Ray Vickson said:
We have ## (b^a)^2 = b^a \cdot b^a = b^{2a},## etc.

Ah..ok, my bad. This problem is harder than I thought...KKT coming in a statistics problem. Thanks.
 
Mugged said:
Ah..ok, my bad. This problem is harder than I thought...KKT coming in a statistics problem. Thanks.

It's not that complicated in this case. For ##a,b > 0## the function ##LL(a,b)## is strictly decreasing in ##b##, so for any ##a > 0## its maximum over ##b \geq m \,(m > 0)## lies at ##b = m##. You don't even need calculus to conclude this.
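
As a quick numerical sanity check (a minimal sketch, assuming NumPy and SciPy are available; sampling uses the inverse CDF ##X = \beta U^{1/\alpha}## of this density), the closed-form estimators ##\hat\beta = \max_i x_i## and ##\hat\alpha = n/\sum_i \ln(\hat\beta/x_i)## can be compared with a numerical maximization of ##LL## over ##\alpha## with ##\beta## fixed at the sample maximum:

Code:
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
alpha_true, beta_true, n = 2.5, 4.0, 500

# Inverse-CDF sampling: F(x) = (x/beta)^alpha on [0, beta], so X = beta * U**(1/alpha).
x = beta_true * rng.uniform(size=n) ** (1.0 / alpha_true)

# Closed-form MLEs: beta_hat sits at the boundary of the constraint b >= max(x_i),
# and alpha_hat comes from setting d(LL)/d(alpha) = 0 with beta = beta_hat.
beta_hat = x.max()
alpha_hat = n / np.sum(np.log(beta_hat / x))

# Numerical check: with b fixed at the sample maximum, maximize LL over alpha alone.
def neg_ll(a, b=beta_hat):
    return -(n * np.log(a) - n * a * np.log(b) + (a - 1.0) * np.sum(np.log(x)))

res = minimize_scalar(neg_ll, bounds=(1e-6, 50.0), method="bounded")

print(f"closed form: alpha_hat = {alpha_hat:.4f}, beta_hat = {beta_hat:.4f}")
print(f"numerical  : alpha_hat = {res.x:.4f}")

The two values of ##\hat\alpha## should agree to optimizer tolerance, and both estimators should approach the true parameters as ##n## grows.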
 
