Finding maximum likelihood estimator
  • #1
ptolema

Homework Statement



The independent random variables [itex]X_1, ..., X_n[/itex] have the common probability density function [itex]f(x|\alpha, \beta)=\frac{\alpha}{\beta^{\alpha}}x^{\alpha-1}[/itex] for [itex]0\leq x\leq \beta[/itex]. Find the maximum likelihood estimators of [itex]\alpha[/itex] and [itex]\beta[/itex].

Homework Equations



log-likelihood (LL) = n ln(α) − nα ln(β) + (α−1) ∑ ln(x_i)

The Attempt at a Solution


When I take the partial derivatives of the log-likelihood (LL) with respect to α and β and set them equal to zero, I get:
(1) ∂(LL)/∂α = n/α − n ln(β) + ∑ ln(x_i) = 0 and
(2) ∂(LL)/∂β = −nα/β = 0

I am unable to solve for α and β from this point, because I get α=0 from equation (2), but this clearly does not work when you substitute α=0 into equation (1). Can someone please help me figure out what I should be doing?
 
  • #2
So there might be some mistakes in the way you computed the log-likelihood (LL) function. The term multiplying log(β) should probably be reworked. Hint: log(β^y) = y log(β). But what is y? It is not αn.
 
  • #3
ptolema said:

Homework Statement



The independent random variables [itex]X_1, ..., X_n[/itex] have the common probability density function [itex]f(x|\alpha, \beta)=\frac{\alpha}{\beta^{\alpha}}x^{\alpha-1}[/itex] for [itex]0\leq x\leq \beta[/itex]. Find the maximum likelihood estimators of [itex]\alpha[/itex] and [itex]\beta[/itex].

Homework Equations



log-likelihood (LL) = n ln(α) − nα ln(β) + (α−1) ∑ ln(x_i)

The Attempt at a Solution


When I take the partial derivatives of the log-likelihood (LL) with respect to α and β and set them equal to zero, I get:
(1) ∂(LL)/∂α = n/α − n ln(β) + ∑ ln(x_i) = 0 and
(2) ∂(LL)/∂β = −nα/β = 0

I am unable to solve for α and β from this point, because I get α=0 from equation (2), but this clearly does not work when you substitute α=0 into equation (1). Can someone please help me figure out what I should be doing?

Your expression for LL is correct, but condition (2) is wrong. Your problem is
[tex] \max_{a,b} LL = n \ln(a) - n a \ln(b) + (a-1) \sum \ln(x_i) \\
\text{subject to } b \geq m \equiv \max(x_1, x_2, \ldots ,x_n)[/tex]
Here, I have written ##a,b## instead of ##\alpha, \beta##. The constraint on ##b## comes from your requirement ##0 \leq x_i \leq b \; \forall i##. When you have a bound constraint you cannot necessarily set the derivative to zero; in fact, what replaces (2) is:
[tex] \partial LL/ \partial b \leq 0, \text{ and either } \partial LL/ \partial b = 0 \text{ or } b = m [/tex]

For more on this type of condition, see, e.g.,
http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions


In the notation of the above link, you want to maximize a function ##f = LL##, subject to no equalities, and an inequality of the form ##g \equiv m - b \leq 0##. The conditions stated in the above link are that
[tex] \partial LL/ \partial a = \mu \, \partial g / \partial a \equiv 0 \\
\partial LL / \partial b = \mu \, \partial g / \partial b \equiv - \mu [/tex]
Here, ##\mu \geq 0## is a Lagrange multiplier associated with the inequality constraint, and the b-condition above reads as ##\partial LL / \partial b \leq 0##, as I already stated. Furthermore, the so-called "complementary slackness" condition is that either ##\mu = 0## or ##g = 0##, as already stated.

Note that since ##\partial LL / \partial b = -na/b##, the b-condition is automatically satisfied when ##a/b \geq 0##; and if ##a/b > 0## you cannot have ##\partial LL / \partial b = 0##, so complementary slackness forces ##b = m##.
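To make the bound-constraint logic concrete, here is a minimal numerical sketch (an illustration with made-up sample data, not part of the original reply): for any fixed ##a > 0##, ##LL(a,b)## is strictly decreasing on ##b \geq m##, so the constrained maximum lands at ##b = m##.
[code]
import numpy as np

# Made-up positive sample, purely for illustration.
x = np.array([0.9, 1.7, 2.4, 3.1, 3.8])
n = len(x)
m = x.max()                            # the bound: b must satisfy b >= m

def LL(a, b):
    """Log-likelihood n*ln(a) - n*a*ln(b) + (a - 1)*sum(ln x_i)."""
    return n * np.log(a) - n * a * np.log(b) + (a - 1) * np.log(x).sum()

a = 1.5                                # any fixed a > 0
bs = np.linspace(m, m + 5.0, 200)      # grid over the feasible region b >= m
vals = LL(a, bs)

# dLL/db = -n*a/b < 0, so LL decreases in b and the maximum sits at b = m.
assert np.all(np.diff(vals) < 0)
print(f"grid argmax: b = {bs[np.argmax(vals)]:.4f} (m = {m})")
[/code]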
 
  • #4
Ray, log((b^a)^N) = a^N*log(b) ≠ aNlog(b)?
 
  • #5
Mugged said:
Ray, log((b^a)^N) = a^N*log(b) ≠ aNlog(b)?

We have ## (b^a)^2 = b^a \cdot b^a = b^{2a},## etc.
 
  • #6
Ray Vickson said:
We have ## (b^a)^2 = b^a \cdot b^a = b^{2a},## etc.

Ah, OK, my bad. This problem is harder than I thought... KKT conditions turning up in a statistics problem. Thanks.
 
  • #7
Mugged said:
Ah, OK, my bad. This problem is harder than I thought... KKT conditions turning up in a statistics problem. Thanks.

It's not that complicated in this case. For ##a,b > 0## the function ##LL(a,b)## is strictly decreasing in ##b##, so for any ##a > 0## its maximum over ##b \geq m \,(m > 0)## lies at ##b = m##. You don't even need calculus to conclude this.

Now, for a fixed ##b = m## you have ##LL(a,m) = n \ln(a) - n a \ln(m) + (a-1) \sum \ln(x_i)##. This is a strictly concave function of ##a## for ##a > 0##, and it tends to ##-\infty## both as ##a \to 0^+## and as ##a \to \infty## (because ##\sum \ln(x_i) < n \ln(m)## unless every ##x_i## equals ##m##), so its maximum over ##a > 0## exists and is unique. To find it, you can use calculus: setting equation (1) to zero with ##\beta = m## gives ##\hat{a} = n / \left( n \ln(m) - \sum \ln(x_i) \right)##.
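Putting the pieces together, here is a minimal end-to-end sketch (my own illustration with made-up true parameters, not from the thread). Since the CDF of the given density is ##F(x) = (x/\beta)^\alpha## on ##[0, \beta]##, one can simulate via ##X = \beta U^{1/\alpha}## with ##U \sim \text{Uniform}(0,1)##, then evaluate the closed-form MLEs derived above:
[code]
import numpy as np

rng = np.random.default_rng(0)

# True parameters, made up for the demo.
alpha, beta = 2.5, 4.0
n = 10_000

# Inverse-CDF sampling: F(x) = (x/beta)**alpha  =>  X = beta * U**(1/alpha).
u = rng.uniform(size=n)
x = beta * u ** (1.0 / alpha)

# Closed-form MLEs from the thread: beta_hat = max(x_i), then plug into (1).
beta_hat = x.max()
alpha_hat = n / (n * np.log(beta_hat) - np.log(x).sum())

print(f"alpha_hat = {alpha_hat:.3f} (true {alpha})")
print(f"beta_hat  = {beta_hat:.3f} (true {beta})")
[/code]
Note that ##\hat\beta## approaches ##\beta## from below, since a sample maximum never exceeds the true upper endpoint.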
 

What is a maximum likelihood estimator?

A maximum likelihood estimator (MLE) estimates the unknown parameters of an assumed probability model by maximizing the likelihood function, which measures how probable the observed data are under each candidate set of parameter values. In simpler terms, it finds the parameter values that make the observed data most likely.
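A standard one-parameter illustration (not specific to this thread): for ##n## independent coin flips with ##k## heads, the likelihood of a head-probability ##p## is
[tex] L(p) = p^k (1-p)^{n-k}, \qquad \frac{d}{dp} \ln L(p) = \frac{k}{p} - \frac{n-k}{1-p} = 0 \;\Longrightarrow\; \hat{p} = \frac{k}{n}, [/tex]
so the MLE is simply the observed frequency of heads.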

How is a maximum likelihood estimator calculated?

To calculate a maximum likelihood estimator, you need to follow these steps (a worked sketch in code follows the list):

  1. Choose a probability distribution that you believe is the most appropriate for your data.
  2. Write down the likelihood function, which is the product of the probability (or probability density) of each observed data point under the chosen distribution.
  3. Take the natural logarithm of the likelihood function to simplify the calculations.
  4. Maximize the log-likelihood function by taking its derivative with respect to each parameter and setting it equal to 0, checking boundaries when a parameter is constrained (as with ##\beta## in the thread above).
  5. Solve the resulting equations to find the values of the parameters that maximize the likelihood function.
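As a sketch of these steps in code (my own example, assuming an exponential model with rate ##\lambda##, for which the closed-form answer ##\hat\lambda = 1/\bar{x}## is known):
[code]
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Step 1: assume an exponential model; simulate data with true rate 0.5.
x = rng.exponential(scale=2.0, size=500)
n = len(x)

# Steps 2-3: likelihood prod_i lam*exp(-lam*x_i), so the
# log-likelihood is n*ln(lam) - lam*sum(x_i); we minimize its negative.
def neg_log_likelihood(lam):
    return -(n * np.log(lam) - lam * x.sum())

# Steps 4-5: maximize numerically and compare to the closed form 1/mean.
res = minimize_scalar(neg_log_likelihood, bounds=(1e-9, 100.0), method="bounded")
print(f"numerical MLE: {res.x:.4f}, closed form 1/mean: {1.0 / x.mean():.4f}")
[/code]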

What are the assumptions of maximum likelihood estimation?

The assumptions of maximum likelihood estimation include:

  • The data is independent and identically distributed (iid).
  • The chosen probability distribution is the correct model for the data.
  • The likelihood uses densities for continuous data or probability mass functions for discrete data.
  • The data is not censored or truncated (in the basic formulation).
  • The data is not missing or incomplete.

What are the advantages of maximum likelihood estimation?

The advantages of maximum likelihood estimation include:

  • It is a widely used and well-established method for estimating parameters of a population.
  • It produces estimates that are consistent and asymptotically efficient (though they can be biased in small samples).
  • It allows for the comparison of different models to determine the best fit for the data.
  • It can handle complex data and multiple parameters.
  • It has a solid theoretical foundation and can be used for hypothesis testing.

What are the limitations of maximum likelihood estimation?

The limitations of maximum likelihood estimation include:

  • It relies on the correct choice of probability distribution, which may not always be known.
  • It assumes that the data is iid and fully observed, which may not always be the case in real-world scenarios.
  • It can be computationally intensive, especially with large datasets or complex models.
  • It may produce biased estimates if the assumptions are not met.
  • A point estimate by itself does not convey the uncertainty in the estimate; standard errors or confidence intervals must be obtained separately (e.g., from the Fisher information).
