Maximum likelihood estimator

In summary, we are given a probability density function f_y(y;\theta) = \frac{2y}{\theta^2} for 0 < y < \theta and are asked to find the maximum likelihood estimator for theta. Because the likelihood is strictly decreasing in theta over the admissible range \theta \geq \max_i y_i, the MLE is the sample maximum, \hat{\theta}_{MLE} = \max_i y_i. We are also asked to find the estimator using the method of moments; setting E[y] = \frac{2}{3}\theta equal to the sample mean yields \hat{\theta}_{MM} = \frac{3}{2}\bar{y}. A second density, f_y(y;\theta) = \frac{y^3 e^{-y/\theta}}{6\theta^4}, is also considered, and its MLE is found by solving the score equation.
  • #1
semidevil
Maximum likelihood estimator...

ok, I'm still a bit lost... so tell me if this is right:

[tex] f_y(y;\theta) = \frac{2y}{\theta^2}, \quad 0 < y < \theta [/tex]

find the maximum likelihood estimator for theta.

[tex] L(\theta) = 2yn\theta^{-2 \sum_1^n y_i} [/tex]

is this even right to begin with?

then take the natural log

[tex] \ln(2yn) - 2\sum_1^n y_i \ln\theta [/tex]

take derivative

[tex] \frac{1}{2yn} - \frac{2\sum_1^n y_i}{\theta} [/tex]


now how do I solve this in terms of theta? and after that, what do I do next?

this just doesn't look right

and I also need to find it using the method of moments, but I get [tex]\frac {y^4}{2\theta} [/tex] after the integral...


and this one too...this looks too messy:

[tex] f_y(y;\theta) = \frac{y^3 e^{-y/\theta}}{6\theta^4} [/tex]
 
  • #2
For the second density, setting the derivative of the log-likelihood with respect to theta equal to zero gives the score equation \frac{\sum_1^n y_i}{\theta^2} - \frac{4n}{\theta} = 0. Can someone help me out?
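For the second density f(y;\theta) = y^3 e^{-y/\theta} / (6\theta^4) (a gamma density with shape 4 and scale \theta), the score equation \sum_1^n y_i/\theta^2 - 4n/\theta = 0 solves to \hat{\theta} = \bar{y}/4. A short numpy sketch (simulated data with an assumed true \theta = 2, not values from the thread) checks this closed form against a brute-force grid maximization of the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
# draws from y^3 e^{-y/theta} / (6 theta^4), i.e. a shape-4 gamma with scale theta
y = rng.gamma(shape=4, scale=theta_true, size=5000)

def loglik(theta):
    # sum of log f(y_i; theta) for the shape-4 gamma density above
    return np.sum(3 * np.log(y) - y / theta - np.log(6) - 4 * np.log(theta))

theta_hat = y.mean() / 4  # closed-form solution of the score equation
grid = np.linspace(0.5, 5.0, 2000)
theta_grid = grid[np.argmax([loglik(t) for t in grid])]
print(theta_hat, theta_grid)  # the two agree up to the grid resolution
```

The grid search is only a sanity check; in practice the closed form is all that is needed here.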
 
  • #3
First, let's clarify the notation a bit. The function is actually f(y;θ), where y is the random variable and θ is the parameter we are trying to estimate. So the first step is to write out the likelihood function:

L(θ) = \prod_{i=1}^n f(y_i;θ) = \prod_{i=1}^n \frac{2y_i}{θ^2} = \frac{2^n}{θ^{2n}} \prod_{i=1}^n y_i

Next, we take the natural log of the likelihood function:

ln L(θ) = nln2 - 2nlnθ + \sum_{i=1}^n ln y_i

To find the maximum likelihood estimator, look at the derivative of ln L(θ) with respect to θ (the \sum ln y_i term does not involve θ, so it drops out):

\frac{d}{dθ} ln L(θ) = -\frac{2n}{θ}

This is negative for every θ > 0, so the likelihood has no interior critical point: it is strictly decreasing in θ. The likelihood is therefore maximized by making θ as small as the support allows. Since the density requires θ > y_i for every observation, the smallest admissible value is the largest observation:

\hat{θ}_{MLE} = \max_i y_i = y_{(n)}

So the maximum likelihood estimator for θ is the sample maximum, not a solution of the score equation. This is a standard feature of densities whose support depends on the parameter.

Now for the method of moments, we set the first moment of the distribution (E[y]) equal to the first moment of the sample (\bar{y}) and solve for θ:

E[y] = \int_0^θ y \frac{2y}{θ^2} dy = \frac{2}{θ^2} \int_0^θ y^2 dy = \frac{2}{θ^2} \frac{θ^3}{3} = \frac{2}{3}θ

Setting this equal to the sample mean, we get:

\frac{2}{3}θ = \bar{y}

Solving for θ, we get:

\hat{θ}_{MoM} = \frac{3}{2} \bar{y}

So the method of moments estimator for θ is 1.5 times the sample mean.
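A quick simulation (numpy only, with a hypothetical true θ = 3) illustrates both estimators. Draws from f(y) = 2y/θ² use inverse-CDF sampling: the CDF is F(y) = y²/θ² on (0, θ), so y = θ√U for U uniform on (0, 1):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 3.0
n = 2000
# inverse-CDF sampling: F(y) = y^2 / theta^2 on (0, theta), so y = theta * sqrt(U)
y = theta_true * np.sqrt(rng.uniform(size=n))

theta_mle = y.max()         # likelihood decreases in theta: take the smallest feasible value
theta_mom = 1.5 * y.mean()  # from E[y] = (2/3) theta
print(theta_mle, theta_mom)
```

Both estimates land close to 3; note the MLE is always at or below the true θ, since the sample maximum cannot exceed it.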
 

What is a maximum likelihood estimator?

A maximum likelihood estimator (MLE) is a statistical method used to estimate the parameters of a probability distribution by finding the values that maximize the likelihood of the observed data. In other words, it is a technique for finding the most likely values for the unknown parameters of a model based on the available data.

How is a maximum likelihood estimator calculated?

The MLE is calculated by first defining a likelihood function, which is a function that measures how likely it is for the observed data to occur given a set of parameter values. Then, the MLE is found by finding the parameter values that maximize this likelihood function. This can be done using calculus or numerical optimization methods.
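As a sketch of the numerical route (numpy only, using an exponential sample with an assumed true rate of 0.5, chosen purely for illustration), one can maximize the log-likelihood by a simple ternary search over a bracketing interval and compare the result with the known closed form 1/ȳ:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.exponential(scale=2.0, size=1000)  # exponential sample, true rate = 0.5

def loglik(lam):
    # log-likelihood of an i.i.d. exponential sample with rate lam
    return len(y) * np.log(lam) - lam * y.sum()

# ternary search: valid because the exponential log-likelihood is concave in lam
lo, hi = 1e-6, 10.0
for _ in range(100):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if loglik(m1) < loglik(m2):
        lo = m1
    else:
        hi = m2
lam_hat = (lo + hi) / 2
print(lam_hat, 1 / y.mean())  # numerical optimum matches the closed form
```

In real work one would use a library optimizer instead of a hand-rolled search, but the principle is the same: maximize the log-likelihood over the parameter.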

What is the difference between a maximum likelihood estimator and a method of moments estimator?

A method of moments estimator calculates the parameter values by equating the theoretical moments of a distribution to the sample moments, while a maximum likelihood estimator uses the likelihood function to find the values that maximize the probability of the observed data. Under regularity conditions the MLE is asymptotically efficient, so it tends to outperform method of moments estimators as the sample size grows; in small samples neither is guaranteed to dominate.
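For the density in this thread the gap is dramatic: the MLE (sample maximum) has error of order 1/n, while the method of moments estimator has error of order 1/√n. A small Monte Carlo comparison (numpy only, hypothetical true θ = 2, n = 200 per replication) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true, n, reps = 2.0, 200, 2000
mle_err, mom_err = [], []
for _ in range(reps):
    # inverse-CDF draws from f(y) = 2y / theta^2 on (0, theta)
    y = theta_true * np.sqrt(rng.uniform(size=n))
    mle_err.append(y.max() - theta_true)          # MLE: sample maximum
    mom_err.append(1.5 * y.mean() - theta_true)   # MoM: (3/2) * sample mean

mse_mle = np.mean(np.square(mle_err))
mse_mom = np.mean(np.square(mom_err))
print(mse_mle, mse_mom)  # MLE's mean squared error is far smaller here
```

This is a non-regular example (the support depends on θ), which is exactly why the MLE converges faster than the usual √n rate.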

What are the assumptions of maximum likelihood estimation?

The main assumptions of maximum likelihood estimation are that the data are independent and identically distributed (i.i.d.) and that the probability model being fitted is correctly specified. The standard large-sample results (consistency, asymptotic normality of the estimator) additionally require regularity conditions, such as a log-likelihood that is smooth in the parameter and a support that does not depend on it. MLE does not require the data themselves to be normally distributed; it applies to discrete and continuous models alike.

What are the advantages of using maximum likelihood estimation?

One of the main advantages of using MLE is that it is a consistent estimator, meaning that as the sample size increases, the estimated values converge to the true values. Under regularity conditions it is also asymptotically efficient, attaining the Cramér-Rao lower bound in the limit. It is a widely used and well-studied method, making it a reliable tool for statistical analysis, though it can be sensitive to model misspecification.
