Statistics: Method of Moments/Maximum Likelihood Estimation

SUMMARY

The discussion focuses on estimating the parameter 'theta' in the exponential distribution defined by f(x;theta)=Exp(-x+theta) using Maximum Likelihood Estimation (MLE) and the Method of Moments (MOM). The MLE approach starts from the log-likelihood Log(L) and its derivative, but the derivative never vanishes, so the maximum must come from the support constraint on theta. The MOM approach initially gave theta = xbar + 1, which was corrected to theta = xbar - 1. Participants clarified that MLE and MOM can yield different estimates, and emphasized using all n observations (x_1, x_2, ..., x_n) rather than a single x when estimating the parameter.

PREREQUISITES
  • Understanding of Maximum Likelihood Estimation (MLE) principles
  • Familiarity with the Method of Moments (MOM) technique
  • Knowledge of exponential distribution properties
  • Ability to perform calculus operations, including derivatives and integrals
NEXT STEPS
  • Study the derivation of the log-likelihood function for MLE in exponential distributions
  • Explore the implications of using multiple observations in parameter estimation
  • Investigate the differences between MLE and MOM in various statistical distributions
  • Learn about constraints in parameter estimation and their effects on results
USEFUL FOR

Statisticians, data analysts, and students studying statistical estimation methods, particularly those working with exponential distributions and parameter estimation techniques.

EnginStudent

Homework Statement


f(x;theta)=Exp(-x+theta)
Find parameter estimates for 'theta' using the Maximum Likelihood Estimator and the Method of Moments.

Homework Equations


Log f(x; theta) = Log(Exp(-x + theta)) -- For MLE
Integral from theta to infinity of x*Exp(-x + theta) dx = xbar -- For Method of Moments

The Attempt at a Solution


I evaluate the log-likelihood function to get Log(L)= -x + theta and then take the derivative of the log-likelihood function with respect to theta. The problem here arises when I take this derivative and set it equal to zero since it gives me 0 = 1 with none of my parameters left in the equation.

In performing the method of moments analysis, I get that the estimate for theta is equal to xbar + 1. I don't know if this is correct, but if someone could help me see what I'm doing wrong with either of these parts it would be most appreciated.
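
In symbols, the dead end described above (a restatement, assuming n i.i.d. observations x_1, ..., x_n):

\log L(\theta) = \sum_{i=1}^n (-x_i + \theta) = n\theta - \sum_{i=1}^n x_i, \qquad \frac{d}{d\theta} \log L(\theta) = n,

which is a nonzero constant, so the log-likelihood has no stationary point and the maximum has to come from the support constraint on theta rather than from calculus.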
 
Welcome to PF.

EnginStudent said:
f(x;theta)=Exp(-x+theta)

Is there a restriction on x and theta, such as 0<theta<x?

I evaluate the log-likelihood function to get Log(L)= -x + theta and then take the derivative of the log-likelihood function with respect to theta. The problem here arises when I take this derivative and set it equal to zero since it gives me 0 = 1 with none of my parameters left in the equation.

If the observations are x_1, ... , x_n, then the likelihood function is L(x_1,...,x_n;theta)=product of f(x_1;theta), ... f(x_n;theta).

Now to maximize L(theta), the usual way is to consider K(theta)=log L(theta) and take a derivative, etc. However, in this problem, you have to maximize L a different way. Think about the graph of L(theta) or K(theta). Use the conditions 0<theta<x_i and think about where the function would be maximized.
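
To make that graphical reasoning concrete, here is a minimal numerical sketch in Python (the sample values are made up purely for illustration):

import numpy as np

# Hypothetical sample, purely for illustration
x = np.array([2.3, 1.7, 3.1, 2.0])

def likelihood(theta, x):
    # f(x_i; theta) = exp(-(x_i - theta)) requires x_i >= theta,
    # so the product of densities is zero once theta exceeds min(x_i)
    if theta > x.min():
        return 0.0
    return np.exp(-np.sum(x - theta))

for theta in [0.5, 1.0, 1.5, 1.7, 1.75]:
    print(f"theta={theta:.2f}  L={likelihood(theta, x):.6f}")

# L(theta) grows steadily as theta increases toward min(x_i) = 1.7,
# then drops to zero, so the maximum sits at the constraint boundary.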

In performing the method of moments analysis, I get that the estimate for theta is equal to xbar + 1.

Do you want to show your work on this?
 
So for the MLE, the conditions are that x >= theta. Visualizing what this graph would look like: if x = theta, then the function equals 1; if theta = x - 1, you get e^-1, which is less than when x = theta, and this continues. So would it be correct to say that L is maximized when theta equals x?

For the Method of Moments, I use the integral from theta to infinity of x*Exp(-x+theta) dx. Upon evaluating the integral I get theta + 1. I set this equal to the first sample moment, which I am assuming would just be the average (or xbar). From this I solved for theta to get theta = xbar + 1. I am not sure if this is correct, though, since different distributions have different first moments.
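
For reference, the first-moment integral can be checked with the substitution u = x - theta (a worked sketch):

E[X] = \int_\theta^\infty x\, e^{-(x-\theta)}\, dx = \int_0^\infty (u+\theta)\, e^{-u}\, du = \int_0^\infty u\, e^{-u}\, du + \theta \int_0^\infty e^{-u}\, du = 1 + \theta.

So the first moment is indeed \theta + 1.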
 
MOM is almost correct. Starting with E[X]=\theta + 1, you then solved for theta incorrectly (minus sign error). Then, as you said, you replace E[X] with \bar x to obtain the estimator of \theta.

For MLE, you are beginning to think in the right spirit. However, is there a reason you are only using one x? You should imagine n observations being given, x_1, x_2, \dots, x_n and you will estimate \theta in terms of those.
 
Fixing the MOM, I now have theta = xbar - 1; thank you for catching my error.

I am still a little confused about the MLE. Do you mean that x should be evaluated as (x1 + ... + xn)/n (i.e., xbar)? This gives theta = xbar. Is it a problem that the MLE and MOM give different estimates?
 
EnginStudent said:
I am still a little confused about the MLE. Do you mean that x should be evaluated as (x1 + ... + xn)/n (i.e., xbar)? This gives theta = xbar.

Sorry, I don't understand those comments at all.

The likelihood function is L(\theta)=f(x_1;\theta)f(x_2;\theta)\dots f(x_n;\theta)=\prod_{i=1}^n f(x_i;\theta)

You have f(x;\theta)=e^{-x+\theta} for x>\theta, so substitute f(x_i;\theta)=e^{-x_i+\theta} (for x_i>\theta) into the formula for L.

Simplify L. By "studying" L (no derivatives), you have to decide what theta would maximize L.

Is it a problem that the MLE and MOM give different estimates?

Not a problem.
 
In performing the operations suggested in the last post and simplifying to find L, I got L = Exp(theta - (xbar*n)) by using properties of products and exponents (i.e., the product of Exp(x_i) from i = 1 to n is the same as Exp(sum of x_i from i = 1 to n)). In order to maximize this function, theta = n*xbar, or am I missing something with the constraints? (n = number of observations.)
 
First note e^{-x_1+\theta}e^{-x_2+\theta}=e^{-x_1-x_2}e^{2\theta}, which suggests you partially simplified correctly (n*xbar is good) but partially not (the theta term looks wrong).

Second, you are right that you must look at the constraints. You can't take theta=n*xbar, because that might violate theta<=x_i.

You are getting close. Remember, theta <= every x_i.
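
Following the hints above to their conclusion (a sketch, using the constraint theta <= every x_i):

L(\theta) = \prod_{i=1}^n e^{-x_i+\theta} = e^{n\theta - n\bar x} for \theta \le \min_i x_i, and L(\theta) = 0 otherwise.

This is increasing in \theta, so the MLE is \hat\theta = \min_i x_i. A minimal simulation check of both estimators in Python (all numbers illustrative):

import numpy as np

rng = np.random.default_rng(seed=42)
theta_true = 2.0
n = 1000

# Shifted exponential: X = theta + E with E ~ Exp(1),
# i.e. f(x; theta) = exp(-(x - theta)) for x >= theta
x = theta_true + rng.exponential(scale=1.0, size=n)

# MLE: L(theta) is increasing in theta subject to theta <= every x_i,
# so it is maximized at the smallest observation
theta_mle = x.min()

# MOM: E[X] = theta + 1, so theta = E[X] - 1, estimated by xbar - 1
theta_mom = x.mean() - 1.0

print(f"MLE estimate: {theta_mle:.4f}")  # a bit above theta_true
print(f"MOM estimate: {theta_mom:.4f}")  # close to theta_true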
 
