# Statistics: Method of Moments/Maximum Likelihood Estimation

1. Jul 18, 2009

### EnginStudent

1. The problem statement, all variables and given/known data
f(x;theta)=Exp(-x+theta)
Find parameter estimates for variable 'theta' using maximum likelihood Estimator and Method of Moments.

2. Relevant equations
log f(x; theta) = log(Exp(-x + theta)) -- for MLE
Integral from theta to infinity of x*Exp(-x + theta) dx = xbar -- for Method of Moments

3. The attempt at a solution
I evaluate the log-likelihood function to get log L = -x + theta, then take the derivative of the log-likelihood with respect to theta. The problem arises when I set this derivative equal to zero: it gives 0 = 1, with none of my parameters left in the equation.

In performing the method of moments analysis, I get that the estimate for theta equals xbar + 1. I don't know if this is correct, but if someone could help me see what I'm doing wrong with either of these parts, it would be most appreciated.

2. Jul 20, 2009

### Billy Bob

Welcome to PF.

Is there a restriction on x and theta, such as 0<theta<x?

If the observations are x_1, ... , x_n, then the likelihood function is L(x_1,...,x_n;theta)=product of f(x_1;theta), ... f(x_n;theta).

Now to maximize L(theta), the usual way is to consider K(theta)=log L(theta) and take a derivative, etc. However, in this problem, you have to maximize L a different way. Think about the graph of L(theta) or K(theta). Use the conditions 0<theta<x_i and think about where the function would be maximized.

Do you want to show your work on this?

3. Jul 20, 2009

### EnginStudent

So for the MLE, the condition is that x >= theta. Visualizing what this graph would look like: if theta = x, the function equals 1; if theta = x - 1, you get e^-1, which is less than when theta = x, and so on. So would it be correct to say that the likelihood is maximized when theta equals x?

For the Method of Moments, I use the integral from theta to infinity of x*Exp(-x+theta) dx. Upon evaluating the integral I get theta + 1. I set this equal to the expected value for the first moment, which I am assuming would just be the average (or xbar). From this I solved for theta to get theta = xbar + 1. I am not sure if this is correct, though, since different distributions have different first moments.

4. Jul 20, 2009

### Billy Bob

MOM is almost correct. Starting with $$E[X]=\theta + 1$$, you then solved for theta incorrectly (minus sign error). Then, as you said, you replace $$E[X]$$ with $$\bar x$$ to obtain the estimator of $$\theta$$.
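For anyone reading along, the corrected method-of-moments estimator can be checked with a quick simulation (a sketch with made-up values, not part of the thread): draw from the shifted exponential by adding theta to a standard Exp(1) variable, then compare xbar - 1 to the true theta.

```python
import random

random.seed(0)
theta = 2.5   # true shift parameter, chosen only for illustration
n = 100_000

# If E ~ Exp(1), then X = theta + E has density
# f(x; theta) = exp(-(x - theta)) for x >= theta.
sample = [theta + random.expovariate(1.0) for _ in range(n)]

xbar = sum(sample) / n
theta_mom = xbar - 1   # method of moments: E[X] = theta + 1, so theta = E[X] - 1

print(theta_mom)  # should land close to 2.5
```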

For MLE, you are beginning to think in the right spirit. However, is there a reason you are only using one x? You should imagine n observations being given, $$x_1, x_2, \dots, x_n$$ and you will estimate $$\theta$$ in terms of those.

5. Jul 20, 2009

### EnginStudent

Fixing the MOM i now have theta = xbar - 1, thank you for catching my error.

I am still a little confused on the MLE. Do you mean that x should be evaluated as (x1 + ... + xn)/n, i.e. xbar? This gives theta = xbar. Is it a problem that the MLE and MOM give different estimates?

6. Jul 20, 2009

### Billy Bob

Sorry, I don't understand those comments at all.

The likelihood function is $$L(\theta)=f(x_1;\theta)f(x_2;\theta)\cdots f(x_n;\theta)=\prod_{i=1}^n f(x_i;\theta)$$

You have $$f(x;\theta)=e^{-x+\theta}$$ for $$x>\theta$$ so substitute $$f(x_i;\theta)=e^{-x_i+\theta}$$ (for $$x_i>\theta$$) into the formula for L.

Simplify L. By "studying" L (no derivatives), you have to decide what theta would maximize L.

Not a problem.

7. Jul 20, 2009

### EnginStudent

Performing the operations suggested in the last post and simplifying, I got L = Exp(theta - n*xbar) by using properties of products and exponents (i.e., the product of Exp(x_i) from i=1 to n is the same as Exp(sum of x_i from i=1 to n)). To maximize this function, theta = n*xbar, or am I missing something with the constraints? (n = number of observations.)

8. Jul 20, 2009

### Billy Bob

First note $$e^{-x_1+\theta}e^{-x_2+\theta}=e^{-x_1-x_2}e^{2\theta}$$ which suggests you partially simplified correctly (n*xbar is good), but partially not (theta looks wrong).
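The simplification being pointed at here can be verified numerically (a sketch with hypothetical observations, not from the thread): multiplying the individual densities term by term agrees with the closed form exp(n*theta - sum(x_i)).

```python
import math

xs = [1.4, 2.0, 1.7]   # hypothetical observations
theta = 1.1            # any theta <= min(xs), so all densities are positive

# Multiply the densities one at a time: f(x_i; theta) = e^{-x_i + theta}
prod = 1.0
for x in xs:
    prod *= math.exp(-x + theta)

# Collecting exponents: sum of (-x_i + theta) = n*theta - sum(x_i)
closed_form = math.exp(len(xs) * theta - sum(xs))

print(prod, closed_form)  # the two agree up to float rounding
```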

Second, you are right that you must look at the constraints. You can't take theta=n*xbar, because that might violate theta<=x_i.

You are getting close. Remember, theta <= every x_i.
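A numerical sketch of this last hint (made-up data, not part of the thread): since L(theta) = e^{n*theta - sum(x_i)} is increasing in theta, but the density is zero once theta exceeds any observation, scanning L over a grid of theta values puts the maximum right at the boundary of the feasible region.

```python
import math

xs = [3.2, 2.7, 4.1, 2.9, 3.5]   # hypothetical sample
n = len(xs)
s = sum(xs)

def likelihood(theta):
    """L(theta) = prod of exp(-(x_i - theta)) = exp(n*theta - sum(x_i)),
    valid only while theta <= every x_i; the density is 0 otherwise."""
    if theta > min(xs):
        return 0.0
    return math.exp(n * theta - s)

# L increases in theta on the feasible region theta <= min(xs),
# so a grid scan finds the maximizer at the largest feasible theta.
grid = [i / 100 for i in range(0, 301)]   # theta from 0.00 to 3.00
best = max(grid, key=likelihood)
print(best)
```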