Statistics: Method of Moments/Maximum Likelihood Estimation

In summary: the thread finds parameter estimates for theta in the density f(x; theta) = Exp(-x + theta), x >= theta, using the maximum likelihood estimator and the method of moments. The log-likelihood function and the first-moment integral are used, but differentiating the log-likelihood produces no valid stationary point, so the support constraints must be taken into account and the question becomes: what is the largest possible theta? The method of moments, after a sign correction, gives an estimate of theta equal to the average of the observations minus one.
  • #1
EnginStudent

Homework Statement


f(x;theta)=Exp(-x+theta)
Find parameter estimates for variable 'theta' using maximum likelihood Estimator and Method of Moments.

Homework Equations


log f(x; theta) = log(Exp(-x + theta)) -- for MLE
Integral from theta to infinity of x*Exp(-x + theta) dx = xbar -- for Method of Moments

The Attempt at a Solution


I evaluate the log-likelihood function to get Log(L)= -x + theta and then take the derivative of the log-likelihood function with respect to theta. The problem here arises when I take this derivative and set it equal to zero since it gives me 0 = 1 with none of my parameters left in the equation.

In performing the method of moments analysis I get that the estimate for theta equals xbar + 1. I don't know if this is correct, but if someone could help me see what I am doing wrong with either of these parts it would be most appreciated.
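To see concretely why the derivative route stalls, here is a minimal sympy sketch of the single-observation case:

[code]
import sympy as sp

x, theta = sp.symbols('x theta')

# log f(x; theta) = log(exp(-x + theta)) = -x + theta
logL = -x + theta

# The derivative with respect to theta is the constant 1,
# so "set the derivative to zero" has no solution.
print(sp.diff(logL, theta))  # prints: 1
[/code]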
 
  • #2
Welcome to PF.

EnginStudent said:
f(x;theta)=Exp(-x+theta)

Is there a restriction on x and theta, such as 0<theta<x?

I evaluate the log-likelihood function to get Log(L)= -x + theta and then take the derivative of the log-likelihood function with respect to theta. The problem here arises when I take this derivative and set it equal to zero since it gives me 0 = 1 with none of my parameters left in the equation.

If the observations are x_1, ... , x_n, then the likelihood function is L(x_1,...,x_n;theta)=product of f(x_1;theta), ... f(x_n;theta).

Now to maximize L(theta), the usual way is to consider K(theta)=log L(theta) and take a derivative, etc. However, in this problem, you have to maximize L a different way. Think about the graph of L(theta) or K(theta). Use the conditions 0<theta<x_i and think about where the function would be maximized.
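For intuition, here is a short numerical sketch (the sample values are hypothetical) that evaluates L(theta) on a grid while honoring the support constraint:

[code]
import numpy as np

x = np.array([2.3, 1.7, 3.1, 2.0, 1.9])  # hypothetical observations

def likelihood(theta, x):
    # Each factor f(x_i; theta) = exp(-x_i + theta) is only valid
    # when theta <= x_i; otherwise that density, and hence L, is 0.
    if theta > x.min():
        return 0.0
    return float(np.exp(np.sum(-x + theta)))

thetas = np.linspace(0.0, x.max(), 400)
L = np.array([likelihood(t, x) for t in thetas])

# The grid maximizer sits at the edge of the feasible set, min(x_i).
print("grid maximizer:", thetas[L.argmax()])
[/code]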

In performing the method of moments analysis I get that the estimate for theta equals xbar + 1.

Do you want to show your work on this?
 
  • #3
So for the MLE, the conditions are that x >= theta. Visualizing what this graph would look like: if theta = x, the function equals 1; if theta = x - 1, you get e^-1, which is less than the value at theta = x; and this continues. So would it be correct to say that L is maximized when theta equals x?

For the Method of Moments, I use the integral from theta to infinity of x*Exp(-x + theta) dx. Upon evaluating the integral I get theta + 1. I set this equal to the expected value for the first moment, which I am assuming would just be the average (xbar). From this I just solved for theta to get theta = xbar + 1. I am not sure if this is correct, though, since different distributions have different first moments.
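A quick symbolic check of that integral (a sympy sketch, with the support x >= theta built into the limits):

[code]
import sympy as sp

x = sp.symbols('x', positive=True)
theta = sp.symbols('theta', positive=True)

# First moment: integral from theta to infinity of x * exp(-x + theta) dx
EX = sp.integrate(x * sp.exp(-x + theta), (x, theta, sp.oo))
print(sp.simplify(EX))  # theta + 1
[/code]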
 
  • #4
MOM is almost correct. Starting with [tex]E[X]=\theta + 1[/tex], you then solved for theta incorrectly (minus sign error). Then, as you said, you replace [tex]E[X][/tex] with [tex]\bar x[/tex] to obtain the estimator of [tex]\theta[/tex].

For MLE, you are beginning to think in the right spirit. However, is there a reason you are only using one x? You should imagine n observations being given, [tex] x_1, x_2, \dots, x_n[/tex] and you will estimate [tex]\theta[/tex] in terms of those.
 
  • #5
Fixing the MOM, I now have theta = xbar - 1; thank you for catching my error.

I am still a little confused about the MLE. Do you mean that x should be evaluated as (x_1 + ... + x_n)/n, i.e. xbar? This gives theta = xbar. Is it a problem that the MLE and MOM give different estimates?
 
  • #6
EnginStudent said:
I am still a little confused about the MLE. Do you mean that x should be evaluated as (x_1 + ... + x_n)/n, i.e. xbar? This gives theta = xbar.

Sorry, I don't understand those comments at all.

The likelihood function is [tex]L(\theta)=f(x_1;\theta)f(x_2;\theta)\cdots f(x_n;\theta)=\prod_{i=1}^n f(x_i;\theta)[/tex]

You have [tex]f(x;\theta)=e^{-x+\theta}[/tex] for [tex]x>\theta[/tex] so substitute [tex]f(x_i;\theta)=e^{-x_i+\theta}[/tex] (for [tex]x_i>\theta[/tex]) into the formula for L.

Simplify L. By "studying" L (no derivatives), you have to decide what theta would maximize L.

Is it a problem that the MLE and MOM give different estimates?

Not a problem.
 
  • #7
Performing the operations suggested in the last post and simplifying to find L, I got L = Exp(theta - xbar*n) by using properties of products and exponents (i.e., the product of Exp(x_i) for i = 1 to n is the same as Exp(sum of x_i for i = 1 to n)). To maximize this function, theta = n*xbar, or am I missing something with the constraints? (n = number of observations.)
 
  • #8
First note [tex]e^{-x_1+\theta}e^{-x_2+\theta}=e^{-x_1-x_2}e^{2\theta}[/tex] which suggests you partially simplified correctly (n*xbar is good), but partially not (theta looks wrong).

Second, you are right that you must look at the constraints. You can't take theta=n*xbar, because that might violate theta<=x_i.

You are getting close. Remember, theta <= every x_i.
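With the corrected exponent, a quick numerical check of where the constrained likelihood peaks (hypothetical data again):

[code]
import numpy as np

x = np.array([2.3, 1.7, 3.1, 2.0, 1.9])  # hypothetical sample
n = len(x)

# On the feasible set theta <= min(x_i), the simplified likelihood is
# L(theta) = exp(n*theta - sum(x_i)), strictly increasing in theta.
thetas = np.linspace(0.0, x.min(), 200)
L = np.exp(n * thetas - x.sum())

# Increasing on the feasible set, so the maximum is at its right edge.
print(thetas[L.argmax()], "== min(x_i) ==", x.min())
[/code]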
 

1. What is the Method of Moments and how does it work?

The Method of Moments is a statistical technique used to estimate the parameters of a probability distribution. It works by equating the theoretical moments of a distribution to the sample moments, and solving for the unknown parameters.
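For example, for an exponential distribution with rate lambda, the first theoretical moment is E[X] = 1/lambda; equating this to the sample mean xbar and solving gives the estimate lambda = 1/xbar.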

2. What is Maximum Likelihood Estimation (MLE) and how is it different from the Method of Moments?

Maximum Likelihood Estimation is another statistical technique used to estimate the parameters of a probability distribution. Unlike the Method of Moments, MLE uses the likelihood function to find the parameter values that maximize the probability of observing the data. This typically makes MLE more statistically efficient, in the sense of lower asymptotic variance, and it generally produces more accurate estimates in large samples.
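For the same exponential example, the log-likelihood of n independent observations is n*log(lambda) - lambda*(x_1 + ... + x_n); setting its derivative with respect to lambda to zero again gives lambda = 1/xbar, so in that case the two methods happen to agree.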

3. When should I use the Method of Moments and when should I use Maximum Likelihood Estimation?

The choice between the Method of Moments and Maximum Likelihood Estimation depends on the specific problem at hand. MLE is generally preferred when the full likelihood can be written down and maximized, since it is asymptotically efficient; the Method of Moments is often easier to compute and is sometimes used to provide a starting value for a numerical MLE. For complicated distributions, the moment equations may also be simpler to solve than the likelihood equations.

4. What are the assumptions underlying the Method of Moments and Maximum Likelihood Estimation?

Both methods assume the data come from a specified parametric family of distributions, and both are typically applied to samples of independent, identically distributed observations. MLE additionally requires the full likelihood (the joint density of the sample) to be specified, whereas the Method of Moments only requires the relevant theoretical moments.

5. Can I use both the Method of Moments and Maximum Likelihood Estimation for the same problem?

Yes, it is possible to use both techniques for the same problem and compare the results. However, it is important to note that the estimates may not always be the same and the interpretation of the results may also differ. It is recommended to try both methods and choose the one that is most appropriate for the specific problem at hand.
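To make that concrete for the distribution in this thread, here is a short simulation sketch (sample size, seed, and the true theta are arbitrary choices); it takes the MLE to be min(x_i), which is where the constraint argument in posts #2-#8 leads, and the method-of-moments estimate to be xbar - 1:

[code]
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed
theta_true = 2.0                # arbitrary true shift
n = 50

# f(x; theta) = exp(-(x - theta)) for x >= theta is a standard
# exponential shifted right by theta, so we can sample it directly.
x = theta_true + rng.exponential(scale=1.0, size=n)

mle = x.min()         # maximum likelihood estimate
mom = x.mean() - 1.0  # method-of-moments estimate

print(f"true theta = {theta_true}, MLE = {mle:.3f}, MoM = {mom:.3f}")
[/code]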
