What is the Confidence Interval Formula for Maximum Likelihood Estimation?

In summary, the thread works through deriving the maximum likelihood estimator for a Bernoulli-type model with success probability [tex]\frac{\theta}{2}[/tex] and then uses it to construct a 95% confidence interval for [tex]\theta[/tex].
  • #1
superwolf
[tex]
L(x_1,x_2,...,x_n;\theta)=\Pi _{i=1}^n \left(\frac{\theta}{2}\right)^{x_i} \left(1-\frac{\theta}{2}\right)^{1-x_i} = \left(\frac{\theta}{2}\right)^{\Sigma^n_{i=1}x_i}\left(1-\frac{\theta}{2}\right)^{n-\Sigma^n_{i=1}x_i}
[/tex]

Correct so far if [tex]f(x) = (\frac{\theta}{2})^x (1-\frac{\theta}{2})^{1-x} [/tex]?

[tex]
\ln L(x_1,x_2,...,x_n;\theta) = \Sigma^n_{i=1} x_i \ln\left(\frac{\theta}{2}\right) + \left(n-\Sigma^n_{i=1}x_i\right) \ln\left(\frac{1}{2 - \theta}\right)
[/tex]

[tex]
\frac{d \ln L}{d \theta}(x_1,x_2,...,x_n;\theta) = \Sigma^n_{i=1}x_i \cdot \frac{1}{\theta} - \left(n-\Sigma^n_{i=1}x_i\right) \frac{1}{2-\theta}
[/tex]
 
  • #2
Mostly right, apart from some typos. Since [tex]1-\frac{\theta}{2}=\frac{2-\theta}{2}[/tex], the last term of your log-likelihood

[tex]
\ln L(x_1,x_2,...,x_n;\theta) = \Sigma^n_{i=1} x_i \ln\left(\frac{\theta}{2}\right) + \left(n-\Sigma^n_{i=1}x_i\right) \ln\left(\frac{1}{2 - \theta}\right)
[/tex]

should instead be [tex]\ln\left(\frac{2-\theta}{2}\right)[/tex]:

[tex]
\ln L(x_1,x_2,...,x_n;\theta) = \Sigma^n_{i=1} x_i \ln\left(\frac{\theta}{2}\right) + \left(n-\Sigma^n_{i=1}x_i\right) \ln\left(\frac{2 - \theta}{2}\right)
[/tex]

The derivative is then

[tex]
\frac{d\ln L}{d \theta}(x_1,x_2,...,x_n;\theta) =\frac{\Sigma^n_{i=1}x_i}{\theta}\,-\,\frac{n-\Sigma^n_{i=1}x_i}{2-\theta}
[/tex]

Now combine into a single fraction, set equal to zero, and solve for theta.
 
  • #3
I'll try.
 
  • #4
It worked! Great stuff, thanks!


[tex]
\hat{\theta}=\frac{2}{n}\Sigma^n_{i=1}X_i
[/tex]
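
For the record, the algebra from setting the derivative to zero:

[tex]
\frac{\Sigma^n_{i=1}x_i}{\theta}=\frac{n-\Sigma^n_{i=1}x_i}{2-\theta}
\;\Rightarrow\;
(2-\theta)\Sigma^n_{i=1}x_i=\theta\left(n-\Sigma^n_{i=1}x_i\right)
\;\Rightarrow\;
2\Sigma^n_{i=1}x_i=n\theta
\;\Rightarrow\;
\hat{\theta}=\frac{2}{n}\Sigma^n_{i=1}x_i
[/tex]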

Now, [tex]E(\hat{\theta}) = \theta[/tex] and [tex]Var(\hat{\theta})=\frac{2 \theta}{n}(1-\frac{\theta}{2})[/tex].
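
(Both follow from each [tex]X_i[/tex] being Bernoulli with success probability [tex]\frac{\theta}{2}[/tex], so that [tex]E(X_i)=\frac{\theta}{2}[/tex] and [tex]Var(X_i)=\frac{\theta}{2}(1-\frac{\theta}{2})[/tex]:)

[tex]
E(\hat{\theta})=\frac{2}{n}\cdot n\cdot\frac{\theta}{2}=\theta,
\qquad
Var(\hat{\theta})=\frac{4}{n^2}\Sigma^n_{i=1}Var(X_i)=\frac{4}{n^2}\cdot n\cdot\frac{\theta}{2}\left(1-\frac{\theta}{2}\right)=\frac{2\theta}{n}\left(1-\frac{\theta}{2}\right)
[/tex]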

Find a 95% confidence interval for [tex]\theta[/tex] by using that [tex]\frac{\hat{\theta} - \theta}{\sqrt{\frac{2 \hat{\theta}}{n}(1-\frac{\hat{\theta}}{2})}}[/tex] is approximately standard normal.

Can I use this formula?

[attached image 244qs9g.jpg: a general confidence-interval formula involving [tex]\sqrt{n}[/tex]]


With n=100 and [tex]\Sigma^n_{i=1}X_i = 32[/tex] I get

[tex]\hat{\theta}=0.64[/tex], but the formula above doesn't give me the correct interval, which is [0.46, 0.82].

Edit: I get the right interval when I don't include the [tex]\sqrt{n}[/tex] in the formula above. Is the formula wrong?
 
  • #5
You used the wrong formula. You have to derive your own confidence interval formula.

Start from the fact that [tex]\frac{\hat{\theta} - \theta}{\sqrt{\frac{2 \hat{\theta}}{n}(1-\frac{\hat{\theta}}{2})}}[/tex] is approximately standard normal.

Then [tex]P(-1.96<\frac{\hat{\theta} - \theta}{\sqrt{\frac{2 \hat{\theta}}{n}(1-\frac{\hat{\theta}}{2})}} <1.96 )=0.95[/tex]

Now rearrange to get

[tex]P(\text{thing}<-\theta<\text{other thing})=0.95[/tex]

and then finally

[tex]P(-\text{thing}>\theta>-\text{other thing})=0.95[/tex]
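
Written out, that gives the interval

[tex]
\hat{\theta}\pm 1.96\sqrt{\frac{2\hat{\theta}}{n}\left(1-\frac{\hat{\theta}}{2}\right)}
[/tex]

with no extra [tex]\sqrt{n}[/tex], since the [tex]\frac{1}{n}[/tex] is already inside the variance. A quick numerical check with the numbers from post #4 (just a sketch of the arithmetic):

[code]
import math

# Numbers from post #4: n = 100 observations with sum(x_i) = 32.
n = 100
sum_x = 32
theta_hat = 2 * sum_x / n                      # MLE: (2/n) * sum(x_i) = 0.64

# Plug-in standard error from Var(theta_hat) = (2*theta/n) * (1 - theta/2),
# with the unknown theta replaced by theta_hat.
se = math.sqrt((2 * theta_hat / n) * (1 - theta_hat / 2))

# 95% interval from the approximately standard normal pivot.
z = 1.96
print(f"[{theta_hat - z * se:.2f}, {theta_hat + z * se:.2f}]")  # about [0.46, 0.82]
[/code]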
 

1. What is Maximum Likelihood Estimation (MLE)?

Maximum Likelihood Estimation (MLE) is a statistical method for estimating the parameters of a probability distribution by finding the values that maximize the likelihood of the observed data. It is based on the idea of choosing the parameter values under which the observed data are most probable.
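
As a minimal illustration (a sketch using the Bernoulli-type model from the thread above, not from the original discussion), the following simulates observations with success probability [tex]\frac{\theta}{2}[/tex] and checks that numerically maximizing the log-likelihood agrees with the closed-form estimator [tex]\hat{\theta}=\frac{2}{n}\Sigma^n_{i=1}X_i[/tex]:

[code]
import numpy as np
from scipy.optimize import minimize_scalar

# Simulate n observations X_i ~ Bernoulli(theta/2), the model used in the thread.
rng = np.random.default_rng(seed=0)
theta_true, n = 0.6, 10_000
x = rng.binomial(1, theta_true / 2, size=n)

def neg_log_likelihood(theta):
    # -log L = -[ sum(x) * log(theta/2) + (n - sum(x)) * log(1 - theta/2) ]
    s = x.sum()
    return -(s * np.log(theta / 2) + (n - s) * np.log(1 - theta / 2))

# Maximize the likelihood numerically over the valid range 0 < theta < 2.
result = minimize_scalar(neg_log_likelihood, bounds=(1e-9, 2 - 1e-9), method="bounded")

print("closed-form MLE:", 2 * x.mean())   # 2 * sample mean
print("numerical MLE:  ", result.x)       # should agree closely
[/code]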

2. How does MLE differ from other estimation methods?

MLE differs from other estimation methods in that it is based on the likelihood function, which is a measure of how likely a particular set of parameters is to produce the observed data. Other methods, such as least squares estimation, use different criteria to find the best fitting parameters.

3. What types of data are suitable for MLE?

MLE can be applied to any type of data that can be described by a probability distribution, such as continuous or discrete variables. It is commonly used in fields such as biology, economics, and engineering to estimate parameters of complex models.

4. What are the advantages of using MLE?

One of the main advantages of MLE is that it provides a rigorous and efficient way to estimate the parameters of a probability distribution. It extends naturally to models with many parameters and can handle large datasets. In addition, under mild regularity conditions the maximum likelihood estimator is consistent: as the sample size increases, the estimates converge to the true parameter values.

5. Are there any limitations to MLE?

One limitation of MLE is that it assumes the data is independent and identically distributed, which may not always be the case in real-world scenarios. It also requires knowledge of the underlying probability distribution, which may be difficult to determine in some cases. Additionally, MLE can be sensitive to outliers in the data, which can affect the estimated parameters.
