Help with Statistical Inference

  • Context: MHB
  • Thread starter: trousmate
  • Tags: Statistical
SUMMARY

This discussion focuses on statistical inference, specifically the maximum likelihood estimation (MLE) for a binomial distribution. The likelihood function is defined as L(θ|Y) = (r! / (Y!(r-Y)!))θ^Y(1-θ)^(r-Y), with the log-likelihood derived to find the estimator θ̂ = Y/r. The conversation highlights the importance of correctly interpreting the data, emphasizing that the problem involves the number of successes from r trials rather than a vector of results. Participants provide detailed mathematical derivations to support their conclusions.

PREREQUISITES
  • Understanding of likelihood functions in statistics
  • Familiarity with maximum likelihood estimation (MLE)
  • Basic knowledge of binomial distributions
  • Proficiency in calculus for deriving log-likelihood functions
NEXT STEPS
  • Study the properties of binomial distributions and their applications
  • Learn about the derivation and application of maximum likelihood estimators
  • Explore the concept of log-likelihood and its significance in statistical inference
  • Investigate common pitfalls in interpreting statistical data and results
USEFUL FOR

Students and professionals in statistics, data analysis, and research who need to understand statistical inference and maximum likelihood estimation techniques.

trousmate:
View attachment 3490

See image.

I have to present this on Monday and don't really know what I'm doing, having missed a couple of lectures through illness. Any help or hints would be very much appreciated.

Thanks,
TM
 

Attachments

  • Screen Shot 2014-11-07 at 12.38.28.png (6.8 KB)
trousmate said:
https://www.physicsforums.com/attachments/3490

See image.

I have to present this on Monday and don't really know what I'm doing, having missed a couple of lectures through illness. Any help or hints would be very much appreciated.

Thanks,
TM

Welcome to MHB, trousmate!

If you have $r$ samples of a r.v. $Y$, then the likelihood function is

$$\displaystyle f(y_{1}, y_{2}, \ldots, y_{r} \mid p) = p^{\sum_{i} y_{i}}\ (1-p)^{r - \sum_{i} y_{i}}\ (1)$$

so that

$$\displaystyle \ln f = \sum_{i} y_{i}\ \ln p + \Big(r - \sum_{i} y_{i}\Big) \ln (1 - p) \implies \frac{d \ln f}{d p} = \frac{\sum_{i} y_{i}}{p} - \frac{r - \sum_{i} y_{i}}{1 - p}\ (2)$$

and imposing $\displaystyle \frac{d \ln f}{d p} = 0$ you arrive at

$$\displaystyle \overline{p} = \frac{\sum_{i} y_{i}}{r}\ (3)$$

Kind regards

$\chi$ $\sigma$
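
As a quick numeric sanity check (an editorial addition, not part of the original reply): a minimal Python sketch under this reading of the problem, drawing $r$ Bernoulli samples and confirming that the sample mean from (3) matches a brute-force grid maximum of the log-likelihood in (2). The values `p_true = 0.3` and `r = 1000` are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
r, p_true = 1000, 0.3          # hypothetical sample size and true parameter

# Draw r Bernoulli(p_true) samples y_1, ..., y_r
y = rng.binomial(1, p_true, size=r)

# Closed-form MLE from (3): p_bar = sum(y_i) / r
p_bar = y.sum() / r

# Grid check: evaluate the log-likelihood from (2) over a grid of p values
p = np.linspace(0.001, 0.999, 999)
log_lik = y.sum() * np.log(p) + (r - y.sum()) * np.log(1 - p)

print(p_bar, p[np.argmax(log_lik)])   # both should be close to 0.3
```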
 
trousmate said:
https://www.physicsforums.com/attachments/3490

See image.

I have to present this on Monday and don't really know what I'm doing, having missed a couple of lectures through illness. Any help or hints would be very much appreciated.

Thanks,
TM

For this problem you have the likelihood:

$$L(\theta|Y) = b(Y,r,\theta)=\frac{r!}{Y!(r-Y)!}\theta^Y(1-\theta)^{r-Y}$$

Then the log-likelihood is:

$$LL(\theta|Y) =\log(r!) - \log(Y!) -\log((r-Y)!)+Y\log(\theta) +(r-Y)\log(1-\theta)$$

Then to find the value of $\theta$ that maximises the log-likelihood we take the partial derivative with respect to $\theta$ and equate that to zero:

$$\frac{\partial}{\partial \theta}LL(\theta|Y)=\frac{Y}{\theta}-\frac{r-Y}{1-\theta}$$
So for the maximum (log-)likelihood estimator $\hat{\theta}$ we have:
$$\frac{Y}{\hat{\theta}}-\frac{r-Y}{1-\hat{\theta}}=0$$

which can be rearranged to give:$$\hat{\theta}(r-Y)=(1-\hat{\theta})Y$$ or $$\hat{\theta}=\frac{Y}{r}$$

and, as you should be aware, the maximiser of the log-likelihood is also the maximum likelihood estimator, since the logarithm is strictly increasing.
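
(An editorial addition, not part of the original post: a minimal Python sketch verifying this numerically. It minimises the negative binomial log-likelihood with SciPy and compares the optimum against the closed-form $\hat{\theta} = Y/r$; the values $r = 50$, $Y = 17$ are made up for illustration.)

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

r, Y = 50, 17                        # hypothetical: Y successes in r trials

# Negative log-likelihood of the binomial pmf as a function of theta
def neg_log_lik(theta):
    return -binom.logpmf(Y, r, theta)

# Maximise the log-likelihood by minimising its negative on (0, 1)
res = minimize_scalar(neg_log_lik, bounds=(1e-9, 1 - 1e-9), method="bounded")

print(res.x, Y / r)                  # numerical optimum vs. closed-form 0.34
```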

 
chisigma said:
Welcome to MHB, trousmate!

If you have $r$ samples of a r.v. $Y$, then the likelihood function is

$$\displaystyle f(y_{1}, y_{2}, \ldots, y_{r} \mid p) = p^{\sum_{i} y_{i}}\ (1-p)^{r - \sum_{i} y_{i}}\ (1)$$

so that

$$\displaystyle \ln f = \sum_{i} y_{i}\ \ln p + \Big(r - \sum_{i} y_{i}\Big) \ln (1 - p) \implies \frac{d \ln f}{d p} = \frac{\sum_{i} y_{i}}{p} - \frac{r - \sum_{i} y_{i}}{1 - p}\ (2)$$

and imposing $\displaystyle \frac{d \ln f}{d p} = 0$ you arrive at

$$\displaystyle \overline{p} = \frac{\sum_{i} y_{i}}{r}\ (3)$$

Kind regards

$\chi$ $\sigma$

Very nice, but this is the solution to the wrong problem; it just happens to yield the same estimator as the problem actually asked. Here the data is the number of successes $Y$ from $r$ trials, not a vector of individual results.

 
