MHB Help with Statistical Inference

  • Thread starter: trousmate
  • Tags: Statistical

Summary
The discussion revolves around a request for help with statistical inference, specifically likelihood functions and maximum likelihood estimators. Participants give the likelihood and log-likelihood functions in detail and show how setting the derivative of the log-likelihood to zero yields the estimator. One correction is noted about the problem being addressed: the data are the number of successes from r trials, not a vector of individual results, although both formulations lead to the same estimator. Overall, the thread is a collaborative effort to help the original poster understand and apply these concepts in time for a presentation.
trousmate
View attachment 3490

See image.

I have to present this on Monday and don't really know what I'm doing having missed a couple of lectures through illness. Any help or hints would be very much appreciated.

Thanks,
TM
 

Attachments

  • Screen Shot 2014-11-07 at 12.38.28.png
trousmate said:
See image. I have to present this on Monday and don't really know what I'm doing...

Welcome to MHB, trousmate!...

If you have $r$ samples of a r.v. $Y$, then the likelihood function is

$$\displaystyle f(y_{1}, y_{2}, \ldots, y_{r} \mid p) = p^{\sum_{i} y_{i}}\ (1-p)^{r - \sum_{i} y_{i}}\quad (1)$$

so that

$$\displaystyle \ln f = \sum_{i} y_{i}\ \ln p + \left(r - \sum_{i} y_{i}\right) \ln (1 - p) \implies \frac{d \ln f}{d p} = \frac{\sum_{i} y_{i}}{p} - \frac{r - \sum_{i} y_{i}}{1 - p}\quad (2)$$

and setting $\displaystyle \frac{d \ln f}{d p} = 0$ you arrive at

$$\displaystyle \overline{p} = \frac{\sum_{i} y_{i}}{r}\quad (3)$$

Kind regards

$\chi$ $\sigma$
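A quick numerical illustration of (3): a minimal sketch, assuming simulated Bernoulli data with a hypothetical true value p = 0.3; the variable names and sample size are illustrative, not from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.3   # hypothetical true success probability
r = 10_000     # hypothetical number of Bernoulli samples

# Draw r Bernoulli(p_true) samples y_1, ..., y_r
y = rng.binomial(1, p_true, size=r)

# Equation (3): the MLE is the sample mean of the y_i
p_hat = y.sum() / r
print(p_hat)   # should be close to 0.3
```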
 
trousmate said:
See image. I have to present this on Monday and don't really know what I'm doing...

For this problem you have the likelihood:

$$L(\theta|Y) = b(Y,r,\theta)=\frac{r!}{Y!(r-Y)!}\theta^Y(1-\theta)^{r-Y}$$

Then the log-likelihood is:

$$LL(\theta|Y) =\log(r!) - \log(Y!) -\log((r-Y)!)+Y\log(\theta) +(r-Y)\log(1-\theta)$$

Then to find the value of $\theta$ that maximises the log-likelihood we take the partial derivative with respect to $\theta$ and equate that to zero:

$$\frac{\partial}{\partial \theta}LL(\theta|Y)=\frac{Y}{\theta}-\frac{r-Y}{1-\theta}$$
So for the maximum (log-)likelihood estimator $\hat{\theta}$ we have:
$$\frac{Y}{\hat{\theta}}-\frac{r-Y}{1-\hat{\theta}}=0$$

which can be rearranged to give $$\hat{\theta}(r-Y)=(1-\hat{\theta})Y$$ or $$\hat{\theta}=\frac{Y}{r}$$

and, as you should be aware, the maximiser of the log-likelihood is also the maximum likelihood estimator for this sort of problem, since the logarithm is strictly increasing.
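A minimal numerical check of this result, assuming hypothetical values r = 20 and Y = 7 chosen only for illustration: a grid search over the binomial log-likelihood should peak at Y/r = 0.35.

```python
import numpy as np
from scipy.stats import binom

r, Y = 20, 7   # hypothetical trial count and observed successes

# Evaluate the binomial log-likelihood LL(theta | Y) on a grid
theta = np.linspace(0.001, 0.999, 999)
ll = binom.logpmf(Y, r, theta)

# The grid maximiser should match the closed form Y / r
theta_hat = theta[np.argmax(ll)]
print(theta_hat, Y / r)   # both approximately 0.35
```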

 
chisigma said:
Welcome to MHB, trousmate!...

Very nice, but it is the solution to the wrong problem. It happens to yield the same estimator as the problem actually asked, but it is still the wrong problem: the data here are the number of successes in $r$ trials, not a vector of individual results.
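To see why the two formulations agree anyway, write $Y = \sum_{i} y_{i}$ (the labels below are introduced only for this comparison): the binomial likelihood differs from the Bernoulli-sample likelihood only by a factor that does not involve $\theta$.

$$L_{\text{binomial}}(\theta \mid Y) = \binom{r}{Y}\,\theta^{Y}(1-\theta)^{r-Y} = \binom{r}{Y}\,\theta^{\sum_{i} y_{i}}(1-\theta)^{r - \sum_{i} y_{i}}$$

Since $\binom{r}{Y}$ is constant in $\theta$, both likelihoods are maximised at the same point $\hat{\theta} = Y/r$.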

 
