MHB Log-likelihood and maximum likelihood

Summary
To find the log-likelihood function for a binomial setting with four trials, start by forming the joint probability distribution: multiplying the individual probability mass functions for the observed two successes and two failures gives L = p²(1-p)². Taking the logarithm gives the log-likelihood function l = 2ln p + 2ln(1-p). The sample mean x̄ is 1/2, which gives the identities 2 = 4x̄ and 2 = 4 - 4x̄. Substituting these into the log-likelihood yields l = 4x̄ ln p + (4 - 4x̄)ln(1-p), from which the derivative with respect to p can be taken to obtain the desired result. This rewriting is where the coefficient 4 in the derivative comes from.
cajswn
[Image: the problem statement, showing the first derivative of the binomial log-likelihood with a coefficient of 4]

I'm not sure how to get this first derivative (mainly, where does the 4 come from?).
I know x̄ is the sample mean (which I think is 1/2?).
Can someone suggest where to start with finding the log-likelihood?

I know the mass function of a binomial distribution is:
$$P(X = k) = \binom{n}{k}p^{k}(1-p)^{n-k}, \qquad k = 0, 1, \dots, n.$$


Thanks!
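For a quick numerical illustration of how the individual trial probabilities combine into the likelihood used in the reply below, here is a minimal Python sketch. It assumes an observed sequence of four Bernoulli trials with two successes; the particular sequence and the value of p are arbitrary choices for the check.

```python
from scipy.stats import bernoulli, binom

p = 0.3                  # arbitrary success probability for the check
sample = [1, 0, 1, 0]    # an observed sequence: 2 successes in 4 trials

# Joint probability of this exact sequence = product of the per-trial pmfs
likelihood = 1.0
for x in sample:
    likelihood *= bernoulli.pmf(x, p)

print(likelihood)              # p**2 * (1 - p)**2 = 0.0441
print(p**2 * (1 - p)**2)       # same value
print(binom.pmf(2, 4, p) / 6)  # binomial pmf without its C(4,2) = 6 factor
```

The last line shows how this relates to the binomial mass function above: for a fixed observed sequence the constant factor $\binom{4}{2}=6$ drops out, leaving $p^2(1-p)^2$.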
 
Hi cajswn,

To determine the likelihood function we need the joint probability of the observed data. Since the 4 individual Bernoulli trials are independent, we simply multiply their individual probability mass functions; with two successes and two failures observed this gives $$L = p^{2}(1-p)^{2}.$$ Now take the logarithm to get the log-likelihood function: $$l = 2\ln p + 2\ln (1-p).$$ Since $\bar{x} = 1/2$ we have $2=4\bar{x}$ and $2 = 4-4\bar{x}$. Using these two identities we get: $$l = 4\bar{x}\ln p +(4-4\bar{x})\ln (1-p).$$ From here, take the derivative of $l$ with respect to $p$ to get the result you're looking for; the 4 in the derivative comes directly from this rewriting.
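To see the last step concretely, here is a minimal Python sketch (a check of my own, with the grid size and function names chosen only for illustration) that evaluates $l(p) = 4\bar{x}\ln p + (4-4\bar{x})\ln(1-p)$ on a grid and confirms that the derivative $\frac{4\bar{x}}{p} - \frac{4-4\bar{x}}{1-p}$ vanishes at the maximiser $p = \bar{x} = 1/2$.

```python
import numpy as np

xbar = 0.5  # sample mean: 2 successes in 4 trials

def log_likelihood(p):
    # l(p) = 4*xbar*ln(p) + (4 - 4*xbar)*ln(1 - p), i.e. 2 ln p + 2 ln(1 - p)
    return 4 * xbar * np.log(p) + (4 - 4 * xbar) * np.log(1 - p)

def score(p):
    # derivative of l with respect to p
    return 4 * xbar / p - (4 - 4 * xbar) / (1 - p)

p_grid = np.linspace(0.01, 0.99, 9801)
p_hat = p_grid[np.argmax(log_likelihood(p_grid))]

print(round(p_hat, 3))    # 0.5  -> the maximiser equals the sample mean
print(score(p_hat))       # ~0.0 -> the derivative vanishes there
```

The grid maximiser lands on p = 1/2 = x̄, which is exactly what setting the derivative to zero gives analytically.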
 
