Maximum likelihood estimator of binomial distribution

In summary, the thread discusses using the likelihood function L(x1,...,xn; p) to estimate the probability of a success outcome in a Bernoulli trial. The responder questions the validity of omitting the multiplicative pi (product) symbol, suggests writing L(p) since p is the only parameter, and expresses confusion over the meaning of x in the single-sample form.
  • #1
superwolf
[tex]
L(x_1,...,x_n;p)=\prod_{i=1}^{n}\binom{n}{x_i} p^{x_i}(1-p)^{n-x_i}
[/tex]

Correct so far?

The solution tells me to drop the [tex]\prod[/tex]:

[tex]
L(x_1,...,x_n;p)=\binom{n}{x} p^{x}(1-p)^{n-x}
[/tex]

This contradicts all the examples in my book. Why?
 
  • #2
I don't understand why you wrote L(x1,...,xn; p). I thought the purpose was to estimate p, the probability of a designated success outcome in a Bernoulli trial, so it should be L(p), as p is the only parameter.

I also don't see any sense in omitting the multiplicative pi symbol. What is x here, anyway? The x_i each refer to the observed number of successes in a sample of size n. So what is x?
 

What is a maximum likelihood estimator (MLE)?

A maximum likelihood estimator is a method used to estimate the parameters of a statistical model by finding the set of values that maximizes the likelihood of the observed data. In other words, it is a way to determine the most likely values for the unknown parameters of a model based on the observed data.

How does the MLE work for the binomial distribution?

For the binomial distribution, the MLE is the value of the parameter p (the probability of success) that maximizes the likelihood function. It is typically found with calculus: set the derivative of the log-likelihood to zero and solve for p. The resulting estimate, denoted p̂, is the maximum likelihood estimator of p for the given data.
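For example, with a single observation of x successes in n trials, taking the logarithm of the likelihood and setting its derivative to zero gives the estimator in closed form:

[tex]
\log L(p) = \log\binom{n}{x} + x\log p + (n-x)\log(1-p)
[/tex]

[tex]
\frac{d}{dp}\log L(p) = \frac{x}{p} - \frac{n-x}{1-p} = 0 \quad\Rightarrow\quad \hat{p} = \frac{x}{n}
[/tex]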

What assumptions are necessary for the MLE to be valid?

The MLE assumes that the data follows a specific distribution, in this case the binomial distribution. It also assumes the observations are independent and identically distributed (iid): each observation is independent of the others and has the same underlying probability of success. Additionally, the MLE assumes the data is complete, meaning there are no missing values.

How is the MLE different from other estimation methods?

The MLE differs from other estimation methods, such as the method of moments or least squares, in that it seeks the parameter values that make the observed data most likely, rather than, say, minimizing the difference between the observed data and the fitted values.
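As a minimal sketch of this idea (using only the Python standard library; the data values are illustrative), one can literally scan candidate values of p, keep the one that makes the observed data most likely, and compare against the closed-form answer x/n:

```python
import math

def binom_log_likelihood(p, x, n):
    """Log-likelihood of observing x successes in n Bernoulli trials."""
    return (math.log(math.comb(n, x))
            + x * math.log(p)
            + (n - x) * math.log(1 - p))

x, n = 7, 10  # illustrative data: 7 successes in 10 trials

# Grid search over the open interval (0, 1) -- crude, but it makes the
# "most likely" idea concrete.
grid = [i / 10000 for i in range(1, 10000)]
p_hat_numeric = max(grid, key=lambda p: binom_log_likelihood(p, x, n))

p_hat_closed = x / n  # closed-form MLE for the binomial distribution
```

In practice the closed form (or a proper numerical optimizer) is used; the grid search is only to show that both routes pick the same p.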

What are the advantages of using the MLE for the binomial distribution?

The MLE for the binomial distribution has several advantages. It is simple, intuitive, and easily applied to a wide range of data sets. It also has desirable statistical properties: it is consistent and efficient, so as the sample size increases the estimate converges to the true value with smaller variance than other estimators, and it is asymptotically unbiased, meaning the bias shrinks as the sample size grows.
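Consistency is easy to see in a small simulation (a sketch using only the Python standard library, with an illustrative true p of 0.3 and a fixed seed for reproducibility): as the number of binomial samples m grows, p̂ = (total successes)/(m·n) settles near the true p.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def simulate_binomial(m, n, p):
    """Draw m samples from Binomial(n, p) by summing Bernoulli trials."""
    return [sum(random.random() < p for _ in range(n)) for _ in range(m)]

true_p, n = 0.3, 10
for m in (10, 100, 10000):
    xs = simulate_binomial(m, n, true_p)
    p_hat = sum(xs) / (m * n)  # MLE from m iid binomial samples
    print(m, round(p_hat, 4))
```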
