How to Find Maximum Likelihood Estimators for Sample Data?

In summary, the conversation discusses a method for estimating a parameter from a sample. The method uses a likelihood function: take the log-likelihood and solve for the parameter value that makes its derivative equal to zero. The person asking for help knows the procedure when given a pdf but is unsure how to apply it when given a sample, and plans to ask a graduate TA and search for worked examples online.
  • #1
Scootertaj
problem.jpg

Homework Equations

[tex]L(x,p) = \prod_{i=1}^n \mathrm{pdf}(x_i; p)[/tex]
[tex]l = \sum_{i=1}^n \log\big(\mathrm{pdf}(x_i; p)\big)[/tex]
Then solve [tex]\frac{dl}{dp}=0[/tex] for p (the parameter we are seeking to estimate)

The Attempt at a Solution

I know how to do this when we are given a pdf, but I'm confused about how to do this when we have a sample.
 
  • #2
Scootertaj said:
I know how to do this when we are given a pdf, but I'm confused about how to do this when we have a sample.

Are there no similar examples solved in your textbook or course notes? Could you find nothing at all on-line?

RGV
 
  • #3
Not so far, though I'll be talking to a graduate TA about it.
 

What is a Maximum Likelihood Estimator (MLE)?

A Maximum Likelihood Estimator (MLE) is a statistical method used to estimate the values of unknown parameters in a probability distribution. It is based on the principle of choosing the parameter values that make the observed data most likely to occur.
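The definition above can be made concrete with a small numerical sketch. The coin-flip (Bernoulli) model, the sample, and the candidate values below are assumptions chosen for illustration, not data from the original problem:

```python
import math

# Hypothetical sample of 10 coin flips (1 = heads): 7 heads, 3 tails.
sample = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

def log_likelihood(p, data):
    """Bernoulli log-likelihood: sum of log probabilities of i.i.d. observations."""
    return sum(math.log(p if x == 1 else 1 - p) for x in data)

# The MLE principle: pick the candidate p under which the observed data
# are most likely. Here that is 0.7, the sample proportion of heads.
candidates = [0.3, 0.5, 0.7, 0.9]
best = max(candidates, key=lambda p: log_likelihood(p, sample))
print(best)  # 0.7
```

The grid of candidates is only for illustration; in practice the maximizing value is found with calculus or a numerical optimizer rather than by enumeration.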

How is the Maximum Likelihood Estimator calculated?

The Maximum Likelihood Estimator is calculated by finding the parameter values that maximize the likelihood function (or, equivalently, its logarithm). This is often done with calculus, by setting the derivative of the log-likelihood to zero and solving for the parameter; when no closed-form solution exists, numerical optimization techniques such as gradient descent are used instead.
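As a sketch of the calculus route, assume an exponential model with pdf lam*exp(-lam*x) and a made-up sample. The log-likelihood is l(lam) = n*log(lam) - lam*sum(x_i); setting dl/dlam = 0 gives the closed form lam_hat = n / sum(x_i), which the code checks numerically:

```python
import math

# Assumed sample for illustration (not from the original problem).
sample = [0.5, 1.2, 0.3, 2.0, 0.8]

def log_likelihood(lam, data):
    # l(lam) = n*log(lam) - lam*sum(x_i) for the exponential pdf lam*exp(-lam*x)
    return len(data) * math.log(lam) - lam * sum(data)

# Setting dl/dlam = n/lam - sum(x_i) = 0 yields the closed-form estimate:
lam_hat = len(sample) / sum(sample)

# Sanity check: nearby values of lam give a strictly lower log-likelihood,
# as expected since the log-likelihood here is strictly concave.
assert all(log_likelihood(lam_hat, sample) > log_likelihood(lam_hat + d, sample)
           for d in (-0.1, 0.1))
print(round(lam_hat, 4))  # 1.0417
```

This is the same recipe as in the thread: form the likelihood from the sample, take logs, differentiate, and solve.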

What are the assumptions made in Maximum Likelihood Estimation?

The main assumptions made in Maximum Likelihood Estimation are that the data follows a specific probability distribution and that the observations are independent and identically distributed.

What are the advantages of using Maximum Likelihood Estimators?

Maximum Likelihood Estimators have several advantages, including being consistent, efficient, and asymptotically normal. They also have a strong theoretical foundation and can be used in a wide range of statistical models.

What are some common applications of Maximum Likelihood Estimators?

Maximum Likelihood Estimators are commonly used in statistical modeling and inference, such as in linear regression, logistic regression, and time series analysis. They are also used in machine learning algorithms, such as in neural networks and hidden Markov models.
