Determining the Likelihood function

In summary, the likelihood function measures the plausibility of a particular set of parameter values given the observed data. It is not a probability function: it does not generally integrate to 1 over the parameters, and in maximum likelihood estimation the parameter is treated as a fixed, unknown constant rather than a random variable. The example of a Gaussian random variable with a fixed mean and varying variance illustrates that the likelihood function need not look like a probability distribution, because the data and the parameters generally play different roles in the density.
  • #1
daviddoria
I was under the impression that the likelihood function was simply the probability density function, but viewing the parameter theta as the variable instead of the observations x, i.e.
p(x|theta) = L(theta|x)

However, the likelihood function is no longer a probability function

See Example 1 here:
http://en.wikipedia.org/wiki/Likelihood_function

Can anyone explain this a little bit?

Thanks,
Dave
 
  • #2
daviddoria said:
I was under the impression that the likelihood function was simply the probability density function but viewing the parameter theta as the variable instead of the observations x.

Right. The observations are fixed in the context of a likelihood function.

daviddoria said:
However, the likelihood function is no longer a probability function

Yeah, it doesn't typically integrate to 1 (and sometimes doesn't integrate at all). Note that in most pdfs, the data and the parameters play very different roles, so we shouldn't expect the function to have the same integral over each of them. Also, in maximum likelihood estimation, the parameter is assumed to be a fixed, unknown constant (and NOT a random variable), so no distribution is assigned to it.
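To make that concrete, here is a minimal Python sketch (the observations are invented for illustration, using numpy/scipy) that evaluates the same zero-mean Gaussian formula in two ways: as a density in the data with the variance fixed, and as a likelihood in the variance with the data fixed. The first integrates to 1 over the data; the second does not integrate to 1 over the variance:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Made-up observations, assumed drawn from a zero-mean Gaussian with unknown variance v
x = np.array([0.8, -1.3, 2.1])

def density_in_x(xi, v=1.0):
    # The pdf of a single observation: a function of the data with the variance fixed
    return norm.pdf(xi, loc=0.0, scale=np.sqrt(v))

def likelihood_in_v(v, data=x):
    # The same Gaussian formula, read as a function of the variance with the data fixed
    return np.prod(norm.pdf(data, loc=0.0, scale=np.sqrt(v)))

# Integrating the density over the data gives 1, as a probability density must
print(quad(density_in_x, -np.inf, np.inf)[0])      # ~1.0

# Integrating the likelihood over the parameter (upper limit truncated for convenience)
print(quad(likelihood_in_v, 1e-9, 1e4)[0])         # some other number, not 1
```

(With a single observation, the likelihood in the variance does not even have a finite integral, since it decays only like v^(-1/2) for large v; that is the "sometimes doesn't integrate at all" case.)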

I could probably give a better answer if you could explain why you think the likelihood function should be a probability function.
 
  • #3
maybe I used a bad example

I considered a Gaussian random variable with variance 1,

so f(x|u=3) is a Gaussian distribution centered at 3.

Then L(u|x=5) is a Gaussian distribution centered at 5.

Is that correct? That must just be a "special" case and hence a bad example?

Dave
 
  • #4
daviddoria said:
maybe I used a bad example

I considered a Gaussian random variable with variance 1,

so f(x|u=3) is a Gaussian distribution centered at 3.

Then L(u|x=5) is a Gaussian distribution centered at 5.

Is that correct? That must just be a "special" case and hence a bad example?

Yeah, that's a special case: the Gaussian mean and the data play interchangeable roles in the pdf, since the density depends on them only through (x - u)^2. Try the same thing with a fixed mean, treating the variance as the parameter, and the resulting likelihood will not be a probability function.

The problem of estimating the mean from data drawn from a unit-variance Gaussian distribution is one of the most pervasive "special cases" in statistics. The MLE for such a situation is equivalent to the MVUE, for example.
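To illustrate that special case, here is a small sketch (simulated data, not from the thread) showing that the likelihood of the mean for unit-variance Gaussian data is maximized at the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated data (invented for this sketch): unit-variance Gaussian, true mean 3
data = rng.normal(loc=3.0, scale=1.0, size=50)

def log_likelihood(mu, x=data):
    # Log-likelihood of the mean for unit-variance Gaussian observations:
    # sum over i of log N(x_i | mu, 1)
    return -0.5 * np.sum((x - mu) ** 2) - 0.5 * x.size * np.log(2.0 * np.pi)

# Maximize over a grid of candidate means and compare with the sample mean
grid = np.linspace(0.0, 6.0, 10001)
mle = grid[np.argmax([log_likelihood(m) for m in grid])]
print(mle, data.mean())   # agree to within the grid spacing (~0.0006)
```

The maximizer is the sample mean, which is the minimum-variance unbiased estimator mentioned above.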
 

1. What is the likelihood function and why is it important in scientific research?

The likelihood function is a mathematical function that measures how plausible different parameter values (or hypotheses) are in light of a fixed set of observed data. Numerically, it is the probability (or probability density) of the observed data evaluated under each candidate parameter value, but it is read as a function of the parameters with the data held fixed. It is an essential tool in statistical inference and helps scientists evaluate the plausibility of different hypotheses based on observed data.

2. How is the likelihood function calculated?

For independent observations, the likelihood function is calculated by multiplying the probabilities (or probability densities) of the individual data points under the assumed hypothesis or model. Mathematically, it is the product of the probability density function or probability mass function evaluated at each data point; equivalently, the log-likelihood is the sum of the corresponding logarithms.
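As a hedged illustration (the data and parameter values below are invented), the product of per-point densities and the exponential of the summed log densities agree; the log form is what is usually computed in practice to avoid numerical underflow:

```python
import numpy as np
from scipy.stats import norm

data = np.array([1.2, 0.7, 2.9, 1.5])   # made-up independent observations
mu, sigma = 1.0, 1.0                    # one candidate setting of the parameters

# Likelihood: product of the density evaluated at each data point
likelihood = np.prod(norm.pdf(data, loc=mu, scale=sigma))

# Log-likelihood: sum of log densities (numerically safer for many observations)
log_likelihood = np.sum(norm.logpdf(data, loc=mu, scale=sigma))

print(likelihood, np.exp(log_likelihood))   # equal up to floating-point error
```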

3. What is the difference between the likelihood function and the probability function?

The likelihood function and the probability function are computed from the same formula but fix different arguments. A probability (density or mass) function treats the parameters as known and describes how probable different data outcomes are; it integrates or sums to 1 over the data. The likelihood function treats the observed data as fixed and describes how well different parameter values account for that data; it generally does not integrate to 1 over the parameters.

4. How can the likelihood function be used to compare different hypotheses?

The likelihood function can be used to form the likelihood ratio, which is the ratio of the likelihood of the observed data under one hypothesis to its likelihood under another. This allows scientists to compare how well competing hypotheses account for the observed data and to favor the one that fits better.
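For instance, a minimal sketch (hypotheses and data invented for illustration) comparing two candidate means for the same unit-variance Gaussian data via their likelihood ratio:

```python
import numpy as np
from scipy.stats import norm

data = np.array([0.4, 1.1, 0.9, 1.6, 0.7])   # made-up observations, unit variance assumed

# Likelihood of the same data under two competing hypotheses about the mean
lik_h0 = np.prod(norm.pdf(data, loc=0.0, scale=1.0))   # H0: mean = 0
lik_h1 = np.prod(norm.pdf(data, loc=1.0, scale=1.0))   # H1: mean = 1

print(lik_h1 / lik_h0)   # ratio > 1: the data are more probable under H1 than H0
```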

5. What are some limitations of the likelihood function?

One limitation of the likelihood function is that it cannot establish causality: it measures how probable the observed data are under a hypothesis, but it does not indicate whether that hypothesis caused the data. In addition, likelihood-based inference can be unreliable with small sample sizes or when the assumed model fits the data poorly, for example when the data are highly skewed or contain outliers.
