Probability of a given data set given the parameters

SUMMARY

This discussion centers on the distinction between calculating the probability of a data set given specific parameters versus calculating the probability of parameters given a data set. It highlights that in traditional probability, parameters of a distribution, such as the normal distribution, are assumed known, and the focus is on determining the probability of outcomes. In maximum likelihood estimation, parameters are treated as fixed values without associated probability distributions, while Bayesian estimation considers parameters as random variables, allowing for the calculation of their probabilities based on observed data.

PREREQUISITES
  • Understanding of basic probability distributions, such as the normal distribution
  • Familiarity with maximum likelihood estimation (MLE)
  • Knowledge of Bayesian statistics and Maximum A Posteriori (MAP) estimation
  • Concept of likelihood functions in statistical analysis
NEXT STEPS
  • Study the principles of maximum likelihood estimation (MLE) in depth
  • Explore Bayesian statistics and the concept of prior distributions
  • Learn about the likelihood function and its applications in statistical modeling
  • Investigate the differences between frequentist and Bayesian approaches to parameter estimation
USEFUL FOR

Statisticians, data scientists, and researchers involved in statistical modeling and parameter estimation will benefit from this discussion.

tronter
Why do we look at the probability of a given data set given the parameters, as opposed to the probability of the parameters given the data set?

So [tex]y(x) = y(x;a_{1} \ldots a_{M})[/tex]
 
I'm not sure I understand your question. In "Probability" we always assume we are given some basic probability distribution, i.e., the parameters of some general distribution (such as the normal distribution). Problems in probability then generally take the form: "given the parameters of the probability distribution, determine the probability of various outcomes (i.e., the data set)".

Probability then serves as the theory behind "Statistics," in which we do just the opposite: given an outcome (data set), try to estimate the parameters of the probability distribution.
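To make the first direction concrete, here is a minimal sketch of "probability of the data given the parameters": with the parameters of a normal distribution assumed known, we evaluate the joint density of an observed data set. The specific numbers (`data`, `mu`, `sigma`) are illustrative, not from the thread.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Parameters assumed known (the "given" in P(data | parameters)):
mu, sigma = 1.0, 0.5

# Joint density of an i.i.d. data set under those fixed parameters:
data = [1.2, 0.8, 1.5]
p = math.prod(normal_pdf(x, mu, sigma) for x in data)
```

Note the asymmetry: `p` is a perfectly well-defined number because `mu` and `sigma` are treated as fixed constants, which is exactly the setup the answer describes.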
 


tronter said:
Why do we look at the probability of a given data set given the parameters, as opposed to the probability of the parameters given the data set?

Well, in maximum likelihood estimation, the reason is simple: the parameters are not random variables, so no probability distribution is defined on them. Instead, we use the likelihood function, which has the same functional form as the probability of the data but is viewed as a function of the parameters with the data held fixed.

In Bayesian/MAP estimation, on the other hand, the parameters are viewed as random variables with a prior distribution, and we do work with the probability of the parameters given the data, obtained via Bayes' theorem.
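A minimal sketch of the Bayesian version: put a normal prior on the mean of normal data with known variance. The posterior is then also normal, so the MAP estimate has a standard closed form (a precision-weighted average of the prior mean and the sample mean). The function name and the specific numbers are illustrative assumptions.

```python
def map_mean(data, sigma, mu0, tau):
    """MAP estimate of mu given an N(mu, sigma^2) likelihood (sigma known)
    and an N(mu0, tau^2) prior on mu.

    The posterior is normal, so the MAP equals the posterior mean:
        (prior precision * mu0 + data precision * xbar) / (total precision)
    """
    n = len(data)
    xbar = sum(data) / n
    prior_prec = 1 / tau**2        # precision contributed by the prior
    like_prec = n / sigma**2       # precision contributed by the data
    return (prior_prec * mu0 + like_prec * xbar) / (prior_prec + like_prec)

data = [1.2, 0.8, 1.5]
mu_map = map_mean(data, sigma=0.5, mu0=0.0, tau=1.0)
```

Here the prior is what makes "probability of the parameters given the data" meaningful: with a flat (very wide) prior, `mu_map` approaches the MLE, which is one way to see the two approaches as endpoints of the same calculation.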
 
