Probability of a given data set given the parameters

AI Thread Summary
The discussion centers on the distinction between the probability of a data set given specific parameters and the probability of the parameters given a data set. In probability theory the parameters of a distribution are taken as known and the question is the probability of various outcomes; maximum likelihood estimation likewise treats the parameters as fixed (though unknown) quantities rather than random variables. In Bayesian estimation, by contrast, the parameters are treated as random variables, so the probability of the parameters given the observed data can be computed directly.
tronter
Why do we look at the probability of a given data set given the parameters, as opposed to the probability of the parameters given the data set?

So ##y(x) = y(x; a_1, \ldots, a_M)##
 


I'm not sure I understand your question. In "Probability" we always assume we are given some basic probability distribution, i.e. the parameters of some general distribution (such as the normal distribution). Problems in probability are then generally of the form: given the parameters of the probability distribution, determine the probability of various outcomes (i.e., the data set).

Probability is then used as the theory behind "Statistics", in which we do just the opposite: given an outcome (a data set), try to estimate the parameters of the probability distribution.
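A small worked example (my own illustration, not from the thread): suppose a coin lands heads with probability ##p## and is flipped 10 times. The probability question treats ##p## as known and asks about the outcome; the statistics question treats an observed number of heads ##k## as known and asks about ##p##:
$$P(k \text{ heads} \mid p) = \binom{10}{k} p^{k}(1-p)^{10-k}, \qquad \hat{p} = \frac{k}{10}.$$
The first expression answers the forward (probability) question; the second is the usual estimate answering the inverse (statistics) question.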
 


tronter said:
Why do we look at the probability of a given data set given the parameters, as opposed to the probability of the parameters given the data set?

Well, in maximum likelihood estimation, the reason is simple: the parameters are not random variables, so there is no probability distribution defined on them. Instead, we use the likelihood function, which has the same form as the probability of the data given the parameters but is treated as a function of the parameters.
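As a sketch of what that looks like (assuming independent observations ##x_1, \ldots, x_n## drawn from a density ##f(x;\theta)##; the thread itself does not fix a particular model):
$$L(\theta) = \prod_{i=1}^{n} f(x_i;\theta), \qquad \hat{\theta}_{\mathrm{ML}} = \arg\max_{\theta} L(\theta).$$
Here the data are held fixed and ##\theta## varies; ##L(\theta)## need not integrate to 1 over ##\theta##, which is why it is a likelihood rather than a probability distribution on the parameters.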

In Bayesian/MAP estimation, on the other hand, the parameters are viewed as random variables, and we do use the probability of the parameters given the data.
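Schematically, with ##D## denoting the data and ##p(\theta)## a prior that the Bayesian approach supplies (the choice of prior is an assumption, not something specified in the thread):
$$p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}, \qquad \hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta}\, p(D \mid \theta)\, p(\theta).$$
Maximum likelihood corresponds to dropping the prior term, which is why the two estimates coincide when the prior is flat.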
 