B Regarding Bayesian analysis/inference/predictions

1. Aug 28, 2015

Javaxcore

I am forever hearing heady claims that Bayesian (something or other) can help people make better decisions and overall get closer to the truth of things.

However, I have yet to find an article, audio lecture, or anything else that really explains in layman's terms how to use or even understand this Bayesian stuff. Please help?

2. Aug 28, 2015

Staff: Mentor

3. Aug 28, 2015

Javaxcore

Thank you a lot. It is extremely hard to get your head around.

4. Aug 29, 2015

chiro

Bayesian probability and inference are basically conditional probability applied to probability models and statistics (that is, taking data and finding estimates, inferences, hypotheses, and models from that data).

In terms of the assumptions and origin of the conditional distributions - that is where it gets philosophical.

If you understand conditional probability very well then Bayesian statistics is actually not that difficult.

In Bayesian statistics, the constants of a distribution (say, mu and sigma in a Normal distribution) have their own distribution. It's an extra level of abstraction, and what often happens in Bayesian statistics is that you try to find a prior distribution that makes sense both from a data point of view and from a deductive (strictly logical) one. The methods are derived to get estimators for various statistics given various distributions, and you can also look at MCMC methods, which can be used to simulate really complex distributions.
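The idea that a parameter gets its own distribution can be sketched with a coin-flip example (my numbers, not from the thread): the unknown bias p of a coin gets a Beta prior, and because the Beta prior is conjugate to the Binomial likelihood, the posterior is again a Beta whose parameters are simply incremented by the observed counts.

```python
# Sketch: the parameter p of a Bernoulli/Binomial model has its own
# distribution, Beta(a, b). After observing some heads and tails, the
# posterior is Beta(a + heads, b + tails) -- no MCMC needed in this
# conjugate case. Numbers below are purely illustrative.

def beta_binomial_update(a, b, heads, tails):
    """Return the posterior Beta parameters after observing the data."""
    return a + heads, b + tails

# Start with a uniform prior Beta(1, 1), then observe 7 heads and 3 tails.
a_post, b_post = beta_binomial_update(1, 1, heads=7, tails=3)

# The mean of Beta(a, b) is a / (a + b), so the posterior point estimate
# for the coin's bias is pulled between the prior mean (0.5) and the
# observed frequency (0.7).
posterior_mean = a_post / (a_post + b_post)
print(a_post, b_post, posterior_mean)  # 8 4 0.6666666666666666
```

For non-conjugate priors there is usually no closed form like this, which is exactly where the MCMC methods mentioned above come in.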

With conditional distributions, you have to look at how one event changes another when they are not independent. If you can exhaust the state space of a random variable and do the calculations to get the conditional distribution, then you can do a lot of the problems, because that's all it is - conditional probability applied to statistics.
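"Exhausting the state space" can be illustrated with the classic medical-test calculation (my own illustrative numbers): list every (condition, test result) combination with its joint probability, then read the conditional probability off the totals.

```python
# Sketch: enumerate the whole state space of (condition, test result),
# assign each state its joint probability, and compute a conditional
# probability directly from the table. All numbers are illustrative.

p_disease = 0.01            # P(D): base rate of the condition
p_pos_given_disease = 0.95  # P(+ | D): test sensitivity
p_pos_given_healthy = 0.05  # P(+ | not D): false-positive rate

# Joint probabilities over the four exhaustive, mutually exclusive states.
joint = {
    ("disease", "+"): p_disease * p_pos_given_disease,
    ("disease", "-"): p_disease * (1 - p_pos_given_disease),
    ("healthy", "+"): (1 - p_disease) * p_pos_given_healthy,
    ("healthy", "-"): (1 - p_disease) * (1 - p_pos_given_healthy),
}

# Conditional probability: P(disease | +) = P(disease and +) / P(+).
p_pos = joint[("disease", "+")] + joint[("healthy", "+")]
p_disease_given_pos = joint[("disease", "+")] / p_pos
print(round(p_disease_given_pos, 3))  # 0.161
```

Even with a 95%-sensitive test, a positive result here only gives about a 16% chance of disease, because the low base rate dominates - exactly the kind of result that makes conditional probability worth doing carefully.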

5. Sep 2, 2015

gill1109

The core of Bayesian statistics is Bayes' rule, which is an elementary fact from elementary probability theory. Posterior odds = prior odds times likelihood ratio. Let H and K be two hypotheses, and D be some data. Then notice that
P(H | D) / P(K | D) = P(H) / P(D) * P(D | H) / P(D | K)
P(H) / P(K) is called the prior odds: your initial relative degree of belief in H versus K
P(D | H) / P(D | K) is called the likelihood ratio or Bayes factor: the ratio between the chances of getting data D, under the two hypotheses H and K
P(H | D) / P(K | D) is called the posterior odds: your final relative degree of belief in H versus K, after seeing the data D.

If you don't like odds (ratios of probabilities), we can state it in terms of straight probabilities as "posterior is proportional to prior times likelihood".

So for instance if you start by believing it 100 times more likely that H is true than that K is true, but then you observe data D which is 1 000 000 times more likely if K is true than if H is true, then you should revise your beliefs about H and K, and the correct revision is: you now believe it 10 000 times more likely that K is true than that H is true.

6. Sep 2, 2015