Will someone please explain Bayesian statistics?

moonman239
I don't know much algebra (I kind of skipped it and went to geometry), but I do sort of understand statistics. I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.
 
moonman239 said:
I don't know much algebra (I kind of skipped it and went to geometry), but I do sort of understand statistics. I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.

How much do you understand about conditional probability? Do you understand the different variations of Bayes Theorem and its implications?

This is the best place to start for understanding the Bayesian approach.
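To make the conditional-probability starting point concrete, here is a minimal sketch of Bayes' theorem on the classic diagnostic-test example. All the numbers are my own toy values for illustration, not anything from this thread:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Toy diagnostic-test example (all numbers are made up for illustration):
p_disease = 0.01              # prior: P(disease)
p_pos_given_disease = 0.95    # sensitivity: P(+ | disease)
p_pos_given_healthy = 0.05    # false-positive rate: P(+ | healthy)

# Law of total probability: P(+) over both ways a positive test can happen
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of disease given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # about 0.161
```

Even with a 95% sensitive test, the posterior is only about 16%, because the prior P(disease) is small. That gap between P(+ | disease) and P(disease | +) is exactly the distinction the rest of this thread turns on.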
 
moonman239 said:
I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.

What does a t-test tell you? It quantifies the probability of the observed data given that you assume some idea about the data is correct (i.e. the "null hypothesis"). Does it tell you the probability that the null hypothesis is true given the observed data? No. And it doesn't tell you the probability that the null hypothesis is false given the observed data. There is a difference between "the probability of A given B" and "the probability of B given A".

Want to know the probability that some idea is true given the observed data? Then you need to use the Bayesian approach.
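To see the two quantities side by side, here is a toy setup of my own (not from the thread): two simple hypotheses about a coin, so that both the frequentist p-value and a Bayesian posterior can be computed exactly.

```python
from math import comb

# Observe 8 heads in 10 tosses. Two simple hypotheses (toy setup):
# H0: coin is fair (p = 0.5); H1: coin is biased (p = 0.7); prior 50/50.
n, k = 10, 8

def binom_pmf(k, n, p):
    """Binomial probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Frequentist p-value: P(8 or more heads | H0) -- a statement about the DATA.
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# Bayesian posterior: P(H0 | data) -- a statement about the HYPOTHESIS.
like_h0 = binom_pmf(k, n, 0.5)
like_h1 = binom_pmf(k, n, 0.7)
post_h0 = 0.5 * like_h0 / (0.5 * like_h0 + 0.5 * like_h1)

print(round(p_value, 4), round(post_h0, 4))
```

The two numbers come out different (roughly 0.055 vs 0.158 here) because they answer different questions: the p-value conditions on H0 being true, the posterior conditions on the data. Note the posterior only exists because we supplied a prior over the hypotheses.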
 
In Bayesian statistics you always have to assume something before reaching any conclusion; when you know nothing, you assume the least informative prior.

In Bayesian inference you never get a probability of exactly zero for anything. For example, imagine you toss a coin 5 million times and get 5 million heads. A frequentist would estimate, given the information available, that the chance of tails appearing is zero, whereas a Bayesian would give you a very small value, but not zero (this is due to the prior).
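This coin example can be written out in a few lines. The sketch below assumes a uniform Beta(1, 1) prior, a common (though not the only) "uninformative" choice; with a Beta prior the posterior is again a Beta distribution, whose mean is Laplace's rule of succession:

```python
# Frequentist vs Bayesian estimate of P(tails) after n heads and 0 tails.
n_heads, n_tails = 5_000_000, 0

# Frequentist (maximum likelihood) estimate: the observed frequency.
mle_tails = n_tails / (n_heads + n_tails)  # exactly 0.0

# Bayesian: with a uniform Beta(1, 1) prior, the posterior is
# Beta(n_tails + 1, n_heads + 1), whose mean is never exactly zero.
bayes_tails = (n_tails + 1) / (n_heads + n_tails + 2)

print(mle_tails, bayes_tails)  # 0.0 vs roughly 2e-7
```

The Bayesian estimate is tiny but strictly positive, exactly as described above: the prior keeps any outcome from being ruled out with certainty.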

So in my opinion Bayesian... er... I'd rather not; frequentist vs Fisherian vs Bayesian debates can easily turn a thread into a flame war. I will only say that for the experimental sciences it is better not to assume priors, though Bayesians will debate this to death anyway ;)
 
Stephen Tashi said:
What does a t-test tell you? ... Does it tell you the probability that the null hypothesis is true given the observed data? No. And it doesn't tell you the probability that the null hypothesis is false given the observed data. There is a difference between "the probability of A given B" and "the probability of B given A".

Quite. Nor does it usually tell you the probability of the data given that the null hypothesis is false. That's because the null hypothesis is often a single point (the effectiveness of a drug, say) on a continuum.
 
haruspex said:
Quite. Nor does it usually tell you the probability of the data given that the null hypothesis is false. That's because the null hypothesis is often a single point (the effectiveness of a drug, say) on a continuum.

If you are talking about single points on a continuum, this is out of context for the discussion, since for a continuous random variable any single point has probability zero, which means you have to supply an interval.

This does not change what Stephen Tashi has said about the probabilities, and it also doesn't matter whether the probabilities come from countable or uncountable distributions: you treat them in the appropriate way in each case, but again this is not in any kind of contradiction with what has been said above.

If you have a continuous distribution, a probability statement about any hypothesis of this kind will always involve an interval rather than a point, provided the region of interest is continuous (you can also have mixed distributions, but that's another issue).
 
chiro said:
If you are talking about single points on a continuum, this is out of context for the discussion, since for a continuous random variable any single point has probability zero, which means you have to supply an interval.
No, that's not what I was trying to say. An example will help:
Say we're tossing a coin and the null hypothesis is that the coin is fair. You compute the probability of the observed results on that basis. What if we suppose the coin is not fair? There's no way to compute the probability of the results unless we specify exactly HOW unfair. Fairness is just one point on a continuum of possibilities.
In Bayesian stats, a full prior distribution for the fairness parameter should be provided.
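Here is a minimal sketch of what such a prior over the fairness parameter buys you, using a grid approximation with a uniform prior and toy data of my own (8 heads in 10 tosses):

```python
from math import comb

# Grid approximation of the posterior over the coin's heads-probability p,
# with a uniform prior. Toy data: 8 heads in 10 tosses.
n, k = 10, 8
grid = [i / 1000 for i in range(1001)]   # candidate values of p in [0, 1]
prior = [1.0] * len(grid)                # uniform prior over the grid
like = [comb(n, k) * p**k * (1 - p)**(n - k) for p in grid]

unnorm = [pr * li for pr, li in zip(prior, like)]
total = sum(unnorm)
post = [u / total for u in unnorm]       # normalized posterior over the grid

# Unlike the point null "p = 0.5", the posterior can answer interval
# questions, e.g. P(coin is nearly fair, 0.45 <= p <= 0.55 | data):
p_near_fair = sum(post[i] for i, p in enumerate(grid) if 0.45 <= p <= 0.55)
print(round(p_near_fair, 3))
```

Because the prior covers the whole continuum of possible biases, the posterior assigns probability to entire regions of "how unfair" the coin might be, rather than requiring a single alternative value to be specified in advance.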
 