Will someone please explain Bayesian statistics?


Discussion Overview

The discussion centers around Bayesian statistics, particularly its principles, comparisons with frequentist approaches, and the implications of conditional probability. Participants explore the foundational concepts of Bayesian inference, the role of priors, and the interpretation of statistical tests like t-tests and z-tests.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants express familiarity with z-tests and t-tests, noting their limitations in conveying the probability of hypotheses given observed data.
  • There is a discussion on the difference between "the probability A given B" and "the probability of B given A," emphasizing the need for Bayesian methods to assess the probability of hypotheses based on observed data.
  • One participant mentions that Bayesian statistics requires assumptions about prior distributions, which can lead to non-zero probabilities for events that frequentist approaches might deem impossible.
  • Another participant highlights that the null hypothesis often represents a single point on a continuum, complicating the interpretation of probabilities in hypothesis testing.
  • There is a mention of the necessity to provide an entire a priori distribution for parameters in Bayesian statistics, contrasting with point estimates in frequentist methods.

Areas of Agreement / Disagreement

Participants express differing views on the use of priors in Bayesian statistics and the implications of hypothesis testing. There is no consensus on the superiority of Bayesian versus frequentist approaches, and the discussion remains unresolved regarding the best practices in statistical inference.

Contextual Notes

Participants note that the interpretation of probabilities can vary significantly depending on whether one is using a Bayesian or frequentist framework, and that assumptions about distributions play a critical role in these interpretations.

moonman239
I don't know much algebra (I kind of skipped it and went to geometry), but I do sort of understand statistics. I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.
 
moonman239 said:
I don't know much algebra (I kind of skipped it and went to geometry), but I do sort of understand statistics. I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.

How much do you understand about conditional probability? Do you understand the different variations of Bayes' theorem and its implications?

This is the best place to start for understanding the Bayesian approach.
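As a starting point, conditional probability can be checked directly against a toy contingency table. This is a minimal sketch with hypothetical counts, just to make the definitions concrete:

```python
# Toy conditional-probability check (hypothetical counts): tabulate
# joint outcomes and read off P(A|B) = P(A and B) / P(B).
counts = {                      # (A, B) -> number of observations
    (True, True): 30, (True, False): 10,
    (False, True): 20, (False, False): 40,
}
total = sum(counts.values())
p_A_and_B = counts[(True, True)] / total
p_B = (counts[(True, True)] + counts[(False, True)]) / total
p_A = (counts[(True, True)] + counts[(True, False)]) / total

p_A_given_B = p_A_and_B / p_B   # condition on B
p_B_given_A = p_A_and_B / p_A   # condition on A: a different number
print(round(p_A_given_B, 2), round(p_B_given_A, 2))  # 0.6 0.75
```

Note that P(A|B) and P(B|A) come out different even though both are built from the same joint counts; that asymmetry is the theme of the rest of the thread.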
 
moonman239 said:
I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.

What does a t-test tell you? It quantifies the probability of the observed data given that you assume some idea about the data is correct (i.e. the "null hypothesis"). Does it tell you the probability the null hypothesis is true given the observed data? No. And it doesn't tell you the probability that the null hypothesis is false given the observed data. There is a difference between "the probability of A given B" and "the probability of B given A".

Want to know the probability that some idea is true given the observed data? Then you need to use the Bayesian approach.
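To put hypothetical numbers on that distinction: even if the data are quite improbable under the null, the probability of the null given the data can be very different, because Bayes' theorem also needs a prior and the likelihood under the alternative. A minimal sketch (all numbers invented for illustration):

```python
# Why P(data | H0) is not P(H0 | data): Bayes' theorem needs a prior
# and the marginal likelihood, neither of which a t-test supplies.
p_H0 = 0.5                      # assumed prior probability of the null
p_data_given_H0 = 0.03          # likelihood under the null (p-value-like)
p_data_given_alt = 0.40         # likelihood under the alternative

# Marginal likelihood of the data, then Bayes' theorem.
p_data = p_data_given_H0 * p_H0 + p_data_given_alt * (1 - p_H0)
p_H0_given_data = p_data_given_H0 * p_H0 / p_data
print(round(p_H0_given_data, 4))  # 0.0698, not 0.03
```

With these particular numbers the two quantities happen to be close; with a different prior or a weaker alternative they can diverge wildly.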
 
In Bayesian statistics you always have to assume something before reaching any conclusion; when you know nothing, you assume the least informative prior.

In Bayesian inference you would never get a probability of exactly 0 for anything. For example, imagine you toss a coin 5 million times and get 5 million heads: a frequentist would estimate that, given the information available, the chance of tails appearing is zero, whereas a Bayesian will give you a very small value, but not zero (this is due to the prior).

So in my opinion Bayesian... er... I'd rather not; frequentist vs Fisherian vs Bayesian can easily flame a thread. I will only say that for the experimental sciences it is better not to assume priors, though Bayesians will debate this to death anyway ;)
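The coin example above can be sketched with a conjugate Beta-Binomial update. Assuming a uniform Beta(1, 1) prior (one common choice of uninformative prior), the posterior after n heads and no tails is Beta(1 + n, 1), whose mean assigns a tiny but strictly positive probability to tails:

```python
# Sketch of the all-heads coin example, assuming a uniform Beta(1, 1)
# prior over the heads probability. After n heads and 0 tails the
# posterior is Beta(1 + n, 1); its mean is (1 + n) / (n + 2).
n = 5_000_000
alpha, beta = 1 + n, 1            # posterior parameters after n heads
p_heads = alpha / (alpha + beta)  # posterior mean probability of heads
p_tails = 1 - p_heads             # about 2e-7: tiny, but never zero
print(p_tails > 0)                # True
```

The frequentist maximum-likelihood estimate from the same data would be exactly 5,000,000 / 5,000,000 = 1 for heads, i.e. zero for tails, which is the contrast the post describes.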
 
Stephen Tashi said:
What does a t-test tell you? ... Does it tell you the probability the null hypothesis is true given the observed data? No. And it doesn't tell you the probability that the null hypothesis is false given the observed data. There is a difference between "the probability of A given B" and "the probability of B given A".

Quite. Nor does it usually tell you the probability of the data given that the null hypothesis is false. That's because the null hypothesis is often a single point (the effectiveness of a drug, say) on a continuum.
 
haruspex said:
Quite. Nor does it usually tell you the probability of the data given that the null hypothesis is false. That's because the null hypothesis is often a single point (the effectiveness of a drug, say) on a continuum.

If you are talking about single points on a continuum, this is out of context for the discussion, since any single point of a continuous random variable has probability zero, which means you have to supply an interval.

This does not change what Stephen Tashi has said about the probabilities, and it doesn't matter whether the probabilities come from countable or uncountable distributions: you treat each in the appropriate way, but again this is in no contradiction whatsoever with what has been said above.

If you have a continuous distribution, the probability for any hypothesis of this nature will always involve an interval and not a point, provided the region of interest is continuous (you can also have mixed distributions, but that's another issue).
 
chiro said:
If you are talking about single points on a continuum, this is out of context for the discussion, since any single point of a continuous random variable has probability zero, which means you have to supply an interval.
No, that's not what I was trying to say. An example will help:
Say we're tossing a coin and the null hypothesis is that the coin is fair. You compute the probability of the observed results on that basis. What if we suppose the coin is not fair? There's no way to compute the probability of the results unless we specify exactly HOW unfair. Fairness is just one point on a continuum of possibilities.
In Bayesian stats, an entire a priori distribution for the fairness parameter should be provided.
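The last point can be made concrete with a simple grid approximation: place a full prior over the fairness parameter theta, multiply by the binomial likelihood, and normalize. The numbers here are hypothetical (60 heads in 100 tosses, uniform grid prior), chosen only to illustrate treating fairness as a distribution rather than the single point theta = 0.5:

```python
from math import comb

# Grid-approximated posterior over the coin's fairness parameter theta,
# rather than testing the single point theta = 0.5.
heads, n = 60, 100                            # hypothetical data
grid = [i / 200 for i in range(1, 200)]       # theta values in (0, 1)
prior = [1 / len(grid)] * len(grid)           # uniform prior over theta

# Binomial likelihood of the data at each grid point, then Bayes.
like = [comb(n, heads) * t**heads * (1 - t)**(n - heads) for t in grid]
post_unnorm = [p * l for p, l in zip(prior, like)]
z = sum(post_unnorm)
posterior = [w / z for w in post_unnorm]

# Posterior probability that the coin is "nearly fair":
# theta within 0.05 of 0.5 (an interval, as noted earlier in the thread).
p_near_fair = sum(p for t, p in zip(grid, posterior)
                  if abs(t - 0.5) <= 0.05)
print(round(p_near_fair, 3))
```

Note the question being answered is about an interval of theta values, not a point, which ties together the two preceding posts.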
 
