Will someone please explain Bayesian statistics?

In summary, this thread is about Bayesian statistics. Bayesian statistics involves assuming a prior before reaching any conclusion: rather than a single value for a parameter (a fair coin has a heads-probability of 0.5, a two-headed coin 1.0), an entire a priori distribution for the fairness parameter is supplied. Whether this always-assume-a-prior approach is better than the Frequentist or Fisherian approaches is debated in the posts below.
  • #1
moonman239
I don't know much algebra (I kind of skipped it and went to geometry), but I do sort of understand statistics. I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.
 
  • #2
moonman239 said:
I don't know much algebra (I kind of skipped it and went to geometry), but I do sort of understand statistics. I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.

How much do you understand about conditional probability? Do you understand the different variations of Bayes' Theorem and its implications?

This is the best place to start for understanding the Bayesian approach.
 
  • #3
moonman239 said:
I can certainly perform a z-test or a t-test and know when they should or shouldn't be used.

What does a t-test tell you? It quantifies the probability of the observed data given that you assume some idea about the data is correct (i.e. the "null hypothesis"). Does it tell you the probability that the null hypothesis is true given the observed data? No. And it doesn't tell you the probability that the null hypothesis is false given the observed data. There is a difference between "the probability of A given B" and "the probability of B given A".

Want to know the probability that some idea is true given the observed data? Then you need to use the Bayesian approach.
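As a sketch of the point above, here is a tiny discrete example (the numbers are hypothetical, not from the thread) showing that P(data | hypothesis) and P(hypothesis | data) can be very different:

```python
# Hypothetical scenario: 1% of coins in a jar are double-headed ("biased"),
# the rest are fair. We draw one coin and observe 5 heads in a row.

p_biased = 0.01                      # prior: P(biased)
p_heads5_given_biased = 1.0          # P(5 heads | biased)
p_heads5_given_fair = 0.5 ** 5       # P(5 heads | fair) = 1/32

# P(5 heads) by the law of total probability
p_heads5 = (p_biased * p_heads5_given_biased
            + (1 - p_biased) * p_heads5_given_fair)

# Bayes' theorem: P(biased | 5 heads)
p_biased_given_heads5 = p_biased * p_heads5_given_biased / p_heads5

print(p_heads5_given_biased)   # P(data | hypothesis) = 1.0
print(p_biased_given_heads5)   # P(hypothesis | data) ≈ 0.244
```

Even though the data are certain under the biased hypothesis, the posterior probability of that hypothesis is only about 24%, because the prior matters.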
 
  • #4
In Bayesian statistics you always have to assume something before reaching any conclusion; when you know nothing, you assume the least informative prior.

In Bayesian inference you would never get a probability of exactly 0 for anything. For example, imagine you toss a coin 5 million times and get 5 million heads. A Frequentist would estimate that, given the information available, the chance of tails appearing is zero, whereas a Bayesian would give a very small value but not zero (this is due to the prior).
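That contrast can be made concrete with a short sketch. Assuming a uniform Beta(1, 1) prior on the tails-probability (one common "least informative" choice), the Beta-Binomial conjugate update gives a small but nonzero posterior mean:

```python
heads, tails = 5_000_000, 0

# Frequentist (maximum-likelihood) estimate of P(tails)
freq_estimate = tails / (heads + tails)             # exactly 0

# Bayesian estimate with a uniform Beta(1, 1) prior:
# the posterior is Beta(tails + 1, heads + 1), whose mean is
bayes_estimate = (tails + 1) / (heads + tails + 2)  # small but nonzero

print(freq_estimate)   # 0.0
print(bayes_estimate)  # ~2e-07
```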

So in my opinion Bayesian... er... I'd rather not; Frequentist vs. Fisherian vs. Bayesian can easily turn a thread into a flame war. I will only say that for the experimental sciences it is better not to assume priors, though Bayesians will debate this to death anyway ;)
 
  • #5
Stephen Tashi said:
What does a t-test tell you? ... Does it tell you the probability the null hypothesis is true given the observed data? No. And it doesn't tell you the probability that the null hypothesis is false given the observed data. There is a difference between "the probability A given B" and the "probability of B given A".

Quite. Nor does it usually tell you the probability of the data given that the null hypothesis is false. That's because the null hypothesis is often a single point (the effectiveness of a drug, say) on a continuum.
 
  • #6
haruspex said:
Quite. Nor does it usually tell you the probability of the data given that the null hypothesis is false. That's because the null hypothesis is often a single point (the effectiveness of a drug, say) on a continuum.

If you are talking about single points on a continuum, this is out of context for the discussion, since for a continuous random variable any single point has probability zero, which means you have to supply an interval instead.

This does not change what Stephen Tashi said about the probabilities, nor does it depend on whether the probabilities come from countable or uncountable distributions: you treat them appropriately depending on which they are, but again this is not in any kind of contradiction with what has been said above.

If you have a continuous distribution, your probability for any hypothesis of this nature will always involve an interval rather than a point, provided the region of interest is continuous (you can also have mixed distributions, but that's another issue).
 
  • #7
chiro said:
If you are talking about single-points in terms of a continuum, this is out of context for the discussion since you will always get zero-probabilities for any single point for continuous random variables which results in having to supply an interval.
No, that's not what I was trying to say. An example will help:
Say we're tossing a coin and the null hypothesis is that the coin is fair. You compute the probability of the observed results on that basis. What if we suppose the coin is not fair? There's no way to compute the probability of the results unless we specify exactly HOW unfair. Fairness is just one point on a continuum of possibilities.
In Bayesian stats, an entire a priori distribution for the fairness parameter should be provided.
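A minimal sketch of supplying such a prior distribution, using the standard Beta prior over the coin's fairness parameter and a conjugate update (the prior shape and toss counts here are illustrative, not from the thread):

```python
# Beta(2, 2) prior over theta = P(heads): weakly concentrated around fairness
a, b = 2.0, 2.0
heads, tails = 7, 3      # hypothetical observed tosses

# Beta-Binomial conjugacy: the posterior is Beta(a + heads, b + tails)
a_post, b_post = a + heads, b + tails

posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)    # (2 + 7) / (2 + 7 + 2 + 3) = 9/14 ≈ 0.643
```

The point of the post above is exactly this: "the coin is unfair" is not one hypothesis but a whole family, and the Beta distribution assigns a weight to every possible value of the fairness parameter at once.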
 

1. What is Bayesian statistics?

Bayesian statistics is a mathematical framework for updating beliefs and making decisions based on new evidence. It allows for the incorporation of prior knowledge or beliefs into the analysis, making it a powerful tool for decision-making in uncertain situations.

2. How is Bayesian statistics different from traditional statistics?

Traditional statistics relies on frequentist methods, which treat unknown parameters as fixed values and do not incorporate prior beliefs. In contrast, Bayesian statistics allows for the use of prior knowledge and updates beliefs based on new evidence, resulting in conclusions that reflect both the prior information and the observed data.

3. How do you calculate probabilities in Bayesian statistics?

In Bayesian statistics, probabilities are calculated using Bayes' theorem, which is a mathematical formula that describes the relationship between prior beliefs, new evidence, and updated beliefs. This involves multiplying the prior probability by the likelihood of the new evidence, and then normalizing the result to get the posterior probability.
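The multiply-then-normalize update described above can be sketched in a few lines (the hypotheses and numbers are hypothetical, chosen only to illustrate the mechanics):

```python
# Two competing hypotheses about a coin, with hypothetical priors,
# after observing 3 heads in 3 tosses.
priors = {"fair": 0.9, "biased": 0.1}
likelihoods = {"fair": 0.5 ** 3,     # P(3 heads | fair coin)
               "biased": 0.9 ** 3}   # P(3 heads | coin with P(heads) = 0.9)

# Multiply each prior by its likelihood...
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}

# ...then normalize so the posteriors sum to 1.
total = sum(unnormalized.values())
posteriors = {h: v / total for h, v in unnormalized.items()}

print(posteriors)
```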

4. What are the main advantages of using Bayesian statistics?

Some of the main advantages of Bayesian statistics include its ability to incorporate prior knowledge, its flexibility in handling complex and uncertain data, and its ability to update beliefs as new evidence is obtained. It also allows for the use of intuitive and interpretable probability statements, making it useful for decision-making.

5. Are there any limitations to using Bayesian statistics?

One potential limitation of Bayesian statistics is the need for prior knowledge or beliefs, which may be difficult to obtain or may introduce bias into the analysis. Additionally, the calculation of probabilities can become computationally intensive in larger models. However, with appropriate methods and techniques, these limitations can be mitigated.
