# Bayesian Statistics?

Suppose I have a regular quarter and I have to guess heads or tails. I have a 50% chance of getting either. Say I flip it and the result is heads. If it is flipped a second time, classically I would say I still have a 50% chance of getting heads or tails. However, I was told that under Bayesian statistics I should lean more towards tails.
Why?

Apparently Bayesian statistics accounts for subjective probability. Having been born and raised on classical stats, I feel "subjective" and "probability" should not go together. Could someone give me some clear reasoning on Bayesian stats?

MathematicalPhysicist
Gold Member
Are you sure that's what it says?
Then how do you calculate the probability that, after getting heads, you get heads again immediately afterwards?

Suppose I have a regular quarter and I have to guess heads or tails. I have a 50% chance of getting either. Say I flip it and the result is heads. If it is flipped a second time, classically I would say I still have a 50% chance of getting heads or tails. However, I was told that under Bayesian statistics I should lean more towards tails.
Why?

No. Bayesian statistics does not say that prior flips of a coin influence the outcome of the next coin flip. The flips are assumed to be independent events under both frequentist and Bayesian inference. There's a lot of misunderstanding about this.

First, Bayes' theorem is a statement about conditional probability, not about what is called Bayesian statistics.
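To illustrate the conditional-probability point, here is a hypothetical worked example (the two-coin setup is my own, not anything from the posts above): a bag holds one fair coin and one two-headed coin; you draw one at random, flip it, and see heads. Bayes' theorem tells you how likely it is you drew the two-headed coin.

```python
# Hypothetical example: bag with one fair coin and one two-headed coin.
# Draw one at random, flip, observe heads. P(two-headed | heads) = ?

prior_two_headed = 0.5          # P(two-headed): either coin equally likely
prior_fair = 0.5                # P(fair)

p_heads_given_two_headed = 1.0  # a two-headed coin always shows heads
p_heads_given_fair = 0.5        # a fair coin shows heads half the time

# Law of total probability: P(heads)
p_heads = (p_heads_given_two_headed * prior_two_headed
           + p_heads_given_fair * prior_fair)

# Bayes' theorem: P(two-headed | heads)
posterior = p_heads_given_two_headed * prior_two_headed / p_heads
print(posterior)  # 2/3 ≈ 0.667
```

Note this is pure conditional probability; no "Bayesian philosophy" is needed to get the 2/3.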

So-called Bayesian statistics is really about the concept of statistical likelihood. A likelihood L is derived from probabilities but is not itself a probability: it ranges over all positive real numbers, whereas probabilities lie in the closed interval [0, 1]. In practice, ln L and likelihood ratios are used.

The important difference between frequentist and Bayesian inference is that in the former, a distribution is assumed and the probability of the data is estimated under that assumption, whereas in Bayesian inference the likelihood of a distribution is estimated given the data. This is why maximum likelihood estimation (MLE) is robust to the choice of underlying distribution in a way that frequentist inference is not.
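A minimal sketch of "likelihood of a parameter given the data" (the flip data here is made up for illustration): for observed Bernoulli flips, compute ln L(p) for a few candidate biases p and see which one the data support, rather than assuming p = 0.5 up front.

```python
import math

# Made-up data: 1 = heads, 0 = tails
flips = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
heads = sum(flips)
tails = len(flips) - heads

def log_likelihood(p):
    # ln L(p) for independent Bernoulli flips with bias p
    return heads * math.log(p) + tails * math.log(1 - p)

# With 8 heads in 10 flips, p = 0.8 has the highest log-likelihood
for p in (0.3, 0.5, 0.8):
    print(p, log_likelihood(p))
```

The likelihood is a function of the parameter p with the data held fixed, which is exactly the reversal of viewpoint described above.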

mgb_phys
Homework Helper
No. Bayesian statistics does not say that prior flips of a coin influence the outcome of the next coin flip. The flips are assumed to be independent events under both frequentist and Bayesian inference. There's a lot of misunderstanding about this.
An unfair coin flip is often used as an example of Bayesian statistics.

The joke is that after 50 heads, a frequentist still believes the next flip has a 50:50 chance of being tails,
while a Bayesian at least starts to suspect he is dealing with a rigged coin!

An unfair coin flip is often used as an example of Bayesian statistics.

The joke is that after 50 heads a frequentist still believes that the next flip has a 50:50 chance of being tails.
While a Bayesian at least starts to suspect he is dealing with a rigged coin!

That is a joke; unfortunately, many people believe it. With either type of inference, assumptions need to be made about which events are independent. However, if you don't presume an underlying distribution, the data itself is the basis for inference under MLE.
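As a small sketch of "the data is the basis for inference" (data again made up): for Bernoulli flips, the maximum likelihood estimate of the bias is simply the sample proportion of heads, not a presumed 0.5. A quick grid check confirms the sample proportion beats nearby candidates.

```python
import math

flips = [1, 0, 1, 1, 1, 0, 1, 1]   # made-up data: 1 = heads, 0 = tails
p_hat = sum(flips) / len(flips)    # MLE for a Bernoulli parameter
print(p_hat)  # 0.75

def log_lik(p):
    h = sum(flips)
    t = len(flips) - h
    return h * math.log(p) + t * math.log(1 - p)

# The sample proportion maximizes the likelihood over these candidates
assert all(log_lik(p_hat) >= log_lik(p) for p in (0.5, 0.6, 0.9))
```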
