Bayesian Statistics Explained: Why Guess Tails After Getting Heads?

  • Context: Undergrad
  • Thread starter: Winzer
  • Tags: Statistics

Discussion Overview

The discussion revolves around the interpretation of Bayesian statistics in the context of coin flips, specifically addressing the question of whether the outcome of a previous flip influences the probability of the next flip. Participants explore the differences between Bayesian and frequentist approaches to probability and inference.

Discussion Character

  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant suggests that after flipping a coin and getting heads, Bayesian statistics would imply a lean towards tails for the next flip, questioning the reasoning behind this.
  • Another participant challenges this view, asserting that both Bayesian and frequentist statistics treat coin flips as independent events, meaning prior outcomes do not influence future probabilities.
  • A participant explains that Bayesian statistics focuses on statistical likelihood rather than conditional probabilities, emphasizing that likelihoods can range over all positive real numbers.
  • Some participants mention that frequentists maintain a 50:50 chance for the next flip regardless of prior outcomes, while Bayesians might suspect a rigged coin after observing multiple heads.
  • There is a discussion about the assumptions underlying both Bayesian and frequentist methods, with one participant noting that Bayesian inference is particularly useful when the underlying assumptions are uncertain.
  • Another participant argues that deviations from expected outcomes can be detected by frequentist methods, suggesting that Bayesian inference may not offer significant advantages in cases with a well-accepted uniform distribution.
  • One participant clarifies that making inferences based on a single outcome (like one heads) would lead Bayesian inference to favor heads again, contradicting the initial suggestion that it would favor tails.

Areas of Agreement / Disagreement

Participants express disagreement regarding the influence of prior outcomes on future probabilities in Bayesian statistics. While some assert that Bayesian inference would lead to a bias towards tails after observing heads, others firmly state that both Bayesian and frequentist approaches consider coin flips as independent events.

Contextual Notes

There are unresolved assumptions regarding the nature of the coin flips and the underlying distributions. The discussion highlights the complexity and nuances of applying Bayesian versus frequentist methods in statistical reasoning.

Winzer
Suppose I have a regular quarter and I had to guess heads or tails. I have a 50% chance of getting heads or tails. After I flip it, say I get the result: heads. If it is to be flipped a second time, classically I would say I still have a 50% chance of getting heads or tails. However, from Bayesian statistics I was told that I should lean more towards tails.
Why?

Apparently Bayesian statistics accounts for subjective probability. Having been born and raised on classical stats, "subjective" and "probability" should not go together. Could someone give me some clear reasoning on Bayesian stats?
 
Are you sure that's what it says? If so, how would you calculate the probability of getting heads again immediately after getting heads?
 
Winzer said:
Suppose I have a regular quarter and I had to guess heads or tails. I have a 50% chance of getting heads or tails. After I flip it, say I get the result: heads. If it is to be flipped a second time, classically I would say I still have a 50% chance of getting heads or tails. However, from Bayesian statistics I was told that I should lean more towards tails.
Why?

No. Bayesian statistics does not say that prior flips of a coin influence the outcome of the next coin flip. These are assumed to be independent events under both frequentist and Bayesian inference. There's a lot of misunderstanding about this.

First, Bayes Theorem is a statement about conditional probability, not about what is called Bayesian statistics.

So-called Bayesian statistics is really about the concept of statistical likelihood. A likelihood (L) is derived from probabilities but is not itself a probability: it ranges over all positive real numbers, whereas probabilities range over the closed interval [0, 1]. In practice, ln L and likelihood ratios are used.

The important difference between frequentist inference and Bayesian inference is that in the former, the distribution is assumed and the probability of the data is estimated under this assumption. In Bayesian inference the likelihood of a distribution is estimated given the data. This means that maximum likelihood estimation (MLE) is robust for any underlying distribution whereas frequentist inference is not.
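As a minimal sketch of the likelihood view described above (the function names and the 7-heads-in-10-tosses data are hypothetical, not from the thread), the log-likelihood of a coin's bias p and its maximum-likelihood estimate can be written as:

```python
import math

def log_likelihood(p, heads, tails):
    """Log-likelihood ln L(p) of a bias p given observed heads/tails counts."""
    return heads * math.log(p) + tails * math.log(1 - p)

def mle(heads, tails):
    """The maximum-likelihood estimate of the bias is the sample proportion."""
    return heads / (heads + tails)

# Log likelihood ratio comparing a fair coin to the MLE after 7 heads in 10 tosses.
# It is negative, since no p can beat the MLE on the observed data.
p_hat = mle(7, 3)
log_lr = log_likelihood(0.5, 7, 3) - log_likelihood(p_hat, 7, 3)
```

Note that ln L here is a function of the parameter p with the data held fixed, which is exactly the reversal of roles the post describes.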
 
SW VandeCarr said:
No. Bayesian statistics does not say that prior flips of a coin influence the outcome of the next coin flip. These are assumed to be independent events under both frequentist and Bayesian inference. There's a lot of misunderstanding about this.
An unfair coin flip is often used as an example of Bayesian statistics.

The joke is that after 50 heads a frequentist still believes that the next flip has a 50:50 chance of being tails.
While a Bayesian at least starts to suspect he is dealing with a rigged coin!
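To make the joke concrete with numbers (assuming, hypothetically, a uniform Beta(1,1) prior on the coin's bias, which the post does not specify), a standard Beta-Bernoulli update after 50 straight heads looks like:

```python
# Beta-Bernoulli conjugate update: a Beta(a, b) prior plus h observed heads
# and t observed tails gives a Beta(a + h, b + t) posterior on the bias.
def posterior_predictive_heads(h, t, a=1.0, b=1.0):
    """P(next flip is heads) under the posterior, i.e. the posterior mean bias."""
    return (a + h) / (a + b + h + t)

# After 50 heads and 0 tails, the uniform prior is all but overruled:
print(posterior_predictive_heads(50, 0))  # 51/52, roughly 0.98
```

So the Bayesian in the joke is not predicting tails; the posterior leans heavily towards heads, which is exactly what "suspecting a rigged coin" means here.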
 
mgb_phys said:
An unfair coin flip is often used as an example of Bayesian statistics.

The joke is that after 50 heads a frequentist still believes that the next flip has a 50:50 chance of being tails.
While a Bayesian at least starts to suspect he is dealing with a rigged coin!

That is a joke. Unfortunately, many people believe it. With either type of inference, assumptions need to be made regarding independent events. However, if you don't make this assumption, the data is the basis for inference under MLE, not a presumed underlying distribution.
 
SW VandeCarr said:
That is a joke. Unfortunately many people believe it. With either type of inference, assumptions need to be made regarding independent events.
But the whole point of Bayesian is that it's for when you don't know the underlying assumptions - like most of science!
 
mgb_phys said:
But the whole point of Bayesian is that it's for when you don't know the underlying assumptions - like most of science!

I edited the post you're responding to. I agree, but in some cases you can get into trouble. In the coin flip example the presumed underlying distribution is a uniform p = 0.5 for H or T. This is widely accepted. Deviations from this are to be expected. How much deviation is acceptable? There is no particular advantage to using Bayesian inference here, as the frequentist will notice when the results deviate significantly from expectations. The real advantage of Bayesian inference is where the assumption of a particular underlying distribution is weak.

Edit: What I was particularly objecting to was the suggestion to the OP that if the first toss was heads, the Bayesian would say that the probability slightly favored tails on the second toss. This is simply wrong. If you were to make a Bayesian inference regarding n coin tosses, the inference would be based on the outcome of n tosses. If you were dumb enough to make an inference based on one toss that came out heads, Bayesian inference would favor heads again.
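For concreteness, under a uniform prior on the bias (a hypothetical modeling choice; the post does not name a prior), the posterior predictive after observed tosses is Laplace's rule of succession, which after a single head indeed favors heads, not tails:

```python
from fractions import Fraction

def rule_of_succession(heads, tosses):
    """Laplace's rule: P(heads next) = (heads + 1) / (tosses + 2),
    the posterior predictive under a uniform Beta(1,1) prior on the bias."""
    return Fraction(heads + 1, tosses + 2)

# One toss, one head: the Bayesian leans towards heads on the next toss.
print(rule_of_succession(1, 1))  # Fraction(2, 3)
```

With zero tosses the rule returns 1/2, recovering the classical answer, so the two views only diverge once data arrive.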
 
