vanhees71 said: Ok, then what's in your view the difference between Bayesian and frequentist interpretations of probabilities, particularly the statement probabilities make sense for a single event?
You can go one better: Bayesian statistics allows us to have a probability for something with zero events. Of course, in that case, it's just a guess (although you can have a principled way of making such guesses). A single event provides a correction to your guess. More events provide better correction.
E.g., when the weather forecast says there's a 99% probability of snow tomorrow, and tomorrow it doesn't snow, does that tell you anything about the validity of the probability given by the forecast? I don't think so.
It doesn't tell you a lot, but it tells you something. If the forecast is for a 99% chance of snow, and it doesn't snow, then (for a Bayesian) the confidence that the forecast is accurate will decline slightly. If for 100 days in a row the weather service predicts a 99% chance of snow, and it doesn't snow on any of those days, then for the Bayesian the confidence that the reports are accurate will decline smoothly each time. It would never decline to zero, because there's always a nonzero chance that an accurate probabilistic prediction is wrong 100 times in a row, just as there is a nonzero chance that a fair coin will yield heads 100 times in a row.
The frequentist would (presumably) have some cutoff value for significance. The first few times the weather report proves wrong, he would say that no conclusion can be drawn, since the sample size was so small. Then at some point, he would conclude that he had a large enough sample to make a decision, and would decide that the reports are wrong.
Note that both the Bayesian and the frequentist make use of arbitrary parameters--the Bayesian has an arbitrary a priori notion of the probability of events, while the frequentist has an arbitrary cutoff for determining significance. The difference is that the Bayesian smoothly takes into account new data, while the frequentist withholds any judgement until some threshold amount of data is reached, then makes a discontinuous decision.
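The smooth Bayesian decline described above can be sketched numerically. This is a minimal toy model, not anything from the thread: assume two hypotheses, one where the forecaster is calibrated (snow really falls with probability 0.99 when predicted) and one where the true snow probability is only 0.5. The prior of 0.5 and both likelihoods are illustrative assumptions.

```python
# Toy Bayesian update of confidence that a "99% snow" forecast is accurate.
# Hypothetical model: under H_acc, snow falls with probability 0.99 when
# predicted; under H_bad, the true snow probability is only 0.5.
# All numbers here are illustrative assumptions.

p_acc = 0.5              # prior confidence that the forecaster is accurate
LIK_NO_SNOW_ACC = 0.01   # P(no snow | forecaster accurate)
LIK_NO_SNOW_BAD = 0.5    # P(no snow | forecaster inaccurate)

history = []
for day in range(100):   # 100 snowless days in a row, all forecast 99% snow
    numerator = LIK_NO_SNOW_ACC * p_acc
    denominator = numerator + LIK_NO_SNOW_BAD * (1 - p_acc)
    p_acc = numerator / denominator  # Bayes' theorem
    history.append(p_acc)

# Confidence declines smoothly each day but never reaches exactly zero.
print(history[0], history[-1])
```

Each failed prediction multiplies the odds in favor of "accurate" by the likelihood ratio 0.01/0.5, so the confidence falls steeply but continuously, and stays strictly positive, which is the contrast with the frequentist's all-or-nothing cutoff.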
It's just a probability based on experience (i.e., a large collection of weather data gathered over a long period) and weather models based on very fancy hydrodynamics run on big computers. The probabilistic statement can only be checked by evaluating a lot of data from weather observations.
Of course, there's Bayes's theorem on conditional probabilities, which has nothing to do with interpretations or statistics but is a theorem that can be proven within Kolmogorov's standard axiom system:
$$P(A|B) P(B) = P(B|A) P(A),$$
which is of course not the matter of any debate.
Bayes' formula is of course valid whether you are a Bayesian or a frequentist, but the difference is that the Bayesian associates probabilities with events that have never happened before, and so can make sense of any amount of data. So for the example we're discussing, there would be an a priori probability of snow, and an a priori probability of the weather forecaster being correct. With each day that the forecaster makes a prediction, and each day that it does or does not snow, those two probabilities are adjusted based on the data, according to Bayes' formula.
So Bayes' formula, together with a priori values for probabilities, allows the Bayesian to make probabilistic statements based on whatever data is available.
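The adjustment of an a priori snow probability by incoming data can be illustrated with a standard conjugate Beta-Bernoulli update. The prior parameters and the five-day observation record below are made-up assumptions for the sketch, not data from the discussion.

```python
from fractions import Fraction

# Conjugate Beta-Bernoulli sketch: start from an a priori Beta(a, b)
# belief about the daily snow probability and update it day by day.
# The uniform prior and the observations are illustrative assumptions.

a, b = 1, 1                       # Beta(1, 1): uniform prior, mean 1/2
observations = [1, 0, 0, 1, 0]    # 1 = snow, 0 = no snow (made-up data)

for snowed in observations:
    if snowed:
        a += 1                    # one more observed snow day
    else:
        b += 1                    # one more observed snowless day

posterior_mean = Fraction(a, a + b)   # posterior estimate of P(snow)
print(posterior_mean)                 # 3/7, from posterior Beta(3, 4)
```

The point is the one made above: the a priori value matters when data is scarce (zero events still yields the prior mean 1/2), and each additional day's observation corrects the estimate smoothly.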
I'm really unable to understand why there is such a hype about Qbism,
Well, I'm not defending Qbism. I was just talking about Bayesian versus frequentist views of probability. As I said previously, I don't think that Qbism gives any new insight into the meaning of quantum mechanics, whether or not you believe in Bayesian probability.