gentzen
Science Advisor
Gold Member
bhobba said:
Isn't this the same issue in any probabilistic prediction? We all know the frequentist interpretation has foundational issues:
https://math.ucr.edu/home/baez/bayes.html
But applied mathematicians use it all the time, like taking dx etc., as tiny changes in x.

The Bayesian interpretation (both objective and subjective) has certain issues too. Luckily, those are different issues from the ones that plague the frequentist interpretation. One interesting issue mentioned in your link is that despite your prior being subjective, it can still be wrong:
Now there's a whole sub-thread stemming from this plaint by Daryl McCullough:

I don't want to be told how my probabilistic guesstimates are supposed to change with time.

This has already been answered well: it's the natural probabilistic way they ought to change, if you have any probability models at all. But I can see how you might still feel grumpy about this. Why shouldn't you go back and change your prior if it looks like the subsequent data are making it look *really* stupid?
This is tough to answer. For one thing, if your prior was so silly as to have zero probabilities in it (or zero-density intervals, in the continuous case), then you may *have* to. F'rinstance, if you declared that there was *zero* prior chance of a six turning up on a dice, but then a six *did* turn up; well, you're completely stuffed! You just have to go back and start again without the silly zeros. And it'd be much the same if you had the prior not quite zero but about 10^(-35). It'd still take billions of sixes turning up before you'd posteriorly admit there was a reasonable chance of getting some sixes. Clearly that was a silly prior. (Not *wrong*, note, just silly; even by your own standards.)

Note however that the attempt to solve this issue by replacing the word *wrong* by the word *silly* is ridiculous. It makes the Bayesian interpretation "not even wrong" for no good reason. Even worse, it makes you blind to actual problems of the Bayesian interpretation, namely that there are situations where you should not assign definite probabilities with too high a precision, despite the apparent catch-all fallback that they are just subjective personal judgments.
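To make the arithmetic behind that 10^(-35) example concrete, here is a minimal sketch of a two-hypothesis Bayesian update. The model and all the numbers (in particular the "misreading" rate under H0) are my own toy assumptions, not from the quote; with a less discriminating alternative hypothesis the required number of sixes grows accordingly.

```python
# Toy Bayesian update showing why a ~1e-35 prior is practically irreversible.
# Two hypotheses (toy assumptions, not from the quoted text):
#   H1: the die can show sixes, P(six | H1) = 1/6
#   H0: sixes are essentially impossible; any observed "six" is a
#       misreading, with assumed rate P(six | H0) = 1e-4
import math

prior_H1 = 1e-35                    # near-zero prior that sixes can occur
p_six_H1 = 1.0 / 6.0
p_six_H0 = 1e-4                     # assumed misreading rate under H0

# Each observed six multiplies the odds for H1 by this Bayes factor:
bayes_factor = p_six_H1 / p_six_H0  # about 1667 per six

# Posterior odds after n sixes: prior_odds * bayes_factor**n.
# Find the first n where the posterior for H1 exceeds 1/2 (odds > 1):
prior_odds = prior_H1 / (1.0 - prior_H1)
n_needed = math.ceil(-math.log(prior_odds) / math.log(bayes_factor))
print(f"sixes needed before P(H1 | data) exceeds 1/2: {n_needed}")

# The posterior after a handful of sixes is still negligible:
for n in (1, 5, n_needed):
    odds = prior_odds * bayes_factor ** n
    posterior = odds / (1.0 + odds)
    print(f"after {n:2d} sixes: P(H1 | data) = {posterior:.3e}")
```

The point survives the toy numbers: posterior odds are prior odds times the accumulated likelihood ratio, so a 10^(-35) prior buries H1 until the evidence has clawed back all 35 orders of magnitude.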
This blindness to actual problems also plagues QBism, when it just stops after a statement like:

That probability-1 assignments are personal judgments, like any other probability assignments, is essential to the coherence of QBism.

Staying silent is a suboptimal way of dealing with the question of what to do when you turn out to have been wrong. Just because Bruno de Finetti didn't address this question is no good excuse for ignoring it forever.
The work of A. Neumaier contains important ideas and elaborations on how to overcome critical issues of the frequentist interpretation. One of those critical issues is that it doesn't apply to single systems, only to ensembles. And part of the solution is to be realistic about the precision of magnitudes (including probabilities) that is appropriate for the concrete situation you want to talk about.
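One way to read "realistic precision" in plain statistical terms (this reading, and the rounding convention below, are mine, not Neumaier's formalism): a probability backed by N trials is only resolved down to its standard error, so quoting more digits than that supports is spurious precision.

```python
# Minimal sketch: report a relative frequency only to the precision
# that the size of the ensemble behind it can actually resolve.
import math

def frequency_with_precision(successes: int, trials: int) -> str:
    # Assumes 0 < successes < trials, so the standard error is nonzero.
    p_hat = successes / trials
    std_err = math.sqrt(p_hat * (1.0 - p_hat) / trials)
    # Keep digits only down to the decimal place where the standard
    # error's leading digit sits; anything beyond that is noise.
    digits = max(0, -math.floor(math.log10(std_err)))
    return f"{p_hat:.{digits}f} +/- {std_err:.1g} (N = {trials})"

print(frequency_with_precision(17, 100))      # 0.17 +/- 0.04 (N = 100)
print(frequency_with_precision(1700, 10000))  # 0.170 +/- 0.004 (N = 10000)
```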
Another issue mentioned in your link is improper priors:

concept of an "improper prior" (by which I guess you must mean an infinite measure, ...) ... You can get into big trouble with them (e.g., the paradoxes where you "pick a random real number"), but there still seem to be cases (like the above) where one wants to think of them as a kind of prior.

A. Neumaier discussed this sort of trouble in older material (his still unpublished book "Classical and quantum mechanics via Lie algebras", now "Algebraic quantum physics"). In the end, this is the place where I see the non-intuitiveness emerge again in his interpretation. In a certain sense, I believe that he knows this, but he has polished it away in newer material for the moment. I am really curious whether his unpublished book will ever appear, and whether that material will stay in it. In a certain sense, though, it would be unfair to hold this against his interpretation, because the issue has been there all along for the Bayesian interpretation, which coped with it by simply staying silent. And it is a technical issue that is difficult to understand. To get some feeling for how technical, see for example:
Consistency and strong inconsistency of group-invariant predictive inferences (1999) by Morris L. Eaton and William D. Sudderth
Dutch book against some 'objective' priors (2004) by Morris L. Eaton and David A. Freedman
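For a concrete, textbook-level illustration of why one wants such objects at all (the model choice here is mine), here is a minimal sketch: a flat "prior" on a location parameter is an infinite measure, yet combined with a Gaussian likelihood it yields a perfectly proper posterior.

```python
# Standard example of an improper prior producing a proper posterior.
import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma, n = 3.0, 1.0, 50
data = rng.normal(true_mu, sigma, size=n)

# "Prior": p(mu) proportional to 1 on all of R.  This is an infinite
# measure, not a probability distribution -- it cannot be normalized
# or sampled from, which is exactly what makes it "improper".
# Posterior: p(mu | data) proportional to likelihood * 1.  For an iid
# Gaussian likelihood with known sigma this is N(mean(data), sigma^2/n),
# a perfectly proper (normalizable) distribution.
post_mean = data.mean()
post_std = sigma / np.sqrt(n)
print(f"posterior for mu: N({post_mean:.3f}, {post_std:.3f}^2)")
```

The trouble analyzed in the two papers above begins when such formal posteriors are treated as if they came from a genuine probability model; their titles already indicate the question: when do group-invariant improper priors give coherent predictive inferences, and when can they be Dutch-booked?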