Quantum Bayesian Interpretation of QM

The Quantum Bayesian interpretation, or QBism, posits that the quantum wave function is not a representation of physical reality but rather a mental abstraction. Critics, such as Chris Fields, argue that QBism fails to distinguish between observers and observed systems, complicating its operational implementation and challenging the notion of subjective probabilities. The discussion highlights epistemological issues, suggesting that interpretations of quantum mechanics often reflect personal beliefs rather than objective truths. Some participants draw parallels between QBism and traditional ensemble interpretations of probability, questioning its novelty and effectiveness. Overall, while QBism offers a unique perspective, its implications for understanding quantum mechanics remain contentious.
  • #31
stevendaryl said:
"Infinitesimally close" is defined in terms of probability, but a notion of probability that is NOT relative frequency. So frequencies cannot be the core definition of probability.

I don't have any complaints with the use of frequencies as "frequentists" use them. It works for practical purposes, but it's not an actually consistent theory. In that sense, frequentism is sort of like the Copenhagen or "collapse" version of quantum mechanics. It's a set of rules for using probabilities, and it works well enough, but doesn't actually count as a rigorous theory.

Yes - by the Kolmogorov axioms. One of those axioms is that the entire event space has probability 1. This means that if an event has probability 1 it can, for all practical purposes, be treated as the entire event space. The law of large numbers shows that for large n the outcomes occur in proportion to the probabilities with a probability so close to 1 that it is, for all practical purposes, equal to 1, i.e. it is effectively the entire event space.

If you don't consider that consistent, then note there are many areas of applied math where infinitesimally small quantities are ignored, so I guess you would have trouble with those as well.
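To illustrate the law-of-large-numbers point numerically, here is a minimal simulation sketch; the probability 0.3 and the trial counts are arbitrary choices for illustration:

```python
import random

# Relative frequency of an event approaches its probability as the
# number of trials grows (law of large numbers).
p = 0.3  # arbitrary illustrative probability

for n in (100, 10_000, 1_000_000):
    hits = sum(random.random() < p for _ in range(n))
    print(f"n = {n:>9}: relative frequency = {hits / n:.4f}")
# The printed frequencies typically settle ever closer to 0.3 as n grows.
```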

Thanks
Bill
 
  • #32
stevendaryl said:
That's my point---frequentism can't be the basis of a rigorous development of probability.

It isn't - the Kolmogorov axioms are. And that is what rigorous treatments use - not the frequentist interpretation. But the law of large numbers shows that, for all practical purposes, the frequentist interpretation is equivalent to it.

However I am taken back to what I was warned about all those years ago - that you wouldn't want to read the tomes based on it - and in my experience that is very true. Even Feller's book, especially volume 2, is hard going, and believe me there are worse than that about.

Thanks
Bill
 
  • #33
Bayesianism vs Falsifiability

vanhees71 said:
Perhaps it would help me to understand the Bayesian view, if you could explain how to test a probabilistic theoretical statement empirically from this point of view. Is there a good book for physicists to understand the Bayesian point of view better?

To me, Bayesianism provides a slightly different point of view about what it means to do science than the Karl Popper view that places "falsifiability" at the core.

In the Karl Popper view, we create a theory, we use the theory to make a prediction, we perform an experiment to test the prediction, and we discard the theory if the prediction is not confirmed. That's a nice, neat way of thinking about science, but it's an over-simplification, and it's incomplete. Why do I say that?

First, why is it an over-simplification? Because there is no way that a single experiment can ever falsify a theory. Whatever outcome happens in an experiment is capable of having multiple explanations. Some of those explanations imply that the theory used to make the prediction is simply wrong, and some do not. For example, if there is a glitch in a piece of equipment, you can't hold that against the theory. Also, every experiment that attempts to test a theory relies on interpretations that go beyond the theory. You don't directly measure "electron spin", for example; you measure the deflection of the electron in a magnetic field. So you need to know that there are no other forces at work on the electron that might deflect it, other than the one due to its magnetic moment (and you also need the theory connecting the electron's spin with its magnetic moment). So when a theory fails to make a correct prediction, the problem could be in the theory, or it could be in the equipment, or it could be some random glitch, or it could be in the theory used to interpret the experimental result, or whatever. So logically, you can never absolutely falsify a theory.

It gets even worse when the theories themselves are probabilistic. If the theory predicts something with 50% probability, and you observe that it has happened 49% of the time, has the theory been falsified, or not? There is no definite way to say.

Second, why do I say that falsifiability is incomplete? It doesn't provide any basis for decision-making. You're building a rocket, say, and you want to know how to design it so that it functions as you would like it to. Obviously, you want to use the best understanding of physics in the design of the rocket, but what does "best" mean? At any given time, there are infinitely many theories that have not yet been falsified. How do you pick out one as the "current best theory"? You might say that that's not science, that's engineering, but the separation is not that clear-cut, because, as I said, you need to use engineering in designing experimental equipment, and you have to use theory to interpret experimental results. You have to pick a "current best theory" or "current best engineering practice" in order to do experiments to test theories.

So how does Bayesianism come to the rescue? Well, nobody really uses Bayesianism in its full glory, because it's mathematically intractable, but it provides a model for how to do science that allows us to see what's actually done as a pragmatic short-cut.

In the Bayesian view, nothing besides pure mathematical claims is ever proved true or false. Instead, claims have likelihoods, and performing an experiment allows you to adjust those likelihoods.

So rather than saying that a theory is falsified by an experiment, the Bayesian would say that the theory's likelihood is decreased. And that's the way it really works. There was no single experiment that falsified Newtonian mechanics. Instead, there was a succession of experiments that cast more and more doubt on it. There was never a point where Newtonian mechanics was impossible to believe, it's just that at some point, the likelihood of Special Relativity and quantum mechanics rose to be higher (and by today, significantly higher) than Newtonian mechanics.

The other benefit of Bayesianism, at least in principle if not in practice, is that it actually gives us a basis for making decisions about things like how to design rockets, even when we don't know for certain which theory applies. What you can do is figure out what you want to accomplish (get a man safely on the moon, for example), and try to maximize the likelihood of that outcome. If there are competing theories, then you include ALL of them in the calculation of the likelihood. Mathematically, if O is the desired outcome, E is the engineering approach to achieving it, and T_1, T_2, ... are the competing theories, then

P(O | E) = \sum_i P(T_i) P(O | E, T_i)

You don't have to know for certain what theory is true to make a decision.
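A minimal numerical sketch of the formula above; the theory labels, their probabilities, and the success likelihoods are invented numbers purely for illustration:

```python
# Sketch of P(O | E) = sum_i P(T_i) * P(O | E, T_i); all numbers are made up.
theories = {
    "T1": {"prob": 0.7, "success_given_design": 0.99},
    "T2": {"prob": 0.3, "success_given_design": 0.90},
}

# Weigh each surviving theory by its current likelihood.
p_success = sum(t["prob"] * t["success_given_design"] for t in theories.values())
print(f"P(desired outcome | this design) = {p_success:.3f}")  # 0.963
```

The point is only that the decision weighs every surviving theory by its current likelihood; no single theory has to be singled out as "true" first.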
 
  • #34
bhobba said:
It isn't - the Kolmogorov axioms are. And that is what rigorous treatments use - not the frequentist interpretation. But the law of large numbers shows that, for all practical purposes, the frequentist interpretation is equivalent to it.

However I am taken back to what I was warned about all those years ago - that you wouldn't want to read the tomes based on it - and in my experience that is very true. Even Feller's book, especially volume 2, is hard going, and believe me there are worse than that about.

Thanks
Bill

Okay, I guess if by "frequentism" you mean a particular methodology for using probabilities, then I don't have any big problems with it. But if it's supposed to explain the meaning of probabilities, I don't think it can actually do that, because you have to already have a notion of probability in order to connect relative frequencies to probabilities.
 
  • #35
stevendaryl said:
Okay, I guess if by "frequentism" you mean a particular methodology for using probabilities, then I don't have any big problems with it. But if it's supposed to explain the meaning of probabilities, I don't think it can actually do that, because you have to already have a notion of probability in order to connect relative frequencies to probabilities.

In modern times, just like many areas of mathematics, probability is defined in terms of axioms that are not given any meaning. Have a look at the Kolmogorov axioms - there is no meaning assigned to them whatsoever. The frequentist interpretation is one way to give them meaning - Bayesian is another. This is the great strength of the formalist approach - a proof applies however you relate the formalism to stuff out there. Its great weakness is that, without the help of the intuition from an actual application, you have to derive everything formally.

I agree that justifying the frequentist interpretation without the Kolmogorov axioms leads to problems such as circular arguments. But then again I don't know of any book on probability that does that - all I have ever seen start with the Kolmogorov axioms and show, with varying degrees of rigor in actually proving the key theorems, that the frequentist view follows from them.

Thanks
Bill
 
  • #36
vanhees71 said:
Perhaps it would help me to understand the Bayesian view, if you could explain how to test a probabilistic theoretical statement empirically from this point of view.

Here's a simplified example. Suppose we have two competing theories about a coin: Theory A says that it is a fair coin, giving "heads" 1/2 of the time. Theory B says that it is a trick coin, weighted to give "heads" 2/3 of the time. To start off with, we don't have any reason for preferring one theory over the other, so we write:

P(A) = P(B) = \dfrac{1}{2}

Now flip the coin 4 times, and suppose you get HHTT. Call this event E. We compute probabilities:

P(E|A) = 0.0625

P(E|B) = 0.0494

P(E) = P(E|A) P(A) + P(E|B) P(B) = 0.0560

Now, the Bayesian rules say that we revise our likelihood of the two theories in light of this new information:

P'(A) = \dfrac{P(A) P(E|A)}{P(E)} = 0.558
P'(B) = \dfrac{P(B) P(E|B)}{P(E)} = 0.441

So based on this one experiment, the likelihood of theory A has risen, and the likelihood of B has fallen.
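As a quick check, here is a minimal sketch reproducing the update above; the inputs are exactly those of the example, with the results rounded:

```python
# Bayesian update for the coin example: theory A (p = 1/2) vs B (p = 2/3),
# equal priors, observed data E = HHTT.
p_A = p_B = 0.5
like_A = 0.5**2 * 0.5**2            # P(E|A) = 0.0625
like_B = (2/3)**2 * (1/3)**2        # P(E|B) ≈ 0.0494
p_E = like_A * p_A + like_B * p_B   # P(E)  ≈ 0.0560

post_A = like_A * p_A / p_E         # ≈ 0.56
post_B = like_B * p_B / p_E         # ≈ 0.44
print(post_A, post_B)
```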
 
  • #37
bhobba said:
I agree that justifying the frequentist interpretation without the Kolmogorov axioms leads to problems such as circular arguments. But then again I don't know of any book on probability that does that - all I have ever seen start with the Kolmogorov axioms and show, with varying degrees of rigor in actually proving the key theorems, that the frequentist view follows from them.

I think maybe there's some disagreement about what "the frequentist view" is. If you mean that for many trials, the relative frequency gives you (with high probability) a good approximation to the probability, that's a conclusion from the axioms of probability, whether frequentist or bayesian. I thought that "the frequentist view" was that the meaning of probability is given by relative frequencies. That is not possible in a consistent way.
 
  • #38
stevendaryl said:
Here's a simplified example. Suppose we have two competing theories about a coin: Theory A says that it is a fair coin, giving "heads" 1/2 of the time. Theory B says that it is a trick coin, weighted to give "heads" 2/3 of the time. To start off with, we don't have any reason for preferring one theory over the other, so we write:

P(A) = P(B) = \dfrac{1}{2}

Now flip the coin 4 times, and suppose you get HHTT. Call this event E. We compute probabilities:

P(E|A) = 0.0625

P(E|B) = 0.0494

P(E) = P(E|A) P(A) + P(E|B) P(B) = 0.0560

Now, the Bayesian rules say that we revise our likelihood of the two theories in light of this new information:

P'(A) = \dfrac{P(A) P(E|A)}{P(E)} = 0.558
P'(B) = \dfrac{P(B) P(E|B)}{P(E)} = 0.441

So based on this one experiment, the likelihood of theory A has risen, and the likelihood of B has fallen.

I see, but that's nothing else than what I get with my "frequentist" approach. Here, you have a somewhat small ensemble of only 4 realizations of the experiment, but that's how I would do this statistical analysis as a "frequentist".
 
  • #39
vanhees71 said:
I see, but that's nothing else than what I get with my "frequentist" approach. Here, you have a somewhat small ensemble of only 4 realizations of the experiment, but that's how I would do this statistical analysis as a "frequentist".

I don't see that. What does it mean, in a frequentist approach, to say that theory A has probability 1/2 of being true and theory B has probability 1/2 of being true? It doesn't mean that half the time A will be true, and half the time B will be true.

I don't see that this example is compatible with frequentism, at all.
 
  • #40
vanhees71 said:
Particularly the subjectivity makes it highly suspicious for me.

In the natural sciences (and hopefully also in medicine and the social sciences) to the contrary, one has to try to make statements with the "least prejudice", given the (usually incomplete) information.
That can be done very easily in Bayesian statistics. Simply take what is called a non-informative prior, or an ignorance prior. With a non-informative prior and a large amount of data, Bayesian and frequentist statistics generally give the same results.

However, Bayesian statistics lets you rationally account for information that you DO have, in the form of an informative prior. Consider the recent FTL neutrino results from CERN, before the glitch was discovered. Most scientists looked at those results and rationally said something like "this new evidence is unlikely under SR, but we have all of this other evidence supporting SR, so we still think that P(SR) is quite high even considering the new evidence; we await further information". That is a very Bayesian approach, and it is the approach that rational people actually take when reasoning under uncertainty. When they have prior knowledge they integrate it into their evaluation of new evidence.

vanhees71 said:
Any experiment must be able to be reproducible precisely enough such that you can get "high enough statistics" to check a hypothesis quantitatively, i.e., to get the statistical significance of your measurement.
But this is exactly what you cannot do with frequentist statistics. With frequentist methods you never test the hypothesis given the data; you always test the data given the hypothesis. When you do a frequentist statistical test, the p-value you obtain is the probability of the data, given the hypothesis. When doing science (at least outside of QM), most people think of the hypothesis as being the uncertain thing, not the data, but that is simply not what frequentist statistical tests measure.
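A minimal sketch of that distinction, reusing the coin theories from post #36 with a hypothetical data set of 9 heads in 12 flips (a made-up number, just for illustration): a frequentist p-value of the data under the "fair" hypothesis, versus a Bayesian posterior probability of that hypothesis given the data.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 12, 9  # hypothetical data: 9 heads in 12 flips

# Frequentist: P(data at least this extreme | fair coin) -- a one-sided p-value.
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# Bayesian: P(fair coin | data), with equal priors on "fair" and "2/3 heads".
like_fair, like_trick = binom_pmf(k, n, 0.5), binom_pmf(k, n, 2/3)
posterior_fair = like_fair / (like_fair + like_trick)

print(f"p-value of the data under 'fair': {p_value:.3f}")      # ~0.073
print(f"posterior P('fair' | data):       {posterior_fair:.3f}")  # ~0.202
```

The two numbers answer different questions, which is exactly the point being made: the first is a probability of data given a hypothesis, the second a probability of a hypothesis given data.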

vanhees71 said:
Perhaps it would help me to understand the Bayesian view, if you could explain how to test a probabilistic theoretical statement empirically from this point of view. Is there a good book for physicists to understand the Bayesian point of view better?
I liked this series of video lectures by Trond Reitan:
http://www.youtube.com/playlist?list=PL066F123E80494F77

Be forewarned, it is a very low-budget production. He spends relatively little time on the philosophical aspects of Bayesian probability, but quite a bit of time on Bayesian inference and methods. I found it appealed to my "shut up and calculate" side quite a bit. The Bayesian methods and tests are scientifically more natural, regardless of how you choose to interpret the meaning of probability.
 
  • #41
stevendaryl said:
I think maybe there's some disagreement about what "the frequentist view" is. If you mean that for many trials, the relative frequency gives you (with high probability) a good approximation to the probability, that's a conclusion from the axioms of probability, whether frequentist or bayesian. I thought that "the frequentist view" was that the meaning of probability is given by relative frequencies. That is not possible in a consistent way.

I think our discussion has been slightly marred by a bit of a misunderstanding of what the other meant. I now see where you are coming from and agree. Basing probability purely on a frequentist interpretation has conceptual problems in that it can become circular. I suspect it can be overcome with a suitable amount of care - but why bother - the mathematically 'correct' way is via the Kolmogorov axioms, and starting from those the frequentist interpretation is seen as a perfectly valid realization of the axioms, based rigorously on the law of large numbers. Every book on probability I have read does it that way. Bayesian probability theory fits into exactly the same framework - although I personally haven't come across textbooks that do that, my understanding is they certainly exist, and in some areas of statistical inference it may be a more natural framework. At least the university I went to certainly offers courses on it.

Thanks
Bill
 
  • #42
My take as a mathematician is that a probability space is a type of mathematical structure, just like a group or a vector space or a metric space. One wouldn't spend time arguing about whether this or that particular vector space is the real or more fundamental vector space, so why do it with probability spaces? Frequencies of results of repeatable experiments can be described by a probability space; so can a person's state of knowledge of the factors contributing to the outcome of a single non-repeatable event. Probability is just a special case of the more general mathematical concept of a measure - a probability is a measure applied to parts of a whole, indicating the relative extent to which the parts contribute to the whole in the manner under consideration. Saying something has a probability of 1/3 might mean that it came up 1/3 of the time in a repeated experiment, if that is what you are talking about (frequencies of outcomes), or it might mean that you know of 30 scenarios, 10 of which produce the outcome, if that is what you are considering. Neither case is a more right or wrong example of probability, in the same way that neither SO(3) nor GL(4,C) is a more right or wrong use of group theory.
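A tiny sketch of the two readings of "probability 1/3" mentioned above; the scenario count and the number of simulated trials are illustrative only:

```python
import random
from fractions import Fraction

# Reading 1 (counting): 10 of 30 equally weighted scenarios produce the outcome.
scenario_reading = Fraction(10, 30)

# Reading 2 (frequency): relative frequency over many repetitions of a 1/3 event.
trials = 100_000
hits = sum(random.randrange(3) == 0 for _ in range(trials))
frequency_reading = hits / trials

print(float(scenario_reading), frequency_reading)  # both close to 0.333...
```

Both are legitimate realizations of the same abstract measure-theoretic structure, which is the point being made.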
 
  • #44
vanhees71 said:
Perhaps it would help me to understand the Bayesian view, if you could explain how to test a probabilistic theoretical statement empirically from this point of view. Is there a good book for physicists to understand the Bayesian point of view better?

I like Jaynes, Probability Theory: The Logic of Science.
 
  • #45
Have any of the QBist papers been accepted for publication? (As opposed to merely being uploaded to arxiv?) I'm not sure if I want to spend time reading lots of stuff that might turn out to be half-baked. (Various subtle puns in that sentence intended :) )
 
  • #46
Salman2 said:
Any comments (pro or con) on this Quantum Bayesian interpretation of QM by Fuchs & Schack? http://arxiv.org/pdf/1301.3274.pdf

I propose another variant of a "quantum Bayesian" interpretation, see arxiv.org:1103.3506

It is not completely Bayesian, instead, it is in part realistic, following de Broglie-Bohm about the reality of the configuration q(t). But it is Bayesian about the wave function.

Again, with care: what is interpreted as Bayesian is only the wave function of a closed system - that means the wave function of the whole universe. There is also the wave function we work with in everyday quantum mechanics. But this is only an effective wave function. It is defined, as in dBB theory, from the global wave function and the configuration of the environment - that means, mainly from the macroscopic measurement results of the devices used for the preparation of the particular quantum state.

Thus, because the configuration of the environment is ontic, the effective wave function is also defined by these ontic variables and is therefore essentially ontic. Hence there is no contradiction with the PBR theorem.

With this alternative in mind, I criticize QBism as following the wrong direction: away from realism, away from nontrivial hypotheses about more fundamental theories. But that, IMHO, is the most important reason why it is important for scientists to think about interpretations at all. For computations, the minimal interpretation is sufficient, but it will never serve as a guide to finding a more fundamental theory.

This is different for dBB-like interpretations. They make additional hypotheses, about real trajectories q(t). OK, we cannot test them now, and, because of the equivalence theorems, will be unable to test them in the future either. A problem? Not really, because the interpretation has internal problems of its own, and these internal problems are a nice guide. We can try to find solutions for them, and these solutions may contain new, different physics, which then becomes testable.

This is also not empty talk. One internal problem of dBB is the infinities of the velocities \dot{q}(t) near the zeros of the wave function. Another, related one is the Wallstrom objection - the necessity to explain why probability and probability flow combine into a wave function, which arises if one does not consider the wave function as fundamental. To solve these problems, one has to make nontrivial assumptions about a subquantum theory, see arxiv.org:1101.5774. So the interpretation gives strong hints about where we have to look for physics different from quantum physics - in this case, near the zeros of the wave function.

QBism, instead, does not lead to such hints where to look for new subquantum physics. The new mathematics of QBism looks like mathematics in the other, positivistic direction - not more but less restrictive, not less but more general. At least this is my impression.
 
  • #47
I just came across the term QBism for the first time and found this discussion. At first look, the paper by Fuchs and Schack looks horrible. Why pages full of speculation about what Feynman may have meant?
Isn't there some crisp axiomatic paper available?
 
  • #49
Mathematech said:
I've just come across this book http://www.springer.com/physics/the...+computational+physics/book/978-3-540-36581-5 based on Streater's website http://www.mth.kcl.ac.uk/~streater/lostcauses.html . I've only just started reading it; it seems that his views are totally against any notion of non-locality and that probability explains all the weirdness in QM. Comments?

The following says it all:
'This page contains some remarks about research topics in physics which seem to me not to be suitable for students. Sometimes I form this view because the topic is too difficult, and sometimes because it has passed its do-by date. Some of the topics, for one reason or another, have not made any convincing progress.'

There are many interpretations of QM - some rather 'backwater' like Nelson Stochastics. Some very mainstream and of great value in certain situations such as the Path Integral approach.

But as with any of them, it's pure speculation until someone can figure out an experiment to decide between them and have it carried out.

While discussion of interpretations is on topic in this forum, it's kept on a tight leash to stop it degenerating into philosophy, which is off-topic.

So exactly what do you want to discuss? If you have in mind some interpretation, or issues in a specific interpretation, that you want clarification on, then fire away and I or others will see if we can help. Or do you want a general waffle about interpretations, such as "this doesn't tell us about reality" (whatever that is - those that harp on it seldom define it, for good reason - it's a philosophical minefield) or whatever, which would be off topic.

Thanks
Bill
 
  • #50
I want to discuss Streater's take that there is no need for assuming non-locality and that EPR etc can purely be understood via correct application of probability.
 
  • #51
Mathematech said:
I want to discuss Streater's take that there is no need for assuming non-locality and that EPR etc can purely be understood via correct application of probability.

It's well known that non-locality is not required if one simply abandons the assumption that objects have properties independent of measurement context. Bell's theorem proves - and it's pretty watertight - that you can't have both locality and objects having such properties.

That's about all there is to it, really.
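A minimal sketch of the gap Bell's theorem exploits: for the singlet state the correlation at analyzer angles a, b is E(a, b) = -cos(a - b), and at the usual textbook CHSH angles (an illustrative choice) the combination S exceeds the bound |S| ≤ 2 that any local theory with pre-existing properties must satisfy.

```python
from math import cos, pi, sqrt

def E(a, b):
    """Singlet-state spin correlation for analyzer angles a and b."""
    return -cos(a - b)

a1, a2 = 0.0, pi / 2          # Alice's two settings
b1, b2 = pi / 4, 3 * pi / 4   # Bob's two settings

# CHSH combination; any local hidden-variable model obeys |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * sqrt(2))    # both ≈ 2.828, above the local bound of 2
```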

Thanks
Bill
 
  • #52
Mathematech said:
I want to discuss Streater's take that there is no need for assuming non-locality and that EPR etc can purely be understood via correct application of probability.

I have criticized some points of Streater's texts at http://ilja-schmelzer.de/realism/dBBarguments.php.

I think that realism - in the sense of the assumptions, other than locality, used by Bell to prove his inequalities - is a simple minimal standard of explanation. If you are unable to describe some observation using a realistic theory, you have not understood or explained it.

That these assumptions about realism really are such a minimal standard of explanation is, of course, an argument which can be discussed - in particular, by thinking about what "explanations" become possible if we weaken one or another part of this minimal standard. Roughly speaking, you cannot weaken realism without thereby accepting "and then a miracle happens" as a valid explanation.
 
  • #53
bhobba said:
It's well known that non-locality is not required if one simply abandons the assumption that objects have properties independent of measurement context.

No, that's wrong. That looks like the classic error of identifying the conclusions of the first part of Bell's proof (the EPR part) with assumptions made by Bell.

What has to be assumed is realism, in a very weak sense. The reality λ need not even consist of localized objects; it can be whatever you can imagine.
 
