
Quantum Bayesian Interpretation of QM

  1. May 19, 2013 #1
    Any comments (pro or con) on this Quantum Bayesian interpretation of QM by Fuchs & Schack?:

  2. jcsd
  3. May 19, 2013 #2
    Yes, I'd also like to know if anyone has any insights on this new model, dubbed the "QBism" model. The general idea is that the quantum wave function does NOT represent any actuality in the real physical world. It is an abstraction of the mind...and it goes from there. The arxiv article Salman2 posted is called the No-nonsense version, which sounds like a good initial review. However, those who want an even briefer survey should check the most recent SciAm issue:

  4. May 19, 2013 #3
    "The general case of conscious perception is the negative perception, namely, 'perceiving the stone as not grey'...
    "Consciousness is the subjective form involved in feeling the contrast between the 'theory' which may be erroneous and the fact which is given."

    Alfred North Whitehead, Process and Reality, ISBN 0-02-934570-7, p. 161.

    "In other words, consciousness enters into the subjective forms of feelings, when those feelings are components in an integral feeling whose datum is the contrast between a nexus which is, and a proposition which in its own nature negates the decision of its truth or falsehood."

    p. 261

    Again, H P Stapp sees Whitehead as providing a container for QM. I agree. If he had lived a little longer, he would have been in the QM Pantheon.

    Last edited: May 19, 2013
  5. May 19, 2013 #4
    Chris Fields offers a critique of the QBism model presented by Fuchs, based on the nature of the wavefunction of the agent (observer):


    Here from the end of the paper is the major objection of Fields to QBism:

    "QBism provides no physical distinction between observers and the systems they observe, treating all quantum systems as autonomous agents that respond to observations by updating beliefs and employ quantum mechanics as a “users' manual” to guide behavior. However, it treats observation itself as a physical process in which an “observer” acts on a “system” with a POVM and the “system” selects a POVM component as the “observer's experience” in return. This requirement renders the assumption that systems be well-defined - i.e. have constant d - impossible to implement operationally. It similarly forces the consistent QBist to regard the environment as an effectively omniscient observer, threatening the fundamental assumption of subjective probabilities and forcing the conclusion that QBist observers cannot segment their environments into objectively separate systems."


    Another paper by Fields; the discussion of QBism starts on p. 27:

    Last edited: May 19, 2013
  6. May 20, 2013 #5
    Epistemic-Epistemic view of the wave function.
  7. Jun 6, 2013 #6
    Subjective Reality-

    The discussion of QBism poses epistemological and semantic problems for the reader. The subtitle - It's All In Your Mind - is a tautology. Any theory or interpretation of observed physical phenomena is in the mind, a product of the imagination, or logical deduction, or some other mental process. Heisenberg (The Physical Principles of the Quantum Theory), in discussing the uncertainty principle, cautioned that human language permits the construction of sentences that have no content since they imply no experimentally observable consequences, even though they may conjure up a mental picture. He particularly cautioned against the use of the term "real" in relation to such statements, as is done in the article. Mr. von Burgers also described QBism as representing subjective beliefs - whose? Bertrand Russell (Human Knowledge, Its Scope and Limits) described "belief" as a word not easy to define. It is certainly not defined in the context of the article.
    Heisenberg also showed that the uncertainty principle and several other results of quantum mechanical theory could be deduced without reference to a wave function, so this aspect of the new interpretation is not unique. Similarly, Feynman (QED, The Strange Theory of Light and Matter) dealt with the diffraction of light through a pair of slits by a formulation based on the actions of photons, without reference to wave functions. The statement in the article that the wave function is "only a tool" to enable mathematical calculations is puzzling - any theoretical formulation of quantum mechanics is a tool for mathematical calculations relating to the properties of physical systems.

    In spite of the tendency in Mr. von Burgers' article to overplay the virtues of QBism relative to other formulations, it has potential value as an additional way to contemplate quantum mechanics. As Feynman (The Character of Physical Law) stated, any good theoretical physicist knows six or seven theoretical representations for exactly the same physics. One or another of these may be the most advantageous way of contemplating how to extend the theory into new domains and discover new laws. Time will tell.

    Last edited by a moderator: Jun 6, 2013
  8. Jun 6, 2013 #7


    Science Advisor
    Gold Member

    Welcome to PhysicsForums, Alexander!

    Are you familiar with the PBR theorem? Although I can't say I fully understand the examples in the OP's QBism paper, it seems to flow directly opposite to PBR. One says the wave function maps directly to reality, the other says it does not.
  9. Jun 7, 2013 #8
    Some interesting remarks on Bayes' Theorem

    "Bayesian inference is one of the more controversial approaches to statistics, with both the promise and limitations of being a closed system of logic. There is an extensive literature, which sometimes seems to overwhelm that of Bayesian inference itself, on the advantages and disadvantages of Bayesian approaches"

    "Bayes' Theorem is a simple formula that relates the probabilities of two different events that are conditional upon each other"

    Sound familiar, no?
    (in physics, I mean)
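The quoted formula can be made concrete with a small numerical sketch. The numbers below (a condition with 1% prevalence, a test with 90% sensitivity and a 5% false-positive rate) are hypothetical, chosen only to illustrate how the two conditional probabilities relate:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Hypothetical numbers: a condition with 1% prevalence, a test with
# 90% sensitivity and a 5% false-positive rate.
p_cond = 0.01
p_pos_given_cond = 0.90
p_pos_given_healthy = 0.05

# Total probability of a positive result (law of total probability).
p_pos = p_pos_given_cond * p_cond + p_pos_given_healthy * (1 - p_cond)

# Probability of the condition given a positive result.
p_cond_given_pos = p_pos_given_cond * p_cond / p_pos
print(round(p_cond_given_pos, 3))  # about 0.154
```

Note how the two directions differ: the test is 90% likely to fire given the condition, yet a positive result only raises the probability of the condition to about 15%, because the prior is small.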
    Last edited: Jun 7, 2013
  10. Jun 7, 2013 #9


    Staff: Mentor

    I have gone through the PBR theorem and my view is exactly the same as Matt Leifer:

    He divides interpretations into three types:

    1. Wavefunctions are epistemic and there is some underlying ontic state. Quantum mechanics is the statistical theory of these ontic states in analogy with Liouville mechanics.
    2. Wavefunctions are epistemic, but there is no deeper underlying reality.
    3. Wavefunctions are ontic (there may also be additional ontic degrees of freedom, which is an important distinction but not relevant to the present discussion).

    PBR says nothing about type 2 - in fact the paper specifically excludes it. What it is concerned with is theories of type 1 and 3 - it basically says type 1 is untenable - it's really type 3 in disguise.

    That's an interesting result, but I am scratching my head over why it's considered so important. Most interpretations are type 2 (e.g. Copenhagen and the ensemble interpretation), many others are type 3 (e.g. MWI and BM), and only a few are type 1.

    Maybe I am missing something, but from what I can see it's not that big a deal.

    Last edited: Jun 8, 2013
  11. Jun 7, 2013 #10


    Staff: Mentor

    Regarding the Quantum Bayesian interpretation, it's a perfectly good way of coming to grips with the probability part of QM.

    Normally that is done by means of an ensemble view of probability, which considers a very large number of similar systems; the proportion of them with a particular property is the probability. This view is very commonly used in applied math. But Bayesian probability theory (perhaps framework is a better word) is just as valid. In fact there is some evidence it leads to a slicker axiomatic formulation:

    Even if it isn't as slick, I prefer the ensemble view because of its greater pictorial vividness.

    IMHO it's not really an issue to get too concerned about. In applying probability to all sorts of areas, the intuitive view most people have is, in my experience, more than adequate without being strict about it.

    Last edited: Jun 8, 2013
  12. Jun 7, 2013 #11


    Staff: Mentor

    As far as I can see it's simply the ensemble interpretation in another guise - where the pictorial vividness of an ensemble is replaced by beliefs about information.

    Information seems to be one of the buzz things in physics these days but personally I can't see the appeal - although am willing to be convinced.

    You might be interested in the following, where QM is derived from the premise that all systems with the same information-carrying capacity are equivalent (plus a few other things):

    It's my favorite foundational basis of QM these days - but I have to say it leaves some cold.

    The interesting thing though is that if you remove information from the axioms and say instead that all systems that are observationally the same are equivalent, it doesn't change anything in the derivation - which sort of makes you wonder.

  13. Jun 8, 2013 #12


    Science Advisor
    Gold Member
    2017 Award

    Well, maybe I'm just too biased by my training as a physicist to make sense of the whole Bayesian interpretation of probabilities. In my opinion this has nothing to do with quantum theory but with any kind of probabilistic statement. It is also good to distinguish some simple categories of content of a physical theory.

    A physical theory, if stated in a complete way like QT, is first some mathematical "game of our minds". There is a well-defined set of axioms or postulates which give a formal set of rules which establishes how to calculate abstract things. In QT that's the "state of the system", given by a self-adjoint positive semidefinite trace-class operator on a (rigged) Hilbert space, an "algebra of observables", represented by self-adjoint operators, and a Hamiltonian among the observables that defines the dynamics of the system. That's just the formal rules of the game. It's just a mathematical universe, you can make statements (prove theorems), do calculations. I think this part is totally free of interpretational issues, because no connection to the "real world" (understood as reproducible objective observations) has been made yet.

    Now comes the difficult part, namely this connection with the real world, i.e., with reproducible objective observations in nature. In my opinion, the only consistent interpretation is the Minimal Statistical Interpretation, which is basically defined by Born's Rule, saying that for a given preparation of a system in a quantum state, represented by the statistical operator [itex]\hat{R}[/itex], the probability (density) to measure a complete set of compatible observables is given by
    [tex]P(A_1,\ldots, A_n|\hat{R})=\langle A_1,\ldots, A_n|\hat{R}|A_1,\ldots, A_n \rangle[/tex]
    where [itex]|A_1,\ldots, A_n\rangle [/itex] is a (generalized) common eigenvector, normalized to 1 (or to a [itex]\delta[/itex] distribution), of the self-adjoint operators representing the complete set of compatible observables.

    Now the interpretation is shifted to the interpretation of probabilities. QT makes no other predictions about the outcome of measurements than these probabilities, and now we have to think about the meaning of probabilities. It's clear that probability theory is also given as an axiomatic set of rules (e.g., the Kolmogorov axioms), which is unproblematic since it's just a mathematical abstraction. The question now is how to interpret probabilities in the sense of physical experiments. Physics is about the test of hypotheses about real-world experiments, and thus we must make this connection between probabilities and outcomes of such real-world measurements. I don't see how else you can define this connection than by repeating the measurement on a sufficiently large ensemble of identically and independently prepared experimental setups. The larger the ensemble, the higher the statistical significance for proving or disproving the predicted probabilities for the outcome of measurements.

    The Bayesian view, for me, is just a play with words, trying to give a physically meaningful interpretation of probability for a single event. In practice, however, you cannot prove anything about a probabilistic statement by looking only at a single event. If I predict a 10% chance of rain tomorrow, then whether or not it rains on the next day tells us nothing about the validity of my probabilistic prediction. The only thing one can say is that for many days with the weather conditions of today, on average it will rain in 10% of all cases on the next day; no more, no less. Whether it will rain or not on one specific date cannot be predicted by giving a probability.
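A minimal simulation of the rain example above (assuming, for the sake of the sketch, that the predicted 10% is the true underlying chance) shows why the ensemble, not the single day, is what tests the prediction:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
p_rain = 0.10   # the predicted probability, assumed true here

# One single day tells us nothing about whether "10%" was right.
single_day = random.random() < p_rain

# An ensemble of many statistically identical days: the relative
# frequency of rain approaches the predicted probability.
n_days = 100_000
rainy = sum(random.random() < p_rain for _ in range(n_days))
print(rainy / n_days)  # close to 0.10
```

The single Boolean outcome is consistent with almost any predicted probability; only the relative frequency over the ensemble converges on the prediction.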

    So for the practice of physics the Bayesian view of probabilities is simply pointless, because it tells us nothing about the outcome of real experiments.
  14. Jun 8, 2013 #13


    Staff: Mentor

    I think it goes beyond physics. My background is in applied math, and it invariably uses the frequentist interpretation, which is basically the same as the ensemble interpretation. To me this Bayesian stuff seems just a play on words as well.

    That said - and I can't comment because I didn't take those particular courses - applied Bayesian modelling and inference is widely taught; courses on it were certainly available where I went. I am not convinced, however, that it requires the Bayesian interpretation.

  15. Jun 8, 2013 #14


    Staff Emeritus
    Science Advisor

    Well, the history of the universe only happens once, so we're stuck with having to reason about singular events, if we are to describe the universe.

    More concretely, let's say that we have a theory that predicts that some outcome of an experiment has a 50/50 probability. So you perform the experiment 100 times, say, and find that the outcome happens 49 times out of 100. So that's pretty good. But logically speaking, how is drawing the conclusion based on 100 trials any more certain than drawing a conclusion based on 1 trial, or 10 trials? The outcome, 49 out of 100, is consistent with just about any probability at all. You haven't narrowed down the range of probabilities at all. What have you accomplished, then? You've changed your confidence, or belief, that the probability is around 1/2.

    Mathematically speaking, the frequentist account of probability is nonsense. Probability 1/2 doesn't mean that something will happen 1/2 of the time, no matter how many experiments you perform. And it's nonsensical to add "...in the limit as the number of trials goes to infinity...", also. There is no guarantee that relative frequencies approach any limit whatsoever.
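The 49-out-of-100 point can be made quantitative with a short sketch: the binomial likelihood of that outcome is non-zero for every p strictly between 0 and 1, so no finite run logically rules any such probability out (the particular values of p below are arbitrary illustrations):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Likelihood of seeing 49 successes in 100 trials under several
# assumed underlying probabilities: largest near p = 0.49, but
# non-zero (merely tiny) even for extreme values of p.
for p in (0.5, 0.4, 0.3, 0.1, 0.99):
    print(p, binom_pmf(49, 100, p))
```

What the data does is reweight how plausible each p is, which is a statement about belief, not a logical exclusion.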
  16. Jun 8, 2013 #15


    Staff Emeritus
    Science Advisor

    The frequentist interpretation really doesn't make any sense to me. As a statement about ensembles, it doesn't make any sense either. If you perform an experiment, such as flipping a coin, there is no guarantee that the relative frequency approaches anything at all in the limit as the number of coin tosses goes to infinity. Furthermore, since we don't really ever do things infinitely often, what can you conclude, as a frequentist, from 10 trials of something? Or 100? Or 1000? You can certainly dutifully write down the frequency, but every time you do another trial, that number is going to change, by a tiny amount. Is the probability changing every time you perform the experiment?
  17. Jun 8, 2013 #16


    Staff Emeritus
    Science Advisor

    In practice, people who claim to be doing "frequentist probability" use "confidence intervals". So if you perform an experiment 100 times, and you get a particular outcome 49 times, then you can say something like: The probability is 49% +/- E, where E is a confidence interval. But it isn't really true. The "true" probability could be 99%. Or the "true" probability could be 1%. You could have just had a weird streak of luck. The choice of E is pretty much ad hoc.
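As an illustration of the kind of calculation described above, here is the standard normal-approximation (Wald) interval for 49 successes in 100 trials. The 95% level and its 1.96 multiplier are conventional choices rather than anything forced by the data, which is exactly the ad-hoc-ness being pointed out:

```python
from math import sqrt

n, k = 100, 49
p_hat = k / n  # observed relative frequency

# Wald (normal-approximation) interval; z = 1.96 corresponds to the
# conventional 95% level - the level itself is a choice, not a given.
z = 1.96
half_width = z * sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - half_width, p_hat + half_width
print(low, high)  # roughly 0.39 to 0.59
```

Nothing in the interval guarantees the "true" probability lies inside it; a true value of 0.99 would merely make the observed run an extremely unlikely streak.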
  18. Jun 8, 2013 #17


    Staff Emeritus
    Science Advisor

    John Baez gives a discussion of Bayesianism here:

    Here's a snippet:

  19. Jun 8, 2013 #18


    Staff: Mentor

    First, I don't know enough QM to have any opinion on interpretations of QM, but I do use Bayesian statistics in other things (e.g. analysis of medical tests)

    You should really invest a little time into it. The Bayesian approach to probability is more in line with the scientific method than the frequentist approach.

    In the scientific method you formulate a hypothesis, then you acquire data, then you use that data to decide to keep or reject your hypothesis. In other words, you want to determine the likelihood of the hypothesis given the data, which is exactly what Bayesian statistics calculates. Unfortunately, frequentist statistical tests simply don't measure that. Instead they calculate the likelihood of the data given the hypothesis.
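The distinction between P(data | hypothesis) and P(hypothesis | data) can be sketched numerically. The scenario and prior below are hypothetical (a coin that is either fair or 90% biased toward heads, with an assumed 95% prior belief in fairness):

```python
from math import comb

def likelihood(k, n, p):
    """P(k successes in n trials | success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 8 heads in 10 flips.
k, n = 8, 10
p_data_fair = likelihood(k, n, 0.5)    # P(data | coin is fair)
p_data_biased = likelihood(k, n, 0.9)  # P(data | coin is 90% biased)

# A frequentist test stops at P(data | hypothesis). Bayes' theorem
# gives P(hypothesis | data), but it needs a prior - assumed here
# to be 95% that the coin is fair.
prior_fair = 0.95
evidence = p_data_fair * prior_fair + p_data_biased * (1 - prior_fair)
p_fair_given_data = p_data_fair * prior_fair / evidence
print(p_data_fair, p_fair_given_data)
```

Note the inversion: the data is more likely under the biased hypothesis, yet with this prior the coin is still probably fair given the data. The two conditional probabilities answer different questions.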

    I think that the big problem with Bayesian statistics right now is the lack of standardized tests. If you say "my t-test was significant with p=0.01" then everyone understands what mathematical test you ran on your data and what you got. There is no corresponding "Bayesian t-test" that you can simply report and expect everyone to know what you did.

    Most likely, your preference for frequentist statistics is simply a matter of familiarity, born of the fact that the tools are well-developed and commonly-used. This seems to be the case for bhobba also.
  20. Jun 8, 2013 #19
    Actually, they are merging (frequentist and Bayesian).

    "Efron also compares more recent statistical theories such as frequentism to Bayes' theorem, and looks at the newly proposed fusion of Bayes' and frequentist ideas in Empirical Bayes. Frequentism has dominated for a century and does not use prior information, considering future behavior instead"

    Read more at: http://phys.org/news/2013-06-bayesian-statistics-theorem-caution.html#jCp
  21. Jun 8, 2013 #20


    Staff Emeritus
    Science Advisor

    I don't think that the frequentist interpretation of probability can be taken seriously, for the reasons that John Baez gives in the passage I quoted. On the other hand, a purely subjective notion of probability doesn't seem like the whole story, either.

    For example, one could model a coin flip by using an unknown parameter [itex]h[/itex] reflecting the probability of getting a "heads". One could start off with a completely uninformative (uniform) prior distribution on [itex]h[/itex]: it could be anything between 0 and 1. Then you flip the coin a few times, and you use Bayes' theorem to get an adjusted probability distribution on the parameter [itex]h[/itex]. For example, if I flip twice, and get 1 head and 1 tail, then the adjusted probability distribution is [itex]P(h) = 6h (1-h)[/itex], which has a maximum at [itex]h=\frac{1}{2}[/itex].

    The weird thing here is that you have probability appearing as an unknown parameter, [itex]h[/itex], and you also have it appearing as a subjective likelihood of that parameter. It doesn't make sense to me that it could all be subjective probability, because how can there be an unknown subjective probability [itex]h[/itex]?
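The P(h) = 6h(1-h) posterior quoted above can be checked numerically: starting from a uniform prior, the Bayes update after one head and one tail is proportional to h(1-h), and the normalization constant works out to 6. A minimal sketch:

```python
# Check the posterior P(h) = 6 h (1 - h) after observing 1 head and
# 1 tail, starting from a uniform prior on the bias parameter h.
N = 100_000
dh = 1.0 / N
grid = [(i + 0.5) * dh for i in range(N)]  # midpoints of [0, 1]

# Unnormalized posterior = uniform prior * likelihood h * (1 - h).
unnorm = [h * (1 - h) for h in grid]
norm = sum(unnorm) * dh  # normalization constant, analytically 1/6

# Posterior density at h = 1/2: should equal 6 * 0.5 * 0.5 = 1.5.
posterior_at_half = (0.5 * 0.5) / norm
print(norm, posterior_at_half)
```

The two layers of probability the post describes are both visible here: h is a parameter inside the likelihood, while the posterior density over h encodes the degree of belief about it.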