Quantum Bayesian Interpretation of QM

  • Thread starter Salman2
  • #1
Any comments (pro or con) on this Quantum Bayesian interpretation of QM by Fuchs & Schack?

http://arxiv.org/pdf/1301.3274.pdf
 

Answers and Replies

  • #2
Yes, I'd also like to know if anyone has any insights on this new model, dubbed the "QBism" model. The general idea is that the quantum wave function does NOT represent any actuality in the real physical world. It is an abstraction of the mind...and it goes from there. The arXiv article Salman2 posted is called the "No nonsense" version, which sounds like a good initial review. However, those who want an even briefer survey should check the most recent SciAm issue:

http://www.scientificamerican.com/a...tum-beyesnism-fix-paradoxes-quantum-mechanics
 
  • #3
"The general case of conscious perception is the negative perception, namely, 'perceiving the stone as not grey'...
"Consciousness is the subjective form involved in feeling the contrast between the 'theory' which may be erroneous and the fact which is given."

Alfred North Whitehead, Process and Reality, ISBN 0-02-934570-7, p. 161.

"In other words, consciousness enters into the subjective forms of feelings, when those feelings are components in an integral feeling whose datum is the contrast between a nexus which is, and a proposition which in its own nature negates the decision of its truth or falsehood."

p. 261

Again, H P Stapp sees Whitehead as providing a container for QM. I agree. If he had lived a little longer, he would have been in the QM Pantheon.

CW
 
  • #4
Chris Fields offers a critique of the QBism model presented by Fuchs, based on the nature of the wavefunction of the agent (observer):

http://arxiv.org/pdf/1108.2024v2.pdf

Here, from the end of the paper, is Fields' major objection to QBism:

"QBism provides no physical distinction between observers and the systems they observe, treating all quantum systems as autonomous agents that respond to observations by updating beliefs and employ quantum mechanics as a “users’ manual” to guide behavior. However, it treats observation itself as a physical process in which an “observer” acts on a “system” with a POVM and the “system” selects a POVM component as the “observer’s experience” in return. This requirement renders the assumption that systems be well-defined - i.e. have constant d - impossible to implement operationally. It similarly forces the consistent QBist to regard the environment as an effectively omniscient observer, threatening the fundamental assumption of subjective probabilities and forcing the conclusion that QBist observers cannot segment their environments into objectively separate systems."

==

Another paper by Fields, discussion of QBism starts on p. 27:

http://arxiv.org/pdf/1108.4865.pdf
 
  • #6
Subjective Reality-

The discussion of QBism poses epistemological and semantic problems for the reader. The subtitle - "It's All In Your Mind" - is a tautology. Any theory or interpretation of observed physical phenomena is in the mind, a product of the imagination, or logical deduction, or some other mental process. Heisenberg (The Physical Principles of the Quantum Theory), in discussing the uncertainty principle, cautioned that human language permits the construction of sentences that have no content, since they imply no experimentally observable consequences, even though they may conjure up a mental picture. He particularly cautioned against the use of the term "real" in relation to such statements, as is done in the article. Mr. von Burgers also described QBism as representing subjective beliefs - whose? Bertrand Russell (Human Knowledge: Its Scope and Limits) described "belief" as a word not easy to define. It is certainly not defined in the context of the article.
Heisenberg also showed that the uncertainty principle and several other results of quantum mechanical theory could be deduced without reference to a wave function, so this aspect of the new interpretations is not unique. Similarly, Feynman (QED: The Strange Theory of Light and Matter) dealt with the diffraction of light through a pair of slits by a formulation based on the actions of photons, without reference to wave functions. The statement in the article that the wave function is "only a tool" to enable mathematical calculations is puzzling - any theoretical formulation of quantum mechanics is a tool for mathematical calculations relating to the properties of physical systems.

In spite of the tendency in Mr. von Burgers' article to overplay the virtues of QBism relative to other formulations, it has potential value as an additional way to contemplate quantum mechanics. As Feynman (The Character of Physical Law) stated, any good theoretical physicist knows six or seven theoretical representations for exactly the same physics. One or another of these may be the most advantageous way of contemplating how to extend the theory into new domains and discover new laws. Time will tell.

Alexander
 
  • #7
DrChinese
Science Advisor
Gold Member
The discussion of QBism poses epistemological and semantic problems for the reader. ...
Welcome to PhysicsForums, Alexander!

Are you familiar with the PBR theorem? Although I can't say I fully understand the examples in the OP's QBism paper, it seems to flow directly opposite to PBR. One says the wave function maps directly to reality, the other says it does not.
 
  • #8
Some interesting remarks on Bayes' Theorem

http://www.stat.columbia.edu/~gelman/research/published/badbayesmain.pdf
"Bayesian inference is one of the more controversial approaches to statistics, with both the promise and limitations of being a closed system of logic. There is an extensive literature, which sometimes seems to overwhelm that of Bayesian inference itself, on the advantages and disadvantages of Bayesian approaches"


---------
"Bayes' Theorem is a simple formula that relates the probabilities of two different events that are conditional upon each other"

Sound familiar, no?
(in physics, I mean)
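
For reference, here is the formula the quote is describing, in its standard form:

[tex]P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}[/tex]

In the Bayesian reading, [itex]P(A)[/itex] is the prior degree of belief in [itex]A[/itex], [itex]P(B|A)[/itex] is the likelihood of the data [itex]B[/itex] given [itex]A[/itex], and [itex]P(A|B)[/itex] is the updated (posterior) belief once [itex]B[/itex] is observed.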
 
  • #9
Are you familiar with the PBR theorem? Although I can't say I fully understand the examples in the OP's QBism paper, it seems to flow directly opposite to PBR. One says the wave function maps directly to reality, the other says it does not.
I have gone through the PBR theorem and my view is exactly the same as Matt Leifer's:
http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/

He divides interpretations into three types:

1. Wavefunctions are epistemic and there is some underlying ontic state. Quantum mechanics is the statistical theory of these ontic states in analogy with Liouville mechanics.
2. Wavefunctions are epistemic, but there is no deeper underlying reality.
3. Wavefunctions are ontic (there may also be additional ontic degrees of freedom, which is an important distinction but not relevant to the present discussion).

PBR says nothing about type 2 - in fact the paper specifically excludes it. What it is concerned with is theories of type 1 and 3 - it basically says type 1 is untenable - it's really type 3 in disguise.

That's an interesting result, but I am scratching my head as to why it's considered that important. Most interpretations are type 2 (e.g. Copenhagen and the ensemble interpretation), many others are type 3 (e.g. MWI and BM), and only a few are type 1.

Maybe I am missing something, but from what I can see it's not that big a deal.

Thanks
Bill
 
  • #10
Regarding the Quantum Bayesian interpretation, it's a perfectly good way of coming to grips with the probability part of QM.

Normally that is done by means of an ensemble view of probability, which talks about a very large number of similar systems; the proportion with a particular property (or whatever) is the probability. This is a view very commonly used in applied math. But Bayesian probability theory (perhaps framework is a better word) is just as valid. In fact there is some evidence it leads to a slicker axiomatic formulation:
http://arxiv.org/pdf/quant-ph/0210017v1.pdf

Even if it isn't as slick, I prefer the ensemble view because of its greater pictorial vividness.

IMHO it's not really an issue to get too concerned about. In applying probability to all sorts of areas, the intuitive view most people have is, in my experience, more than adequate without being strict about it.

Thanks
Bill
 
  • #11
Yes, I'd also like to know if anyone has any insights on this new model, dubbed the "QBism" model. The general idea is that the quantum wave function does NOT represent any actuality in the real physical world.
As far as I can see it's simply the ensemble interpretation in another guise - where the pictorial vividness of an ensemble is replaced by beliefs about information.

Information seems to be one of the buzz things in physics these days, but personally I can't see the appeal - although I am willing to be convinced.

You might be interested in the following, where QM is derived from the postulate that all systems with the same information-carrying capacity are the same (plus a few other things):
http://arxiv.org/pdf/0911.0695v1.pdf

It's my favorite foundational basis of QM these days - but I have to say it leaves some cold.

The interesting thing, though, is that if you remove information from the axioms and instead say that all systems that are observationally the same are equivalent, it doesn't change anything in the derivation - which sort of makes you wonder.

Thanks
Bill
 
  • #12
vanhees71
Science Advisor
Insights Author
Gold Member
2019 Award
Well, maybe I'm just too biased by my training as a physicist to make sense of the whole Bayesian interpretation of probabilities. In my opinion this has nothing to do with quantum theory but with any kind of probabilistic statement. It is also good to distinguish some simple categories of content of a physical theory.

A physical theory, if stated in a complete way like QT, is first some mathematical "game of our minds". There is a well-defined set of axioms or postulates which gives a formal set of rules establishing how to calculate abstract things. In QT that's the "state of the system", given by a self-adjoint positive semidefinite trace-class operator on a (rigged) Hilbert space, an "algebra of observables", represented by self-adjoint operators, and a Hamiltonian among the observables that defines the dynamics of the system. That's just the formal rules of the game. It's just a mathematical universe: you can make statements (prove theorems) and do calculations. I think this part is totally free of interpretational issues, because no connection to the "real world" (understood as reproducible objective observations) has been made yet.

Now comes the difficult part, namely this connection with the real world, i.e., with reproducible objective observations in nature. In my opinion, the only consistent interpretation is the Minimal Statistical Interpretation, which is basically defined by Born's Rule, saying that for a given preparation of a system in a quantum state, represented by the statistical operator [itex]\hat{R}[/itex], the probability (density) to measure the values [itex]A_1,\ldots,A_n[/itex] of a complete set of compatible observables is given by
[tex]P(A_1,\ldots, A_n|\hat{R})=\langle A_1,\ldots, A_n|\hat{R}|A_1,\ldots, A_n \rangle[/tex]
where [itex]|A_1,\ldots, A_n\rangle [/itex] is a (generalized) common eigenvector, normalized to 1 (or a [itex]\delta[/itex] distribution), of the self-adjoint operators representing the complete set of compatible observables.
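
To make the rule concrete, here is a minimal numerical sketch for a single qubit, with a made-up density operator and a measurement in the computational basis (the numbers are illustrative only, not taken from any of the papers above):

[code]
import numpy as np

# Illustrative density operator for a qubit: Hermitian, positive semidefinite, trace 1
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]], dtype=complex)

# Eigenvectors of the measured observable: here simply the computational (sigma_z) basis
basis = [np.array([1, 0], dtype=complex),
         np.array([0, 1], dtype=complex)]

# Born's rule: P(a) = <a| rho |a>
for i, a in enumerate(basis):
    p = np.real(np.vdot(a, rho @ a))    # vdot conjugates the first argument
    print(f"P(outcome {i}) = {p:.2f}")  # 0.70 and 0.30 for this rho
[/code]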

Now the interpretation is shifted to the interpretation of probabilities. QT makes no other predictions about the outcome of measurements than these probabilities, and now we have to think about the meaning of probabilities. It's clear that probability theory also is given as an axiomatic set of rules (e.g., the Kolmogorov axioms), which is unproblematic since it's just a mathematical abstraction. The question now is how to interpret probabilities in the sense of physical experiments. Physics is about the testing of hypotheses about real-world experiments, and thus we must make this connection between probabilities and outcomes of such real-world measurements. I don't see how else you can define this connection than by repeating the measurement on a sufficiently large ensemble of identically and independently prepared experimental setups. The larger the ensemble, the higher the statistical significance for proving or disproving the predicted probabilities for the outcome of measurements.

The Bayesian view, for me, is just a play with words, trying to give a physically meaningful interpretation of probability for a single event. In practice, however, you cannot prove anything about a probabilistic statement by looking only at a single event. If I predict a 10% chance of rain tomorrow, then whether it rains or doesn't rain on the next day tells us nothing about the validity of my probabilistic prediction. The only thing one can say is that for many days with the same weather conditions as today, on average it will rain on the next day in 10% of all cases; no more, no less. Whether it will rain or not on one specific date cannot be predicted by giving a probability.

So for the practice of physics the Bayesian view of probabilities is simply pointless, because it doesn't tell us anything about the outcome of real experiments.
 
  • #13
So for the practice of physics the Bayesian view of probabilities is simply pointless, because it doesn't tell us anything about the outcome of real experiments.
I think it goes beyond physics. My background is in applied math, and it invariably uses the frequentist interpretation, which is basically the same as the ensemble interpretation. To me this Bayesian stuff seems just a play on words as well.

That said - and I can't comment because I didn't take those particular courses - applied Bayesian modelling and inference is widely taught; courses on it were certainly available where I went. I am not convinced, however, that it requires the Bayesian interpretation.

Thanks
Bill
 
  • #14
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
The Bayesian view, for me, is just a play with words, trying to give a physically meaningful interpretation of probability for a single event. In practice, however, you cannot prove anything about a probabilistic statement by looking only at a single event.
Well, the history of the universe only happens once, so we're stuck with having to reason about singular events, if we are to describe the universe.

More concretely, let's say that we have a theory that predicts that some outcome of an experiment has a 50/50 probability. So you perform the experiment 100 times, say, and find that the outcome happens 49 times out of 100. So that's pretty good. But logically speaking, how is drawing a conclusion based on 100 trials any more certain than drawing a conclusion based on 1 trial, or 10 trials? The outcome, 49 out of 100, is consistent with just about any probability at all. You haven't narrowed down the range of probabilities at all. What have you accomplished, then? You've changed your confidence, or belief, that the probability is around 1/2.

Mathematically speaking, the frequentist account of probability is nonsense. Probability 1/2 doesn't mean that something will happen 1/2 of the time, no matter how many experiments you perform. And it's nonsensical to add "...in the limit as the number of trials goes to infinity...", also. There is no guarantee that relative frequencies approach any limit whatsoever.
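
To put a rough number on "consistent with just about any probability", here is a quick sketch (illustrative only) of the binomial likelihood of getting exactly 49 successes in 100 trials for various assumed values of the per-trial probability p. It is nonzero for every p strictly between 0 and 1, though sharply peaked near 0.49:

[code]
from math import comb

n, k = 100, 49  # trials and observed successes, as in the example above

def likelihood(p):
    """Binomial probability of exactly k successes in n trials, per-trial probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p = {p}: likelihood of 49/100 = {likelihood(p):.3e}")
[/code]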
 
  • #15
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
I think it goes beyond physics. My background is in applied math and it invariably uses the frequentest interpretation which is basically the same as the ensemble interpretation. To me this Bayesian stuff seems just a play on words as well.
The frequentist interpretation really doesn't make any sense to me. As a statement about ensembles, it doesn't make any sense, either. If you perform an experiment, such as flipping a coin, there is no guarantee that the relative frequency approaches anything at all in the limit as the number of coin tosses goes to infinity. Furthermore, since we don't really ever do things infinitely often, what can you conclude, as a frequentist, from 10 trials of something? Or 100? Or 1000? You can certainly dutifully write down the frequency, but every time you do another trial, that number is going to change, by a tiny amount. Is the probability changing every time you perform the experiment?
 
  • #16
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
The frequentist interpretation really doesn't make any sense to me. As a statement about ensembles, it doesn't make any sense, either. If you perform an experiment, such as flipping a coin, there is no guarantee that the relative frequency approaches anything at all in the limit as the number of coin tosses goes to infinity. Furthermore, since we don't really ever do things infinitely often, what can you conclude, as a frequentist, from 10 trials of something? Or 100? Or 1000? You can certainly dutifully write down the frequency, but every time you do another trial, that number is going to change, by a tiny amount. Is the probability changing every time you perform the experiment?
In practice, people who claim to be doing "frequentist probability" use "confidence intervals". So if you perform an experiment 100 times, and you get a particular outcome 49 times, then you can say something like: the probability is 49% +/- E, where E is the half-width of a confidence interval. But it isn't really true. The "true" probability could be 99%. Or the "true" probability could be 1%. You could have just had a weird streak of luck. The choice of E is pretty much ad hoc.
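
For concreteness, the usual normal-approximation (Wald) interval for 49 successes out of 100 at a conventional 95% level looks like this; the 95% level, and hence the z value, is exactly the kind of conventional choice being called ad hoc above:

[code]
from math import sqrt

n, k = 100, 49
p_hat = k / n                            # observed relative frequency
z = 1.96                                 # conventional choice for a 95% interval
E = z * sqrt(p_hat * (1 - p_hat) / n)    # normal-approximation (Wald) half-width

print(f"estimate {p_hat:.2f} +/- {E:.3f}")   # roughly 0.49 +/- 0.098
[/code]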
 
  • #17
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
John Baez gives a discussion of Bayesianism here:
http://math.ucr.edu/home/baez/bayes.html

Here's a snippet:

It's not at all easy to define the concept of probability. If you ask most people, a coin has probability 1/2 to land heads up if, when you flip it a large number of times, it lands heads up close to half the time. But this is fatally vague!

After all what counts as a "large number" of times? And what does "close to half" mean? If we don't define these concepts precisely, the above definition is useless for actually deciding when a coin has probability 1/2 to land heads up!

Say we start flipping a coin and it keeps landing heads up, as in the play Rosencrantz and Guildenstern are Dead by Tom Stoppard. How many times does it need to land heads up before we decide that this is not happening with probability 1/2? Five? Ten? A thousand? A million?

This question has no good answer. There's no definite point at which we become sure the probability is something other than 1/2. Instead, we gradually become convinced that the probability is higher. It seems ever more likely that something is amiss. But, at any point we could turn out to be wrong. We could have been the victims of an improbable fluke.

Note the words "likely" and "improbable". We're starting to use concepts from probability theory - and yet we are in the middle of trying to define probability! Very odd. Suspiciously circular.

Some people try to get around this as follows. They say the coin has probability 1/2 of landing heads up if over an infinite number of flips it lands heads up half the time. There's one big problem, though: this criterion is useless in practice, because we can never flip a coin an infinite number of times!

Ultimately, one has to face the fact that probability cannot be usefully defined in terms of the frequency of occurrence of some event over a large (or infinite) number of trials. In the jargon of probability theory, the frequentist interpretation of probability is wrong.

Note: I'm not saying probability has nothing to do with frequency. Indeed, they're deeply related! All I'm saying is that we can't usefully define probability solely in terms of frequency.
 
  • #18
First, I don't know enough QM to have any opinion on interpretations of QM, but I do use Bayesian statistics in other things (e.g. analysis of medical tests)

Well, maybe I'm just too biased by my training as a physicist to make sense of the whole Bayesian interpretation of probabilities. ...

So for the practice of physics the Bayesian view of probabilities is simply pointless, because it doesn't tell us anything about the outcome of real experiments.
You should really invest a little time into it. The Bayesian approach to probability is more in line with the scientific method than the frequentist approach.

In the scientific method you formulate a hypothesis, then you acquire data, then you use that data to decide to keep or reject your hypothesis. In other words, you want to determine the likelihood of the hypothesis given the data, which is exactly what Bayesian statistics calculates. Unfortunately, frequentist statistical tests simply don't measure that. Instead they calculate the likelihood of the data given the hypothesis.
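
A minimal sketch of that distinction, using the medical-test setting mentioned above (all numbers here are made up for illustration): P(positive | disease), the likelihood of the data given the hypothesis, is high, yet P(disease | positive), the quantity you actually want, can still be small.

[code]
# Illustrative numbers only - prevalence, sensitivity and specificity are assumptions
prior = 0.01          # P(disease)
sensitivity = 0.95    # P(positive | disease): likelihood of the data given the hypothesis
specificity = 0.95    # P(negative | no disease)

# Bayes' theorem: P(disease | positive) = P(positive | disease) P(disease) / P(positive)
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(positive | disease) = {sensitivity:.2f}")
print(f"P(disease | positive) = {posterior:.2f}")   # about 0.16 with these numbers
[/code]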

I think that the big problem with Bayesian statistics right now is the lack of standardized tests. If you say "my t-test was significant with p=0.01" then everyone understands what mathematical test you ran on your data and what you got. There is no corresponding "Bayesian t-test" that you can simply report and expect everyone to know what you did.

Most likely, your preference for frequentist statistics is simply a matter of familiarity, born of the fact that the tools are well-developed and commonly-used. This seems to be the case for bhobba also.
My background is in applied math, and it invariably uses the frequentist interpretation, which is basically the same as the ensemble interpretation.
 
  • #19
You should really invest a little time into it. The Bayesian approach to probability is more in line with the scientific method than the frequentist approach.
Actually, they are merging (frequentist and Bayesian).

"Efron also compares more recent statistical theories such as frequentism to Bayes' theorem, and looks at the newly proposed fusion of Bayes' and frequentist ideas in Empirical Bayes. Frequentism has dominated for a century and does not use prior information, considering future behavior instead"

Read more at: http://phys.org/news/2013-06-bayesian-statistics-theorem-caution.html#jCp
 
  • #20
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
First, I don't know enough QM to have any opinion on interpretations of QM, but I do use Bayesian statistics in other things (e.g. analysis of medical tests)

You should really invest a little time into it. The Bayesian approach to probability is more in line with the scientific method than the frequentist approach.
I don't think that the frequentist interpretation of probability can be taken seriously, for the reasons that John Baez gives in the passage I quoted. On the other hand, a purely subjective notion of probability doesn't seem like the whole story, either.

For example, one could model a coin flip by using an unknown parameter [itex]h[/itex] reflecting the probability of getting a "heads". One could start off with the completely unknown probability distribution on [itex]h[/itex]: it could be anything between 0 and 1. Then you flip the coin a few times, and you use Bayes' theorem to get an adjusted probability distribution on the parameter [itex]h[/itex]. For example, if I flip twice, and get 1 head and 1 tail, then the adjusted probability distribution is [itex]P(h) = 6h (1-h)[/itex], which has a maximum at [itex]h=\frac{1}{2}[/itex].
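
A minimal sketch of that update, done numerically on a grid (this is just the standard Bayes'-theorem bookkeeping for the example above, nothing more):

[code]
import numpy as np

# Grid approximation of the update described above: flat prior on the unknown h,
# multiplied by the likelihood of 1 head and 1 tail, then normalized.
h = np.linspace(0, 1, 1001)
prior = np.ones_like(h)                  # "completely unknown": uniform on [0, 1]
likelihood = h * (1 - h)                 # one head, one tail
posterior = prior * likelihood
posterior /= posterior.sum() * (h[1] - h[0])   # normalize to a density

print(h[np.argmax(posterior)])           # peaks at h = 0.5
# The normalized curve matches the closed form 6 h (1 - h) quoted above.
[/code]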

The weird thing here is that you have probability appearing as an unknown parameter, [itex]h[/itex], and you also have it appearing as a subjective likelihood of that parameter. It doesn't make sense to me that it could all be subjective probability, because how can there be an unknown subjective probability [itex]h[/itex]?
 
  • #21
The frequentist interpretation really doesn't make any sense to me. As a statement about ensembles, it doesn't make any sense, either. If you perform an experiment, such as flipping a coin, there is no guarantee that the relative frequency approaches anything at all in the limit as the number of coin tosses goes to infinity. Furthermore, since we don't really ever do things infinitely often, what can you conclude, as a frequentist, from 10 trials of something? Or 100? Or 1000? You can certainly dutifully write down the frequency, but every time you do another trial, that number is going to change, by a tiny amount. Is the probability changing every time you perform the experiment?
The law of large numbers is rigorously provable from the axioms of probability.

What it says is that if a trial (experiment or whatever) is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified outcome occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be.

This guarantees that a sufficiently large, but finite, number of trials exists (i.e. an ensemble) that for all practical purposes contains the outcomes in proportion to their probabilities.
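
A small simulation in that spirit (illustrative only - and, as the posts below stress, any finite run can still stray from the probability):

[code]
import random

random.seed(0)              # fixed seed so the illustration is repeatable
p = 0.5                     # per-trial probability of the outcome

for n in (10, 100, 10_000, 1_000_000):
    successes = sum(random.random() < p for _ in range(n))
    print(f"n = {n:>9}: relative frequency = {successes / n:.4f}")
[/code]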

It seems pretty straightforward to me, but each to his own I suppose. This is applied math after all. I remember when I was doing my degree I used to get upset at careless stuff like treating dx as a small first-order quantity - which of course it isn't - but things are often simpler doing that. Still, the criticisms I raised then are perfectly valid, and it's only in a rigorous treatment that they disappear - but things become a lot more difficult. If that's what appeals to you, be my guest - I like to think I have come to terms with such things these days. As one of my statistics professors said to me - and I think it was the final straw that cured me of this sort of stuff - "I can show you books where all the questions you ask are fully answered - but you wouldn't read them." He then gave me this deep tome on the theory of statistical inference - and guess what - he was right.

Thanks
Bill
 
  • #22
Most likely, your preference for frequentist statistics is simply a matter of familiarity, born of the fact that the tools are well-developed and commonly-used. This seems to be the case for bhobba also.
That's true.

I too did a fair amount of statistics in my degree, deriving and using stuff like the Student t distribution, degrees of freedom, etc. I always found the frequentist interpretation more than adequate.

That's not to say the Bayesian view is not valid - it is - it's just that I never found the need to move to that view.

Thanks
Bill
 
  • #23
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
The law of large numbers is rigorously provable from the axioms of probability.

What it says is that if a trial (experiment or whatever) is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified outcome occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be.
But the meaning of "tends to" is the part that makes no sense, under a frequentist account. What does that mean? It is possible, when flipping coins, to have a run of 1,000 flips in a row that are all heads. It is possible to have a run of 1,000,000 flips in a row with all heads. So what does this "tends to" mean? Well, you can say that such a run of heads is unlikely, but according to what meaning of "unlikely"?

This guarantees that a sufficiently large, but finite, number of trials exists (i.e. an ensemble) that for all practical purposes contains the outcomes in proportion to their probabilities.
It doesn't guarantee anything. You can calculate a number, the likelihood that the relative frequency will differ from the probability by more than some specific amount. But what's the meaning of that number? It can't be given a frequentist interpretation as a probability.
 
  • #24
But the meaning of "tends to" is the part that makes no sense, under a frequentist account. What does that mean? It is possible, when flipping coins, to have a run of 1,000 flips in a row that are all heads. It is possible to have a run of 1,000,000 flips in a row with all heads. So what does this "tends to" mean? Well, you can say that such a run of heads is unlikely, but according to what meaning of "unlikely"?
The meaning of such things lies in a rigorous development of probability. That's how it is proved, and it rests on ideas like almost-sure convergence and convergence in probability.

You are putting the cart before the horse. In the frequentist interpretation you have an ensemble that is the EXPECTED outcome of a very large number of trials - and that's what the law of large numbers converges to. Sure, you can flip any number of heads, but that is not the expected value, which for a large number of flips is half and half. Then one imagines a trial as picking a random element from that ensemble - you can pick any element of that ensemble every time, but the ensemble contains the objects in the correct proportion.

I agree these ideas are subtle, and many great mathematicians such as Kolmogorov wrote difficult tomes putting all this stuff on a firm basis, e.g. the strong law of large numbers. But on a firm basis they most certainly are.

If you really want to investigate it without getting into some pretty hairy and advanced pure math, then Feller's classic is a good place to start:
http://ruangbacafmipa.staff.ub.ac.id/files/2012/02/An-Introduction-to-probability-Theory-by-William-Feller.pdf

As you will see there is a lot of stuff that leads up to the proof of the law of large numbers and in volume 1 Feller does not give the proof of the very important Strong Law Of Large Numbers - he only gives the proof of the Weak Law - you have to go to volume 2 for that - and the level of math in that volume rises quite a bit.

Feller discusses the issues you raise and it is very subtle indeed - possibly even more subtle than you realize. However if that's what interests you then Feller is a good place to start.

Just as an aside I set myself the task of working through both volumes - bought them both. Got through volume 1 but volume 2 was a bit too tough and never did finish it.

Thanks
Bill
 
  • #25
I always found the frequentist interpretation more than adequate.
Yes, I have also, particularly with respect to the wide variety of powerful software with pre-packaged standard statistical tests. Personally, I think that the hard-core Bayesians need to spend less time promoting their viewpoint and more time developing and standardizing their tools.
 
