# QBism - Is it an extension of "The Monty Hall Problem"?

1. Aug 29, 2014

### hankaaron

I just watched a video discussion on the modern interpretations of the wave function. In it I was introduced to QBism, i.e. Quantum Bayesianism. To me it sounded a lot like the famous Monty Hall problem. Is QBism's notion of probability similar to that?

2. Aug 29, 2014

### atyy

No, it has nothing to do with the Monty Hall problem, apart from both having something to do with probability.
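For anyone who hasn't met it, the Monty Hall game is easy to simulate. The sketch below is my own illustration (the door numbering and the host's tie-breaking rule are arbitrary choices); switching wins roughly two thirds of the time:

```python
import random

def play(switch: bool) -> bool:
    """Simulate one Monty Hall game; return True if the player wins the car."""
    car = random.randrange(3)      # door hiding the car
    pick = random.randrange(3)     # player's initial choice
    # Host opens a door that is neither the pick nor the car
    # (deterministic tie-break when the host has two options).
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

n = 100_000
stay_wins = sum(play(switch=False) for _ in range(n)) / n
switch_wins = sum(play(switch=True) for _ in range(n)) / n
print(f"stay: {stay_wins:.3f}, switch: {switch_wins:.3f}")  # ≈ 1/3 vs ≈ 2/3
```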

3. Aug 29, 2014

### hankaaron

Last edited: Aug 29, 2014
4. Aug 29, 2014

### bhobba

It's simply a different view of probability.

The basis of probability is the Kolmogorov axioms - not subjective knowledge. In those axioms probability is an abstract thing.

The Kolmogorov axioms are equivalent to the so-called Cox axioms, in which probability is simply a state of knowledge. The Kolmogorov axioms, via the law of large numbers (which is derivable from those axioms), also give the frequentist view.

The Monty Hall problem, and QM, can be viewed via either interpretation of probability. The frequentist view leads to something along the lines of the ensemble interpretation, the subjectivist view to something along the lines of Copenhagen or Quantum Bayesianism.

I have to say, however, that my background is applied math, and most applied mathematicians view it the frequentist way because a trial is a very concrete thing. Those into Bayesian statistics are an exception.

Thanks
Bill

5. Aug 29, 2014

### atyy

Last edited by a moderator: May 6, 2017
6. Aug 29, 2014

### bhobba

Good point - Bayes' rule is NOT the Bayesian view of probability - they are two different things.

Bayes' rule can be viewed under any interpretation of probability - it follows from the Kolmogorov axioms.
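To illustrate that Bayes' rule is just arithmetic that any interpretation can use, here is a minimal numeric sketch. The diagnostic-test numbers are invented for illustration, not taken from the thread:

```python
# Bayes' rule as pure arithmetic: P(A|B) = P(B|A) * P(A) / P(B).
# Hypothetical diagnostic-test numbers, chosen only for illustration.
p_disease = 0.01             # prior P(A)
p_pos_given_disease = 0.95   # likelihood P(B|A)
p_pos_given_healthy = 0.05   # false-positive rate P(B|~A)

# Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A).
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")
```

Nothing here forces a frequentist or Bayesian reading; the same arithmetic works under either interpretation.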

Thanks
Bill

7. Aug 30, 2014

### microsansfil

Tossing a “fair” coin, following The Logic of Science by E. T. Jaynes:

prob = 1/2 is not a property of the coin.
prob = 1/2 is not a joint property of coin and tossing mechanism.
Any probability assignment starts from a prior probability.

http://www.nmsr.org/qbism.pdf
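As an illustration of "any probability assignment starts from a prior probability", here is a minimal sketch of updating a prior for a coin. The conjugate Beta prior is my choice of example, not something taken from the linked paper:

```python
# Bayesian updating for a coin with a conjugate Beta(a, b) prior.
# Beta(a, b) prior + observed heads/tails -> Beta(a + heads, b + tails).
def update(a: float, b: float, heads: int, tails: int) -> tuple[float, float]:
    """Return the posterior Beta parameters after observing the tosses."""
    return a + heads, b + tails

a, b = 1.0, 1.0                  # Beta(1, 1): uniform prior, no knowledge of the coin
a, b = update(a, b, heads=7, tails=3)
posterior_mean = a / (a + b)     # point estimate of P(heads) after 10 tosses
print(posterior_mean)            # 8/12 = 2/3
```

The prob = 1/2 assignment is then just the posterior mean you get before seeing any data, not a property of the coin itself.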

Patrick

8. Aug 30, 2014

### microsansfil

A point of view that I have read:

Kolmogorov's mathematical theory is to probability what differential geometry is to general relativity.

It gives a formal, deductive framework, but it has no bearing on the interpretation of that framework when it is applied to observations, bets, decisions, etc.

The difference between frequentist and Bayesian does not concern the formal deductive framework for carrying out the calculations, but the interpretation of the data and results when we speak of "probabilities".

Jaynes gives examples in classical physics where, depending on what one calls "probability", one seems to get different reasoning, calculations, and results.

Patrick

9. Aug 30, 2014

### bhobba

That's what you have an interpretation for.

There are two main ones:
1. Bayesian - probability is a state of knowledge.
2. Frequentist - you simply associate this abstract thing called probability with objects, and the law of large numbers shows that over a large number of trials the probability is the proportion of trials in which the outcome occurs.

Just out of interest Terry Tao has posted some nice proofs of this:
http://terrytao.wordpress.com/2008/06/18/the-strong-law-of-large-numbers/

This is absolutely fundamental to the modern view of probability.
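The frequency-converges-to-probability statement is easy to see numerically. A small sketch of my own, simulating a fair coin with Python's `random` module:

```python
import random

random.seed(1)

# By the (strong) law of large numbers, the relative frequency of heads
# over n tosses of a p-coin converges to p as n grows.
def relative_frequency(n: int, p: float = 0.5) -> float:
    """Fraction of heads in n simulated tosses of a coin with P(heads) = p."""
    return sum(random.random() < p for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))  # drifts toward 0.5 as n increases
```

This is of course only a numerical illustration; Tao's post linked above gives the actual proofs.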

From the well known standard text - Feller - page 3:
'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied.'

Thanks
Bill

Last edited: Aug 30, 2014
10. Aug 30, 2014

### bhobba

It's obvious that, if such were the case, the differing views would be leading to errors. That's why the truth lies in the axioms.

Thanks
Bill

Last edited: Aug 30, 2014
11. Aug 30, 2014

### microsansfil

It's not a question of truth or falsity. The results are different. I will search for the article.

Patrick

12. Aug 30, 2014

### bhobba

Then it's almost certainly wrong - simple as that.

If such were true it would be big news, earning the discoverer an instant Fields Medal, and it would rank among the great seminal discoveries of mathematics, along the lines of Gödel's theorem.

I had a look at the reference:
'For many years there has been controversy over “frequentist” versus “Bayesian” methods of inference, in which the writer has been an outspoken partisan on the Bayesian side. The record of this up to 1981 is given in an earlier book (Jaynes, 1983). In these old works there was a strong tendency, on both sides, to argue on the level of philosophy or ideology. We can now hold ourselves somewhat aloof from this because, thanks to recent work, there is no longer any need to appeal to such arguments. We are now in possession of proven theorems and masses of worked-out numerical examples. As a result, the superiority of Bayesian methods is now a thoroughly demonstrated fact in a hundred different areas. One can argue with a philosophy; it is not so easy to argue with a computer printout, which says to us: “Independently of all your philosophy, here are the facts of actual performance.” We point this out in some detail whenever there is a substantial difference in the final results. Thus we continue to argue vigorously for the Bayesian methods; but we ask the reader to note that our arguments now proceed by citing facts rather than proclaiming a philosophical or ideological position.'

If true, like I said, that would be BIG news.

We have a professor of statistics and probability that posts here:
https://www.physicsforums.com/member.php?u=401042

I suggest you get his opinion before accepting such startling news.

Thanks
Bill

Last edited: Aug 30, 2014
13. Aug 30, 2014

### microsansfil

The Fields Medal is reserved for the person who demonstrates that a mathematical axiomatic system speaks about semantics (truth/falsity).

Any mathematical axiom is purely syntactic. To do physics we need semantics (observation, reasoning, decisions, ...).

Example of Diffusion

Patrick

Last edited: Aug 30, 2014
14. Aug 30, 2014

### bhobba

It's for any great mathematical discovery, such as those made by Terry Tao, Witten and Nash - for entirely different things not related to axiomatics.

Showing that two different interpretations that are equivalent to the same axioms give different results would be a mind-blowing discovery of seminal importance.

That's why I suggest you get the view of a professional in the field, because to me it's obviously incorrect.

Thanks
Bill

Last edited: Aug 30, 2014
15. Aug 30, 2014

### bhobba

Of course it is.

But that in no way changes logic. The same axioms cannot lead to different results.

Both the frequentist and Bayesian views use exactly the same axioms. If you get two different results from the same axioms, you have discovered they are inconsistent. For probability that would be, well, mind-blowing news.

I also did a search on frequentist probability proven wrong.

Nothing came up.

I think the conclusion is pretty obvious - but contact the professor if you like.

Thanks
Bill

16. Aug 30, 2014

### bhobba

Your proof of that claim, rather than philosophical waffling, would prove most interesting.

Thanks
Bill

17. Aug 30, 2014

### stevendaryl

Staff Emeritus
I think it may depend on whether we're talking about the philosophy of probability or about methodology. The way that Bayesians and Frequentists analyze data is slightly different, even though in the limit of infinitely many trials the differences become negligible (because of the law of large numbers).

If you only have a finite number of trials (which, of course, you always do), then the frequentist has to make some judgments about the significance of results. Were there enough trials to get good statistics? At some point, such a judgment requires an ad hoc parameter (a confidence level).

In contrast, the methods of Bayesian statistics are indifferent as to the number of trials. You can get information from a single trial. You can get more information from 1000 trials, but there is no magic number of trials.

There could definitely be some situation where the Bayesian and Frequentist methodologies lead to different conclusions about a study.
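To make the methodological contrast concrete, here is a sketch of both analyses applied to the same small data set. The data (9 heads in 10 tosses) and the choice of a one-sided test are my own illustration, not from the discussion:

```python
from math import comb

# Same data, two analyses: 9 heads in 10 tosses of a possibly biased coin.

# Frequentist: one-sided p-value under H0: p = 0.5,
# i.e. P(X >= 9) for X ~ Binomial(10, 0.5), compared against a
# pre-chosen significance level (the "ad hoc parameter").
p_value = sum(comb(10, k) for k in (9, 10)) / 2**10
print(f"p-value = {p_value:.4f}")          # 11/1024 ≈ 0.0107

# Bayesian: uniform Beta(1, 1) prior -> Beta(1+9, 1+1) = Beta(10, 2)
# posterior; report the posterior mean of p, no accept/reject threshold.
posterior_mean = 10 / 12
print(f"posterior mean = {posterior_mean:.3f}")  # 5/6 ≈ 0.833
```

Both methods use the same Kolmogorov machinery; they differ in what question they ask of the data and what auxiliary choices (significance level vs prior) they require.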

18. Aug 30, 2014

### bhobba

I have never heard of any.

To apply the frequentist view you need some probability that is, for all practical purposes, zero. But that cutoff can be anything. Do you think a probability of 1/googolplex^googolplex taken as zero would ever have the slightest practical consequence? And even if you can think of a case, why would simply taking a probability below whatever the sensitivity of the situation is not fix it?

Thanks
Bill

19. Aug 30, 2014

### stevendaryl

Staff Emeritus
Here's something that mathematical physicist John Baez wrote years ago, and when I read it, I was so convinced that I assumed that frequentism was just one of those relics of the past. Since I don't really hang out with statisticians much, I didn't realize that there was still a debate about it.

http://math.ucr.edu/home/baez/bayes.html

20. Aug 30, 2014

### stevendaryl

Staff Emeritus
I know that nobody is going to believe this, but when I posted that link, I had completely forgotten that John Baez wrote that article in response to conversations with me.