QBism - Is it an extension of "The Monty Hall Problem"?

In summary: The two main camps in this dispute are those who hold that the probabilities associated with a particular event represent a state of knowledge that the observer possesses, and those who hold that probabilities are the proportions obtained from repeated trials of an experiment. The "frequentist" camp contends that probabilities are proportions over many trials, while the "Bayesian" camp contends that probabilities are states of knowledge. In the discussion below, the frequentist position is associated with something like the ensemble interpretation of quantum mechanics, while the Bayesian position is associated with something like Copenhagen or QBism, since Bayesianism takes account of the subjective knowledge of the observer.
  • #1
hankaaron
I just watched a video discussion on the modern interpretations of the wave function. In it I was introduced to QBism, i.e. Quantum Bayesianism. To me it sounded a lot like the famous Monty Hall problem. Is QBism's notion of probability similar to that?
 
  • #2
No, it has nothing to do with the Monty Hall problem, apart from both having something to do with probability.
 
  • #4
It's simply a different view of probability.

The basis of probability is the Kolmogorov axioms - not subjective knowledge. In those axioms probability is an abstract thing.

The Kolmogorov axioms are equivalent to the so-called Cox axioms, under which probability is simply a state of knowledge. The Kolmogorov axioms, via the law of large numbers (which is derivable from those axioms), also give the frequentist view.

The Monty Hall problem, and QM, can be viewed via either interpretation of probability. The frequentist view leads to something along the lines of the ensemble interpretation, the subjectivist view to something along the lines of Copenhagen or Quantum Bayesianism.

I have to say, however, that my background is applied math, and most applied mathematicians view it the frequentist way because a trial is a very concrete thing. Those into Bayesian statistics are an exception.

Thanks
Bill
 
  • #6
atyy said:
Sure, in the sense that the Monty Hall problem uses Bayes Rule. However, the probabilities in Bayes Rule do not have to be subjective,

Good point - Bayes rule is NOT the Bayesian view of probabilities - they are two different things.

Bayes rule can be viewed under any interpretation of probability - it follows from the Kolmogorov axioms.
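To make the connection concrete, here is the standard Bayes-rule calculation for Monty Hall (it works under any interpretation of the probabilities). Suppose you pick door 1 and the host, who never opens your door or the door hiding the car, opens door 3. Let [itex]C_i[/itex] be the hypothesis that the car is behind door [itex]i[/itex] and [itex]O_3[/itex] the event that the host opens door 3. Then [itex]P(C_1) = P(C_2) = P(C_3) = 1/3[/itex], [itex]P(O_3|C_1) = 1/2[/itex], [itex]P(O_3|C_2) = 1[/itex], [itex]P(O_3|C_3) = 0[/itex], so

[itex]P(C_2|O_3) = \frac{P(O_3|C_2)P(C_2)}{\sum_i P(O_3|C_i)P(C_i)} = \frac{1 \cdot \tfrac{1}{3}}{\tfrac{1}{2} \cdot \tfrac{1}{3} + 1 \cdot \tfrac{1}{3} + 0} = \frac{2}{3}[/itex]

so switching wins with probability 2/3, whichever reading of "probability" you prefer.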

Thanks
Bill
 
  • #7
hankaaron said:
I just watched a video discussion on the modern interpretations of the wave function. In it I was introduced to QBism, i.e. Quantum Bayesianism.

Tossing a "fair" coin, following The Logic of Science by E. T. Jaynes:

prob = 1/2 is not a property of the coin.
prob = 1/2 is not a joint property of coin and tossing mechanism.
Any probability assignment starts from a prior probability.

http://www.nmsr.org/qbism.pdf
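As a small illustration of that last point (my own sketch, not taken from the linked paper): if the prior for the heads probability [itex]\theta[/itex] is uniform on [0,1] and you observe [itex]h[/itex] heads in [itex]n[/itex] tosses, Bayes' theorem gives the posterior

[itex]p(\theta \mid h, n) \propto \theta^h (1-\theta)^{n-h}[/itex]

which is a Beta[itex](h+1,\, n-h+1)[/itex] distribution with mean [itex](h+1)/(n+2)[/itex]. The "1/2" is then a statement about this distribution of belief, updated from a prior, not a physical property of the coin.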

Patrick
 
  • #8
A point of view that I have read:

Kolmogorov's mathematical theory is to probability as differential geometry is to general relativity.

It gives a formal deductive framework, but it says nothing about its interpretation when applied to observations, bets, decisions, etc.

The difference between frequentist and Bayesian does not concern the formal deductive framework for how to conduct the calculations, but the interpretation of the data and results when we speak of "probabilities".

Jaynes gives examples in classical physics where, depending on what one calls "probability", we seem to get different reasoning, calculations, and results.

Patrick
 
  • #9
microsansfil said:
It gives a formal deductive framework, but it says nothing about its interpretation when applied to observations, bets, decisions, etc.

That's what you have an interpretation for.

There are two main ones:
1. Bayesian - probability is a state of knowledge.
2. Frequentist - you simply associate this abstract thing called probability with objects and apply the law of large numbers to show that, for a large number of trials, the probability is the proportion of the outcome in those trials.

Just out of interest Terry Tao has posted some nice proofs of this:
http://terrytao.wordpress.com/2008/06/18/the-strong-law-of-large-numbers/

This is absolutely fundamental to the modern view of probability.
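To see the frequentist picture numerically, here is a minimal simulation (my own sketch, not from the linked post) showing the running proportion of heads approaching the underlying probability:

[code]
import random

# Minimal sketch: the running proportion of heads in repeated fair-coin
# trials approaches 1/2, as the law of large numbers guarantees.
random.seed(0)
heads = 0
for n in range(1, 100001):
    heads += random.random() < 0.5   # one Bernoulli(1/2) trial
    if n in (10, 100, 1000, 10000, 100000):
        print(n, heads / n)
[/code]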

From the well-known standard text, Feller, page 3:
'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied'

Thanks
Bill
 
Last edited:
  • #10
microsansfil said:
Jaynes gives examples in classical physics where, depending on what one calls "probability", we seem to get different reasoning, calculations, and results.

Obviously, if that is the case, the differing views are leading to errors. That's why the truth lies in the axioms.

Thanks
Bill
 
Last edited:
  • #11
bhobba said:
Its obvious, that being the case, differing views are leading to errors. That's why the truth lies in the axioms.

It's not a question of truth or falsity. The results are different. I will search for the article.

Patrick
 
  • #12
microsansfil said:
It's not a question of truth or falsity. The results are different. I will search for the article.

Then it's almost certainly wrong - simple as that.

If that were true it would be big news, earning the discoverer an instant Fields medal, and it would be one of the great seminal discoveries of mathematics, along the lines of Gödel's theorem.

I had a look at the reference:
'For many years there has been controversy over “frequentist” versus “Bayesian” methods of inference, in which the writer has been an outspoken partisan on the Bayesian side. The record of this up to 1981 is given in an earlier book (Jaynes, 1983). In these old works there was a strong tendency, on both sides, to argue on the level of philosophy or ideology. We can now hold ourselves somewhat aloof from this because, thanks to recent work, there is no longer any need to appeal to such arguments. We are now in possession of proven theorems and masses of worked-out numerical examples. As a result, the superiority of Bayesian methods is now a thoroughly demonstrated fact in a hundred different areas. One can argue with a philosophy; it is not so easy to argue with a computer printout, which says to us: “Independently of all your philosophy, here are the facts of actual performance.” We point this out in some detail whenever there is a substantial difference in the final results. Thus we continue to argue vigorously for the Bayesian methods; but we ask the reader to note that our arguments now proceed by citing facts rather than proclaiming a philosophical or ideological position.'

If true, like I said, that would be BIG news.

We have a professor of statistics and probability who posts here:
https://www.physicsforums.com/member.php?u=401042

I suggest you get his opinion before accepting such startling news.

Thanks
Bill
 
Last edited:
  • #13
bhobba said:
Then it's almost certainly wrong - simple as that.

If that were true it would be big news, earning the discoverer an instant Fields medal, and it would be one of the great seminal discoveries of mathematics, along the lines of Gödel's theorem.

The Fields medal is reserved for people who can demonstrate that mathematical axiomatics says something about semantics (truth/falsity) :smile:

Any mathematical axiom is purely syntactic. To do physics we need semantics (observation, reasoning, decisions, ...).

Example of Diffusion


Patrick
 
Last edited:
  • #14
microsansfil said:
The Fields medal is reserved for people who can demonstrate that mathematical axiomatics says something about semantics (truth/falsity) :smile:

It's for any great mathematical discovery, such as those made by Terry Tao, Witten and Nash - for entirely different things not related to axiomatics.

Showing that two different interpretations that are equivalent to the same axioms give different results would be a mind-blowing discovery of seminal importance.

That's why I suggest you get the view of a professional in the field, because to me it's obvious it's incorrect.

Thanks
Bill
 
Last edited:
  • #15
microsansfil said:
Any mathematical axiom is purely syntactic. To do physics we need semantics (observation, reasoning, decisions, ...).

Of course it is.

But that in no way changes logic. The same axioms cannot lead to different results.

Both frequentist and Bayesian use exactly the same axioms. If you get two different results from the same axioms, you have discovered that they are inconsistent. For probability that would be, well, mind-blowing news.

I also did a search on frequentist probability being proven wrong.

Nothing came up.

I think the conclusion is pretty obvious - but contact the professor if you like.

Thanks
Bill
 
  • #16
microsansfil said:
prob = 1/2 is not a property of the coin.

Your proof of that claim, rather than philosophical waffling, would prove most interesting.

Could you post it please?

Thanks
Bill
 
  • #17
bhobba said:
Then it's almost certainly wrong - simple as that.

I think it may depend on whether you're talking about the philosophy of probability or about methodology. The ways that Bayesians and frequentists analyze data are slightly different, even though in the limit of infinitely many trials the differences become negligible (because of the law of large numbers).

If you only have a finite number of trials (which, of course, you always do), then the frequentist has to make some judgments about the significance of the results. Were there enough trials to get good statistics? At some point, such a judgment requires an ad hoc parameter (confidence levels).

In contrast, the methods of Bayesian statistics are indifferent as to the number of trials. You can get information from a single trial. You can get more information from 1000 trials, but there is no magic number of trials.

There could definitely be some situation where the Bayesian and Frequentist methodologies lead to different conclusions about a study.
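As a concrete (entirely hypothetical) illustration of the two workflows: for the same small data set, a frequentist might report a p-value against the null hypothesis [itex]\theta = 1/2[/itex], while a Bayesian reports a posterior distribution over [itex]\theta[/itex], with no minimum number of trials required.

[code]
from math import comb

# Hypothetical data: 9 heads in 12 tosses.
n, h = 12, 9

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Frequentist: exact two-sided binomial test against theta = 1/2
# (sum the probabilities of all outcomes at least as extreme as h).
p_value = sum(binom_pmf(k, n, 0.5) for k in range(n + 1)
              if binom_pmf(k, n, 0.5) <= binom_pmf(h, n, 0.5))

# Bayesian: uniform prior on theta gives a posterior proportional to
# theta^h (1 - theta)^(n - h), i.e. Beta(h + 1, n - h + 1).
def unnorm_post(theta):
    return theta**h * (1 - theta)**(n - h)

grid = [(i + 0.5) / 10000 for i in range(10000)]   # midpoint rule on (0, 1)
norm = sum(unnorm_post(t) for t in grid)
p_theta_gt_half = sum(unnorm_post(t) for t in grid if t > 0.5) / norm
post_mean = (h + 1) / (n + 2)

print("frequentist p-value:", round(p_value, 4))
print("posterior mean of theta:", round(post_mean, 4))
print("posterior P(theta > 1/2):", round(p_theta_gt_half, 4))
[/code]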
 
  • #18
stevendaryl said:
There could definitely be some situation where the Bayesian and Frequentist methodologies lead to different conclusions about a study.

I have never heard of any.

To apply the frequentist view you need some probability that for all practical purposes is zero. But that can be anything. Do you think a probability of 1/googolplex^googolplex taken as zero would ever have the slightest practical consequence? And even if you can think of one, why wouldn't simply taking a probability below whatever the sensitivity of the situation is fix it?

Thanks
Bill
 
  • #19
Here's something that mathematical physicist John Baez wrote years ago, and when I read it, I was so convinced that I assumed that frequentism was just one of those relics of the past. Since I don't really hang out with statisticians much, I didn't realize that there was still a debate about it.

It's not at all easy to define the concept of probability. If you ask most people, a coin has probability 1/2 to land heads up if when you flip it a large number of times, it lands heads up close to half the time. But this is fatally vague!

After all what counts as a "large number" of times? And what does "close to half" mean? If we don't define these concepts precisely, the above definition is useless for actually deciding when a coin has probability 1/2 to land heads up!

Say we start flipping a coin and it keeps landing heads up, as in the play Rosencrantz and Guildenstern are Dead by Tom Stoppard. How many times does it need to land heads up before we decide that this is not happening with probability 1/2? Five? Ten? A thousand? A million?

This question has no good answer. There's no definite point at which we become sure the probability is something other than 1/2. Instead, we gradually become convinced that the probability is higher. It seems ever more likely that something is amiss. But, at any point we could turn out to be wrong. We could have been the victims of an improbable fluke.

Note the words "likely" and "improbable". We're starting to use concepts from probability theory - and yet we are in the middle of trying to define probability! Very odd. Suspiciously circular.

Some people try to get around this as follows. They say the coin has probability 1/2 of landing heads up if over an infinite number of flips it lands heads up half the time. There's one big problem, though: this criterion is useless in practice, because we can never flip a coin an infinite number of times!

Ultimately, one has to face the fact that probability cannot be usefully defined in terms of the frequency of occurrence of some event over a large (or infinite) number of trials. In the jargon of probability theory, the frequentist interpretation of probability is wrong.

http://math.ucr.edu/home/baez/bayes.html
 
  • #20
stevendaryl said:
Here's something that mathematical physicist John Baez wrote years ago, and when I read it, I was so convinced that I assumed that frequentism was just one of those relics of the past. Since I don't really hang out with statisticians much, I didn't realize that there was still a debate about it.

http://math.ucr.edu/home/baez/bayes.html

I know that nobody is going to believe this, but when I posted that link, I had completely forgotten that John Baez wrote that article in response to conversations with me.
 
  • #21
stevendaryl said:
Here's something that mathematical physicist John Baez wrote years ago, and when I read it,

As usual he is correct.

The issue, though, is whether there is any practical situation for which there does not exist some very small probability below which it makes no difference.

It's the same issue in applying calculus. You need some Δt not equal to zero to actually use it - but since it's not zero it can't be exactly correct for measuring things. But in practice there are intervals whose square can, for all practical purposes, be taken as zero - which is the intuitive approach to it.

Thanks
Bill
 
  • #22
bhobba said:
As usual he is correct.

The issue, though, is whether there is any practical situation for which there does not exist some very small probability below which it makes no difference.

It's the same issue in applying calculus. You need some Δt not equal to zero to actually use it - but since it's not zero it can't be exactly correct for measuring things. But in practice there are intervals whose square can, for all practical purposes, be taken as zero - which is the intuitive approach to it.

Thanks
Bill

Right. Frequentism could be considered a pragmatic methodology for dealing with statistics, without making any claims about the philosophy of probability.

The thing that is annoying about Bayesianism is that none of its conclusions are ever exciting or revolutionary. The Bayesian can never make a definitive announcement of the form: "Our statistics show that cigarettes cause cancer" or "Our experiments show that parity is violated by weak decays." For the Bayesian, data never prove or disprove a claim; they just adjust the posterior probability of its being true. In contrast, scientists schooled in Karl Popper's falsifiability think in terms of theories being thrown out by experiment.

When it comes to figuring out what course of action to take in response to some crisis, Bayesianism vs. Falsifiability seems to me to make a difference.

Suppose there are two competing theories about the cause of some disease afflicting a patient: Theory A, and Theory B. Suppose there are three treatment options: Option 1, Option 2, Option 3.

Theory A says that Option 1 is the best treatment, and Option 2 is not nearly as good, and Option 3 is so bad, it will likely kill the patient.
Theory B says that Option 3 is the best treatment, and Option 2 is worse, and Option 1 will kill the patient.

The Bayesian analysis would proceed as follows:

Let [itex]P(\alpha)[/itex] be the subjective probability of theory [itex]\alpha[/itex]
Let [itex]P(j | \alpha)[/itex] be the probability of survival of the patient, given that theory [itex]\alpha[/itex] is true, and option [itex]j[/itex] is chosen.

Then we compute [itex]P(j)[/itex], the probability of survival given option [itex]j[/itex] as follows:

[itex]P(j) = \sum_\alpha P(\alpha) P(j | \alpha)[/itex]

So we pick the option that maximizes the probability of survival.

I would think that justifying that choice would be very difficult for the frequentist. The frequentist would say that there is no probability of theory A versus theory B. Either one or the other is correct, even if we don't know which. So either

[itex]P(j) = P(j | A)[/itex]

or

[itex]P(j) = P(j | B)[/itex]

but we don't know which. Combining different theories to get an overall probability makes no sense, from a frequentist point of view.
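A minimal numerical sketch of the Bayesian calculation above (the probabilities are invented purely for illustration):

[code]
# Hypothetical survival probabilities P(j | alpha) for options 1..3
# under theories A and B, plus subjective probabilities P(alpha).
p_theory = {"A": 0.6, "B": 0.4}
p_survive = {
    "A": {1: 0.9, 2: 0.6, 3: 0.1},
    "B": {1: 0.1, 2: 0.5, 3: 0.9},
}

# P(j) = sum over alpha of P(alpha) * P(j | alpha)
p_j = {j: sum(p_theory[a] * p_survive[a][j] for a in p_theory)
       for j in (1, 2, 3)}

best = max(p_j, key=p_j.get)
print(p_j)                 # roughly {1: 0.58, 2: 0.56, 3: 0.42}
print("pick option", best) # option 1 maximizes survival probability here
[/code]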
 
  • #23
bhobba said:
But that in no way changes logic. The same axioms cannot lead to different results.
This is absurd.

This demonstrates that you do not know what you are talking about when it comes to mathematics.

I have already given you a very simple example based on the axioms of a distance:
http://en.wikipedia.org/wiki/Taxicab_geometry
What a "circle" is depends on the interpretation of those axioms.

Physics is neither mathematics nor bhobba's philosophy.

Patrick
 
  • #24
bhobba said:
There are two main ones:
1. Bayesian - probability is a state of knowledge.
2. Frequentist - you simply associate this abstract thing called probability with objects and apply the law of large numbers to show that, for a large number of trials, the probability is the proportion of the outcome in those trials.

The two main ones are
1. Epistemic
2. Ontic

In the ontic camp there is also the Popper interpretation, which is different from the frequentist one.

And the epistemic camp cannot be reduced to the Bayesian interpretation.

Patrick
 
  • #25
microsansfil said:
This is absurd.

I simply don't know what to say.

If you believe it's OK for two implementations of exactly the same axioms to give differing results then your math teachers were different from mine - and I had quite a few highly qualified ones for all sorts of subjects, from Statistical Modelling to Hilbert Spaces.

It would also contradict standard texts like the Feller text I quoted from.

Thanks
Bill
 
  • #26
bhobba said:
If you believe its
I believe nothing; it is simply the mathematics. In mathematics, proof theory (axioms, syntax only) is different from model theory (semantics); the link between the two, in one direction, is Gödel's completeness theorem.

One wonders which of us is speaking about metaphysics?

Patrick
 
  • #27
bhobba said:
I simply don't know what to say.

If you believe it's OK for two implementations of exactly the same axioms to give differing results then your math teachers were different from mine - and I had quite a few highly qualified ones for all sorts of subjects, from Statistical Modelling to Hilbert Spaces.

I think it is absurd to call anything you said absurd. However, I think that the issue might be about how the axioms are used in practice. It is certainly not enough to say: Here are the axioms for probability. Here are the results of a study. Compute the probability that cigarettes cause cancer (or whatever). To apply a theory, you have to have some kind of rules for connecting the formulas on paper to something you do in a laboratory. The axioms do not tell you what those rules are.

Two people could agree on the axioms and disagree about how the axioms should be applied in a real-world case.
 
  • #28
microsansfil said:
1. Epistemic
2. Ontic

When I did my degree I did six compulsory subjects - Mathematical Statistics 1A, 1B, 2A, 2B, 3A and 3B.

It was also used in a number of other subjects I did, e.g. Operations Research, Mathematical Economics and Stochastic Modelling.

The view in every one of those subjects was Feller's. If you wanted a picture, you applied the law of large numbers and thought of the proportion in a large number of trials.

I have read books on things like Credibility Theory and Bayesian statistics that introduced the Bayesian view - in certain situations, like updating estimates, the Bayesian view was used because it led to a more direct understanding.

I even studied books like the following to see the proof of existence theorems:
https://www.amazon.com/dp/9812703713/?tag=pfamazon01-20

But never in all my studies have those terms been used.

The first thing I need to ask - how do they diverge from the Kolmogorov axioms?

Thanks
Bill
 
Last edited by a moderator:
  • #29
stevendaryl said:
However, I think that the issue might be about how the axioms are used in practice.

I am starting to get that feeling as well.

But much more detail needs to be forthcoming to sort it out.

I could go through that book to try and nut it out.

But gee - I really don't feel like doing that for claims of this nature - the onus should really be on the person making the claims.

Thanks
Bill
 
  • #30
microsansfil said:
I believe nothing; it is simply the mathematics.

Of course nothing applied is simply mathematics.

I gave a quote from Feller - exactly what is your issue with it?

I will repeat it for ease of reference.

'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied'

Thanks
Bill
 
Last edited:
  • #31
stevendaryl said:
In contrast, the methods of Bayesian statistics are indifferent as to the number of trials. You can get information from a single trial. You can get more information from 1000 trials, but there is no magic number of trials.

The results of Bayesian statistics do depend on the number of trials. Regardless of the prior - even if it places only an infinitesimally small probability on the true hypothesis - as long as the prior is non-zero on the true hypothesis, the Bayesian will converge to the true probability.

Bayesian statistics is guaranteed to work if one knows in advance all possible hypotheses. Which is why it is beautiful, and also impractical - because if we did, we would already have a candidate non-perturbative definition of all of string theory.
http://en.wikipedia.org/wiki/Bernstein–von_Mises_theorem
http://www.encyclopediaofmath.org/index.php/Bernstein-von_Mises_theorem

The other important theorem is the de Finetti representation theorem that allows Bayesians to be "effectively frequentist".
http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf
http://www.stats.ox.ac.uk/~steffen/teaching/grad/definetti.pdf
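A toy illustration of that convergence (my own sketch, not taken from the links above): two sharply different Beta priors for a coin's heads probability end up with nearly the same posterior mean once enough data accumulate.

[code]
import random

random.seed(1)
true_theta = 0.7                                     # unknown "true" heads probability
priors = {"optimist": (8, 2), "pessimist": (2, 8)}   # Beta(a, b) prior parameters

flips = [random.random() < true_theta for _ in range(10000)]

for n in (10, 100, 1000, 10000):
    h = sum(flips[:n])
    for name, (a, b) in priors.items():
        post_mean = (a + h) / (a + b + n)            # mean of Beta(a+h, b+n-h) posterior
        print(n, name, round(post_mean, 3))
[/code]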
 
Last edited:
  • #32
stevendaryl said:
I would think that justifying that choice would be very difficult for the frequentist. The frequentist would say that there is no probability of theory A versus theory B. Either one or the other is correct, even if we don't know which.

A Bayesian could say the same thing as well - it's just that our knowledge of which is correct is subjective.

A frequentist would say that in a large number of similar situations a certain proportion would be wrong. But in that application it seems a strange way to view it - for hypothesis testing I think Bayesian is better.

But take stochastic modelling and, say, a queue at a bank. It's quite natural to think that if we did a large number of trials we would find a certain proportion with this or that queue length. Of course one could view it the Bayesian way and think of each number as simply a subjective likelihood - but for me that isn't as visual.

As far as QM goes, for exactly the same reason I find the ensemble view more appealing - it's concrete, thinking of repetitions of the same observation - exactly as Vanhees says.

You can view it the Bayesian way and get something like Copenhagen, but as with the queue it seems a bit unnatural.

One thing is for sure: the Bayesian view is obviously the correct way to view Many Worlds. We know we must be in some world, but which one? However, I don't want to revive that long thread we had about it.

Thanks
Bill
 
Last edited:
  • #33
bhobba said:
When I did my degree
But never in all my studies have those terms been used.
I will not give you my CV.

There are different languages and cultures

Ontic := objective
Epistemic := subjective

Many different interpretations:
http://en.wikipedia.org/wiki/Frequentist_probability
http://en.wikipedia.org/wiki/Probabilistic_logic
http://en.wikipedia.org/wiki/Propensity_probability
http://en.wikipedia.org/wiki/Bayesian_probability
...

bhobba said:
The first thing I need to ask - how do they diverge from the Kolmogorov axioms?

Proof theory
Model theory
Gödel's completeness theorem

Kolmogorov's axioms are included in the theoretical framework of measure theory (mathematics).

In QM there is also, inter alia, quantum probability theory:

http://arxiv.org/abs/quant-ph/0601158

That isn't the Kolmogorov axioms (the mathematics of classical probability theory was subsumed into classical measure theory by Kolmogorov in 1933). Just as in general relativity there is differential geometry. However, the physics is not the same as the mathematics.

Patrick
 
Last edited:
  • #34
microsansfil said:
There are different languages and cultures

I think that's obvious.

microsansfil said:
Ontic := objective
Epistemic := subjective

It's good to know what you mean.

I did a scan of the book you linked to and it did not mention either of those terms.

microsansfil said:
Many different interpretations

I am aware there are a number of different views. That's not my issue. My issue is that, since they are all based on the Kolmogorov axioms, they must all give the same results.

My background is math, mate - I am well aware of what constitutes a valid proof.

I am well aware of Gödel's theorem, but its relevance here has me beat.

I am well aware of model theory. Its application to non-standard analysis is one of the most beautiful pieces of math I have ever seen - and one of the most difficult; it's decidedly non-trivial. Again, its relevance here has me beat.

microsansfil said:
Kolmogorov's axioms are included in the theoretical framework of measure theory (mathematics).

Mate - didn't I just post about a book on rigorous probability theory similar to one I studied? Exactly what do you think it's about?

microsansfil said:
That isn't the Kolmogorov axioms (the mathematics of classical probability theory was subsumed into classical measure theory by Kolmogorov in 1933).

By definition, the Kolmogorov axioms describe a measure space with total measure one, and conversely a measure space with total measure one obeys the Kolmogorov axioms.
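For reference, in measure-theoretic form: a probability space is a triple [itex](\Omega, \mathcal{F}, P)[/itex], where [itex]\mathcal{F}[/itex] is a σ-algebra of subsets of [itex]\Omega[/itex] and [itex]P : \mathcal{F} \to [0,1][/itex] satisfies [itex]P(E) \ge 0[/itex], [itex]P(\Omega) = 1[/itex], and countable additivity [itex]P\left(\bigcup_i E_i\right) = \sum_i P(E_i)[/itex] for pairwise disjoint [itex]E_i[/itex] - i.e. a measure with total mass one.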

I am now starting to suspect your knowledge of rigorous probability theory is rather rudimentary.

microsansfil said:
Just as in general relativity there is differential geometry.

I have studied GR. The situation is exactly the same as what Feller wrote for probability.

microsansfil said:
However, the physics is not the same as the mathematics.

Nobody ever said it was. Again I repeat what Feller said and highlight the key point:
'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied'

Now exactly what is your issue?

Thanks
Bill
 
Last edited:
  • #35
atyy said:
The results of Bayesian statistics do depend on the number of trials. Regardless of the prior - even if it places only an infinitesimally small probability on the true hypothesis - as long as the prior is non-zero on the true hypothesis, the Bayesian will converge to the true probability.

Bayesian statistics is guaranteed to work if one knows in advance all possible hypotheses. Which is why it is beautiful, and also impractical - because if we did, we would already have a candidate non-perturbative definition of all of string theory.
http://en.wikipedia.org/wiki/Bernstein–von_Mises_theorem
http://www.encyclopediaofmath.org/index.php/Bernstein-von_Mises_theorem

The other important theorem is the de Finetti representation theorem that allows Bayesians to be "effectively frequentist".
http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf
http://www.stats.ox.ac.uk/~steffen/teaching/grad/definetti.pdf

I once sketched out a Bayesian "theory of everything". Theoretically (not in practice, because it's computationally intractable, or maybe even noncomputable), you would never need any other theory.

Let [itex]T_1, T_2, ...[/itex] be an enumeration of all possible theories. Let [itex]H_1, H_2, ...[/itex] be an enumeration of all possible histories of observations. (It might be necessary to do some coarse-graining to make a discrete set of possibilities.)

Let [itex]P(T_i)[/itex] be the a-priori probability that theory [itex]T_i[/itex] is true.
Let [itex]P(H_j | T_i)[/itex] be the (pretend it's computable) probability of getting history [itex]H_j[/itex] if theory [itex]T_i[/itex] were true. Then we compute the probability of [itex]T_i[/itex] given [itex]H_j[/itex] has been observed via Bayes' rule:

[itex]P(H_j) = \sum_i P(T_i) P(H_j | T_i)[/itex]
[itex]P(T_i | H_j) = P(H_j | T_i) P(T_i)/P(H_j)[/itex]

So this gives us an a posteriori probability that any theory [itex]T_i[/itex] is true.

How can we enumerate all possible theories? Well, we can just think of a theory as an algorithm for computing probabilities of future histories given past histories. Computability theory shows us a way to enumerate all such algorithms.
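A toy, finite version of that scheme (purely illustrative: the "theories" here are just three hypothesized coin biases and the "history" a short sequence of flips):

[code]
# Toy version of the scheme above: "theories" are hypothesized coin biases,
# the "history" is an observed sequence of flips, and Bayes' rule gives
# a posterior over the theories.
theories = {"T1": 0.3, "T2": 0.5, "T3": 0.8}   # theory -> P(heads) it predicts
prior = {name: 1 / len(theories) for name in theories}

history = [1, 1, 0, 1, 1, 1, 0, 1]             # 1 = heads, 0 = tails

def likelihood(theta, hist):
    """P(history | theory): product of per-flip probabilities."""
    p = 1.0
    for flip in hist:
        p *= theta if flip else (1 - theta)
    return p

# P(H) = sum_i P(T_i) P(H | T_i);  P(T_i | H) = P(H | T_i) P(T_i) / P(H)
p_h = sum(prior[t] * likelihood(theories[t], history) for t in theories)
posterior = {t: prior[t] * likelihood(theories[t], history) / p_h
             for t in theories}

for t, p in posterior.items():
    print(t, round(p, 3))
[/code]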
 
