# Issues in QM Foundations?


## Main Question or Discussion Point

This thread is an offshoot of this post in the thread Summary of Frauchiger-Renner. The topics are related, but this thread offers a new perspective that diverges from that thread's main subject.

In QM foundations, the sheer number of interpretations, the disagreement among experts about what counts as only 'an interpretation' versus 'a new theory', the blatant selectivity toward and/or disregard of the field's actual historical development, the search for a minimal set of axioms, the operationalization of concepts assumed to be of key importance, and so on, all independently show that the field is not in optimal shape. The fact that no other canonical physical theory, nor its respective foundations, suffers from these issues is evidence that the situation in the foundations of QM has run amok.

This means at least three things: 1) there are far more problems than practitioners available, i.e. there may be a direct need for more workers in QM foundations, 2) there aren't enough experts in this field who are independent of other fields (e.g. an expert particle theorist turned QM foundations researcher is clearly not independent) and 3) the field is degenerating into a "search for the best axioms" (cf. The Character of Physical Law, Richard Feynman; see especially 9:24 to 10:20 of the lecture recording).

During the 20th century, the foundations of mathematics went through problems similar - problem 3) in particular - to those the foundations of QM currently faces. Eventually, workers in the foundations of mathematics settled on two axiomatic bases: Zermelo-Fraenkel set theory with the axiom of choice (ZFC) and without it (ZF). However, it was eventually proved that there are problems in mathematics (e.g. the Continuum Hypothesis) which are independent of both ZFC and ZF, i.e. undecidable within those axiomatic systems; I'm not aware of how much workers in the foundations of QM have considered the possibility of something similar at work in their own field.

The practical outputs of today's QM foundations literature are mostly popularizations of some interpretations based on contemporary expert opinion, and the creation of new no-go theorems; both primarily seem to serve I) as guidelines for the interpretation of QM for practitioners and II) as guidelines for theory construction for theoreticians. Often it is not clear whether the no-go theorems are really proper guidelines for valid theory construction, rather than a sort of selection tool to root out 'bad axioms', reducing the entire situation to the "search for the best axioms" described above.

Moreover, here's an analogy with another young science (about as old as QM): the literature in QM foundations is at least as confused as, e.g., the literature in the foundations of psychiatry. To be clear, no one is questioning the skill, ability or intent of the practitioners; it is instead entire programmes - often with no direct empirical application or practical limitation as a consequence - which seem to suffer from blatantly shaky foundations, in spite of the practitioners' good intentions. In fact, despite the mathematical rigour of QM foundations, I suspect psychiatry is actually in far less murky waters because of its strong coupling to both practice and experiment: progress there is obvious even when there are large setbacks (like the rewriting of parts of the DSM for ideological reasons).

Excluding laymen, all of this is patently clear when looked at a) from an outsider's perspective, i.e. from the viewpoint of a practicing non-foundations physicist or mathematician, b) from a philosophy of physics perspective, c) in comparison with other scientific theories, as a scientist in general, d) in comparison with the foundations of mathematics, e) in comparison with the foundations of probability theory and f) not mentioned yet, but perhaps most importantly, in comparison with large theories in physics which also suffered immensely from confusion over interpretational issues.

The best-known case in the history of physics where problems and paradoxes in a theory led to as much confusion as they do in QM foundations today occurred in 18th- and 19th-century fluid mechanics: d'Alembert's paradox. In fact, this problem can be restated as a problem of interpreting the ontological versus epistemological status of a central object in the theory, namely the boundary layer - exactly like the problem with ##\psi## in QM foundations. It is therefore nothing short of a tragedy that this tale isn't universally known among physicists; a brief retelling is quoted here from (Bush, 2015):
John Bush said:
And lest the longevity of the quantum paradoxes be mistaken for their insurmountability, fluid mechanics has a cautionary tale to tell. In 1749, d’Alembert’s paradox indicated that an object moving through an inviscid fluid experiences no drag, a prediction that was clearly at odds with experiments on high–Reynolds number gas flows. The result was a longstanding rift between experimentalists and theorists: For much of the nineteenth century, the former worked on phenomena that could not be explained, and the latter on those that could not be observed (Lighthill 1956). D’Alembert’s paradox stood for over 150 years, until Prandtl’s developments (Anderson 2005) allowed for the resolution of the dynamics on the hitherto hidden scale of the viscous boundary layer.


jedishrfu
Mentor
This is the way things work. We argue over the interpretation but always go back to the math. Eventually someone will cut through the Gordian Knot of our misunderstanding and a new more pleasing theory will be born but until that time QM is the best thing since sliced bread.

I agree, apart from the more tantalizing interpretations such as BM and stochastic mechanics. However, what surprises me more is that foundational progress is so mind-numbingly slow, while we are living in an age where there are more physicists alive than in all of history combined! Where is the next Newton or Einstein to save us from ourselves?

The theoretician's game is like running a marathon blindfolded with many other contenders, where we are all trying to get from A to B; however, initially we tend to quickly keep bumping into each other, from which each runner eventually learns how to avoid collisions and how to take each other into account. As a result, we almost all end up running in large circles.

By the time one of us gains enough of a background and sufficient experience to seem capable of making a lasting impression on this problem, he usually has become either far too fast for the crowd to keep up with him or far too slow for him to keep up with the crowd; in both cases he seems to be like a drunk forging his own path forward.

This causes him to become unsynchronized from the crowd: those in his direct vicinity either respond by speeding up to him or leave him behind, those further away see him to be just one of many other drunks; he is blindfolded and literally in the same boat as everyone else: how can he possibly see anything that they can not?

In the above analogy, drunkenness is a metaphor for seeing and being captivated by some particular form of mathematical aesthetics. In actuality, I think trusting such aesthetics is our only real guide forward, which is why I think it is no idle game to be exposed to as many different standards of beauty - as well as ugliness - in the canonical forms of physics as possible.

These aesthetics are of course subjective: what some may find beautiful, others may find hideous. This is reflective of human nature more widely and of our own personal development, in which we have all had differing degrees of exposure to different, fundamentally aesthetic-free forms of mathematics. What is beauty then? The answer is simple: the elusive mathematical forms seen most abundantly in nature, especially when they show up unexpectedly.

jedishrfu
Mentor
One reason there’s no progress is the push for results. People can’t spend much concentrated time pondering these things.

And then there’s the success of QM itself that hampers future deeper theories.

Lastly we may not yet have the math to deal with the concepts of a deeper theory.

Yes, I think it is clear that all three factors you mention are involved. Practically speaking, a theoretician needs either tenure or copious amounts of cash to have the freedom to work in such an ideal, carefree manner; even tenure isn't enough if he has to worry about his reputation.

This I think is especially why your second and third points are even more difficult for individuals to do anything about. In any case, all three points are a reflection of the amount of practicing physicists today and a clear demonstration of the principle that the job "theoretician" doesn't scale too well in proportion to the theoretician's task.

Incidentally, the analogies I am cooking up here are not random ones, but popular toy models from discrete dynamical systems; by actively procrastinating with them, I'm hoping they could help identify all or most of the system parameters governing physicists, in a more abstract version of something like this.

f95toli
Gold Member
However, what surprises me more is that foundational progress is so mind-numbingly slow, while we are living in an age where there are more physicists alive than in all of history combined! Where is the next Newton or Einstein to save us from ourselves?
This is to a large extent because it is not that interesting. I've never encountered a single working physicist who has any interest in foundations, and by now I've met quite a few (I've been working on experimental QM for nearly 20 years). The closest you get is the various loophole-free Bell tests done using different systems, but in the field they are mainly seen as interesting experimental challenges rather than as something fundamental, simply because by now no one believes Bell was wrong. The fact that you - by definition - can't experimentally test different interpretations means that the subject is of little or no interest to experimentalists.
Moreover, this is not something that is covered in even graduate courses in QM so most working physicists know very little about foundations.

Note that the same thing can be said about classical mechanics. Before the emergence of QM and SR/GR there was a (small) field dealing with foundations of classical (Newtonian) mechanics in the 19th century; but again I suspect most people who used the laws of Newton at the time did not worry much about this.

This is to a large extent because it is not that interesting. I've never encountered a single working physicist who has any interest in foundations, and by now I've met quite a few (I've been working on experimental QM for nearly 20 years).
This is a non-argument. Foundations is a conceptual construction effort, meaning - w.r.t. physics - both the responsibility and methodology belong almost exclusively to mathematical and theoretical physics.

In fact, it is predicted that all regular QM experiments are incapable of telling us anything new, unless we go beyond the known scale of validity of QM and get into the territory of large masses (##10^{-8} \mathrm {kg}##) where there is an obvious conflict with GR and the problem becomes alarming from the theoretician's viewpoint.

There are, in fact, dozens to hundreds of serious experimentalists working on such questions (mesoscopic and macroscopic QM), whether you have met them or not. The reason almost no one has worked on them before is that the work is extremely difficult and doesn't lead to a good career as obviously as the other options staring them in the face.

stevendaryl
Staff Emeritus
This is to a large extent because it is not that interesting. I've never encountered a single working physicist who has any interest in foundations, and by now I've met quite a few (I've been working on experimental QM for nearly 20 years).
I think that reflects the lack of expectation of progress, rather than lack of interest. Other than poking at the edges, the foundational questions have been around since the very beginning of quantum mechanics. So it's been roughly 100 years of no progress (a little more or a little less, depending on when you place the beginning of quantum theory---with Einstein and Planck, or with Bohr, de Broglie, Schrodinger and Heisenberg). Working physicists do not want to spend a lot of time on anything that will likely be a dead end.

I guess it's harsh to say "no progress". Bell's research counts as progress, and so does the development of Bohmian mechanics and the various alternative interpretations (Many Worlds, Consistent Histories, etc.), and so do the various no-go theorems such as the PBR theorem and the Frauchiger-Renner result. But I don't think any of these results constitute a breakthrough.

stevendaryl
Staff Emeritus
To me, the big issue in quantum foundations is ontology: What actually exists (as opposed to what are mathematical artifacts used for calculation). The quantum recipe (which is not a theory, but a pragmatic way to apply quantum mechanics) treats measurements as real, and treats everything else---atoms, particles, fields, etc.---as just mathematical fictions for the purpose of making calculations about probabilities. I personally find that unsatisfying, because measurements are (I assume) made up of lots of interactions of lots of atoms and particles and fields. It doesn't make sense to me to treat a measurement as real without treating the constituents as real, as well. But maybe that's old-fashioned reductionism.

During the 20th century, the foundations of mathematics went through similar problems - problem 3) in particular - as the foundations of QM currently faces. Eventually, workers in foundations of mathematics settled on two axiomatic bases: Zermelo-Frankel set theory with the axiom of choice (ZFC) and without the axiom of choice (ZF). However, eventually it was proved that there are problems in mathematics (e.g. the Continuum Hypothesis) which are independent of both ZFC and ZF, i.e. fully undecidable even in principle; I'm not aware of how much workers in the foundations of QM have considered this possibility at work in their own field.
I don't know any advanced quantum theory, but I don't think interpretations of QM have anything to do with "axioms" of QM, because axioms are only a calculational/epistemological thing. Also, the mathematical side and how things settled seemed very arbitrary to me: the CH is obviously false, and people just haven't yet been able to come up with a new, universally accepted axiom (or axioms) that proves the negation of CH. The usual set theory is really bad; I remember reading that a power-set cardinal can be almost any regular non-limit cardinal (I can't remember precisely, someone will have to correct me).

In any case, I think finding out what QM is really trying to say about the universe is far more important than finding axioms, quantum gravity, etc. Laymen don't care about precise predictions or mathematical elegance; we just want to know what the universe is really like and what consciousness is.

To me, the big issue in quantum foundations is ontology: What actually exists (as opposed to what are mathematical artifacts used for calculation). The quantum recipe (which is not a theory, but a pragmatic way to apply quantum mechanics) treats measurements as real, and treats everything else---atoms, particles, fields, etc.---as just mathematical fictions for the purpose of making calculations about probabilities. I personally find that unsatisfying, because measurements are (I assume) made up of lots of interactions of lots of atoms and particles and fields. It doesn't make sense to me to treat a measurement as real without treating the constituents as real, as well. But maybe that's old-fashioned reductionism.
This is exactly my point of view; it is also why I'm naturally attracted towards e.g. Bohmian mechanics, for both the mathematically sophisticated and physically coherent picture of quantum theory that it offers, despite its status of not yet having been unified - or worse, being ununifiable - with special relativity.

I'm sure you have posted about this before, but what is your opinion on BM? Have you by any chance had the opportunity to read Dürr & Teufel, 2009 yet? If so, I'd like to hear your opinion on it.

Feynman gave the best ontology in two words: "Nature knows" (in the Lectures). Physicists are parts of Nature, physics (with all its measurements) is a part of Nature's knowledge, and everything that happens is an increment of that knowledge, so Copenhagen stands firm enough.

stevendaryl
Staff Emeritus
This is exactly my point of view; it is also why I'm naturally attracted towards e.g. Bohmian mechanics, for both the mathematically sophisticated and physically coherent picture of quantum theory that it offers, despite its status of not yet having been unified - or worse, being ununifiable - with special relativity.

I'm sure you have posted about this before, but what is your opinion on BM? Have you by any chance had the opportunity to read Dürr & Teufel, 2009 yet? If so, I'd like to hear your opinion on it.
I think Bohmian mechanics is much more like an actual theory of physics than most other interpretations of quantum mechanics. The theory actually says what's happening and what the equations are governing it.

The only reason that I'm not 100% a Bohmian is because it seems ugly to me. There are two very elegant principles at work in modern theories of physics: (1) The Feynman sum over paths way of calculating amplitudes, and (2) the principle of relativity. Bohmian mechanics seems to ignore both of these. Or rather, if Bohmian mechanics is true, then both of them are accidental features of the world. The fact that Bohmian mechanics does not allow FTL communication seems like an accident: If your theory has one particle instantaneously affecting another distant particle, then why is FTL impossible? Similarly, amplitudes for paths, which have such elegant rules for calculating them in Quantum mechanics, just don't play any significant role in Bohmian mechanics.

Of course, it might very well be the case that those principles aren't fundamental, they're just accidental. But it seems to me that Bohmian mechanics is not looking at what physics is trying to tell us about the world.

The only reason that I'm not 100% a Bohmian is because it seems ugly to me
I wouldn't exactly commit to being called a Bohmian myself, but I'm curious, what exactly do you find ugly: the equations themselves or the fact that the theory as is has not been made relativistic (yet)?

If it's the equations themselves, I find it interesting that I have actually developed the exact opposite view over time; they remind me of hydrodynamics. To me hydrodynamics has the most beautiful equations in all of physics, outside of GR.
There are two very elegant principles at work in modern theories of physics: (1) The Feynman sum over paths way of calculating amplitudes, and (2) the principle of relativity. Bohmian mechanics seems to ignore both of these. Or rather, if Bohmian mechanics is true, then both of them are accidental features of the world.
Feynman's path integral is to a large extent equivalent to the non-relativistic Schrodinger equation, which is itself of course equivalent to the Bohmian equations. Therefore this just seems like an unsolved puzzle, instead of an unsolvable one.

As for point (2), BM is of course formulated based on Galilean relativity; its generalisation to SR, i.e. Bohm-Dirac theory, unfortunately is not so simple as just copying the same steps made for QM or QFT... I'm not so sure that this implies that BM is actually telling us to give up on relativity, instead of just forcing us to try another route.
The fact that Bohmian mechanics does not allow FTL communication seems like an accident: If your theory has one particle instantaneously affecting another distant particle, then why is FTL impossible?
The reason that BM is non-local is because it contains ##\psi##. Non-locality arises because ##\psi## is a function on configuration space; all functions on configuration space (and phase space) are inherently non-local.

The reason FTL signalling is impossible is because of the quantum equilibrium hypothesis: actions at a distance mediated by ##\psi## are randomized in such a way that they are unusable.
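To illustrate concretely what non-locality from a configuration-space ##\psi## means, here is a minimal numerical sketch (my own toy example, not from the thread, in units where ##\hbar = m = 1##): for an entangled two-particle wavefunction, the Bohmian guidance velocity of particle 1 depends on where particle 2 is, while for a product wavefunction it does not.

```python
import numpy as np

# Toy illustration (hbar = m = 1 for simplicity): in Bohmian mechanics
# the velocity of particle 1 is v1 = (hbar/m) * Im( (d psi / d x1) / psi ),
# so if psi(x1, x2) does not factorize, v1 depends on x2 instantaneously.

HBAR = M = 1.0

def packet(x, center=0.0):
    """Real Gaussian envelope centered at `center`."""
    return np.exp(-(x - center)**2 / 2)

def psi_entangled(x1, x2, k=2.0, a=3.0):
    """Superposition of two product packets with opposite momenta:
    not factorizable as f(x1)*h(x2)."""
    return (packet(x1) * packet(x2) * np.exp(1j * k * x1)
            + packet(x1, a) * packet(x2, a) * np.exp(-1j * k * x1))

def psi_product(x1, x2, k=2.0):
    """A factorized (unentangled) state for comparison."""
    return packet(x1) * np.exp(1j * k * x1) * packet(x2)

def v1(psi, x1, x2, h=1e-6):
    """Guidance velocity of particle 1 via a central finite difference."""
    d1 = (psi(x1 + h, x2) - psi(x1 - h, x2)) / (2 * h)
    return (HBAR / M) * np.imag(d1 / psi(x1, x2))

# Entangled state: moving particle 2 (far away) changes particle 1's velocity.
va = v1(psi_entangled, 1.0, 0.0)  # particle 2 near 0
vb = v1(psi_entangled, 1.0, 3.0)  # particle 2 near a = 3
# Product state: particle 1's velocity is unaffected by particle 2.
pa = v1(psi_product, 1.0, 0.0)
pb = v1(psi_product, 1.0, 3.0)
```

Nothing here is special to the chosen packets; any non-factorizable ##\psi## produces the same qualitative dependence of `v1` on `x2`.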

A. Neumaier
2019 Award
the non-relativistic Schrodinger equation, which is itself of course equivalent to the Bohmian equations.
The non-relativistic Schrödinger equation is not equivalent to the Bohmian equations - it is just one of the two equations making up the Bohmian framework.

The second is an equation that introduces additional machinery without observable consequences, while breaking the symplectic symmetry of the Hamiltonian framework and requiring spooky action at any distance from the very beginning.

Moreover, to produce Born's rule it requires a global hypothesis of quantum equilibrium that is at least as difficult to motivate as Born's rule itself. Moreover, for the universe to be in quantum equilibrium, it must have been prestabilized since its creation, since there are no stability results that would guarantee convergence to quantum equilibrium. (This is not even guaranteed for thermodynamic equilibrium, because of the odd behavior of gravity.)

I wonder even whether Bohmian mechanics has a mathematically well-defined initial-value problem with unique solution.

All this makes Bohmian mechanics seem to me ugly, complicated, and very unlikely.

stevendaryl
Staff Emeritus
As for point (2), BM is of course formulated based on Galilean relativity; its generalisation to SR, i.e. Bohm-Dirac theory, unfortunately is not so simple as just copying the same steps made for QM or QFT... I'm not so sure that this implies that BM is actually telling us to give up on relativity, instead of just forcing us to try another route.
Bell's inequality shows that any hidden-variable theory explaining the results of the EPR experiment must be nonlocal. So I think that there is no way to formulate a relativistic version of BM. You could have a theory that looks relativistic, in some approximation.

Bell's inequality shows that any hidden-variable theory explaining the results of the EPR experiment must be nonlocal. So I think that there is no way to formulate a relativistic version of BM. You could have a theory that looks relativistic, in some approximation.
In order to avoid an unnecessarily lengthy digression, I will make my point by directly citing the main argument of the following paper (Dürr et al., 2004). I'd like anyone to point out the errors in their reasoning. It is a short, well-written paper, only 5 pages long, and certainly worth a read-through. For those without access, I will post the main argument here. The rest of this post is a direct citation of pp. 2-4. I quote:

In 1964, Bell proved that any serious version of quantum theory (regardless of whether or not it is based on microscopic realism) must violate locality [J. S. Bell. On the Einstein-Podolsky-Rosen Paradox. Physics, 1: 195–200, 1964]. He showed that if nature is governed by the predictions of quantum theory, the “locality principle,” precluding any sort of instantaneous (or superluminal) action-at-a-distance, is simply wrong, and our world is nonlocal. The theoretical analysis leading to such a conclusion is commonly known as Bell’s theorem.

Bell’s theorem involves two parts. The first part is the Einstein-Podolsky-Rosen argument [A. Einstein, B. Podolsky, and N. Rosen. Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Phys. Rev. 47: 777-780, 1935] applied to the simplified version considered by David Bohm [D. Bohm. Quantum Theory. Prentice-Hall, Englewood Cliffs, N.J., 1951], the EPRB experiment: a pair of spin one-half particles, prepared in a spin-singlet state, are moving in opposite directions. Measurements are made, say by Stern-Gerlach magnets, on selected components of the spins of the two particles. The spin-singlet state has the following property: whenever the component of the spin ##\sigma_1## in any direction ##\alpha## is measured for one of the two particles, a measurement of the same component of the spin ##\sigma_2## of the other particle will give with certainty the opposite value. For such a state the assumption of locality implies the existence of what are often called noncontextual hidden variables. More precisely, it implies, for the spin-singlet state, the existence of random variables ##Z^i_\alpha (= Z_{\alpha\cdot\sigma_i}), i = 1,2,## which can be regarded as corresponding to preexisting values of all possible spin components of the two particles. In particular, focusing on components in only 3 directions ##a##, ##b## and ##c## for each particle, locality implies the existence of 6 random variables ##Z^i_\alpha, i = 1,2, \alpha = a, b, c## such that
\begin{align} Z^i_\alpha & = \pm 1 \\ Z^1_\alpha & = -Z^2_\alpha \end{align} and, more generally, \begin{align}\mathrm{Prob}(Z^1_\alpha \neq Z^2_\beta) = q_{\alpha\beta}, \end{align} where the ##q_{\alpha\beta} = (1+\alpha\cdot\beta)/2 = \cos^2(\theta/2)## are the corresponding quantum mechanical probabilities, with ##\theta## the angle between ##\alpha## and ##\beta##.

The argument for this conclusion can be expressed as follows: The existence of such random variables amounts to the idea that measurements of the spin components reveal preexisting values (the ##Z^i_\alpha##). Assuming locality, this is implied by the perfect quantum mechanical anticorrelations [J. S. Bell. On the Einstein-Podolsky-Rosen Paradox. Physics, 1: 195–200, 1964]:
John Bell said:
Now we make the hypothesis, and it seems one at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other. Since we can predict in advance the result of measuring any chosen component of ##σ_2##, by previously measuring the same component of ##σ_1##, it follows that the result of any such measurement must actually be predetermined.
Otherwise the result would have, at least in part, been produced by the remote measurement, just the sort of influence that Bell’s locality hypothesis precludes. We may also note that if the results had not been predetermined, the widely separated correlated results thereby implied would be an instance of nonlocality.

Observe that, given locality, the existence of such variables is a consequence rather than an assumption of Bell’s analysis. Bell repeatedly stressed this point (by determinism Bell here means the existence of the preexisting values that would determine the results of the corresponding measurements):
John Bell said:
It is important to note that to the limited degree to which determinism plays a role in the EPR argument, it is not assumed but inferred. What is held sacred is the principle of ‘local causality’ – or ‘no action at a distance’. . . .

It is remarkably difficult to get this point across, that determinism is not a presupposition of the analysis. ([J. S. Bell. Speakable and unspeakable in quantum mechanics. Cambridge University Press, Cambridge, 1987], p. 143)

Despite my insistence that the determinism was inferred rather than assumed, you might still suspect somehow that it is a preoccupation with determinism that creates the problem. Note well then that the following argument makes no mention whatever of determinism. . . . Finally you might suspect that the very notion of particle, and particle orbit . . . has somehow led us astray. . . . So the following argument will not mention particles, nor indeed fields, nor any other particular picture of what goes on at the microscopic level. Nor will it involve any use of the words ‘quantum mechanical system’, which can have an unfortunate effect on the discussion. The difficulty is not created by any such picture or any such terminology. It is created by the predictions about the correlations in the visible outputs of certain conceivable experimental set-ups. ([J. S. Bell. Speakable and unspeakable in quantum mechanics. Cambridge University Press, Cambridge, 1987], p. 150.)
The second part of the analysis, which unfolds the “difficulty . . . created by the . . . correlations,” involves only very elementary mathematics. Clearly, $$\mathrm{Prob}(\{Z^1_a = Z^1_b\} \cup \{Z^1_b = Z^1_c\} \cup \{Z^1_c = Z^1_a\}) = 1,$$ since at least two of the three (2-valued) variables ##Z^1_\alpha## must have the same value. Hence, by elementary probability theory, $$\mathrm{Prob}(Z^1_a = Z^1_b) + \mathrm{Prob}(Z^1_b = Z^1_c) + \mathrm{Prob}(Z^1_c = Z^1_a) \geq 1,$$ and using the perfect anticorrelations (2) we have that \begin{align}\mathrm{Prob}(Z^1_a = -Z^2_b) + \mathrm{Prob}(Z^1_b = -Z^2_c) + \mathrm{Prob}(Z^1_c = -Z^2_a) \geq 1. \end{align}
(4) is equivalent to the celebrated Bell inequality. It is incompatible with (3). For example, when the angles between ##a##, ##b## and ##c## are ##120^\circ##, the 3 relevant quantum correlations ##q_{\alpha\beta}## are all ##1/4##, implying a value of ##3/4## for the left-hand side of (4).

Let P be the hypothesis of the existence of noncontextual hidden variables for the EPRB experiment, i.e., of preexisting values ##Z^i_\alpha## for the spin components relevant to this experiment. Then Bell’s nonlocality argument, just described, has the following structure: \begin{align} \text{Part 1}&: \text{ quantum mechanics + locality } \Rightarrow \text{ P} \\ \text{Part 2}&: \text{ quantum mechanics } \Rightarrow \text{ not P} \\ \text{Conclusion}&: \text{ quantum mechanics } \Rightarrow \text{ not locality} \\ \end{align}For this argument what is relevant about “quantum mechanics” is merely the predictions concerning experimental outcomes corresponding to (1–3) (with Part 1 using in fact only (2)). To fully grasp the argument it is important to appreciate that the content of P—what it actually expresses, namely the existence of the noncontextual hidden variables—is of little substantive importance for the argument. What is important is the fact that P is incompatible with the predictions of quantum theory.

The content of P is, however, of great historical significance: It is responsible for the misconception that Bell proved that (i) hidden variables are impossible, a belief until recently almost universally shared by physicists, and, more recently, for the view that Bell proved that (ii) hidden variables, while perhaps possible, must be nonlocal. Statement (i) is plainly wrong, since a hidden-variables theory exists and works, as mentioned earlier. Statement (ii) is correct, significant, but nonetheless rather misleading. It follows from (5) and (6) that any account of quantum phenomena must be nonlocal, not just any hidden-variables account. Bell’s argument shows that nonlocality is implied by the predictions of standard quantum theory itself. Thus if nature is governed by these predictions, then nature is nonlocal. (That nature is so governed, even in the crucial EPRB-correlation experiments, has by now been established by a great many experiments, the most conclusive of which is perhaps that of Aspect [A. Aspect, J. Dalibard, and G. Roger. Experimental Test of Bell’s Inequalities using Time Varying Analyzers. Phys. Rev. Lett. 49: 1804–1807, 1982].)
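The arithmetic in the quoted ##120^\circ## example is easy to check directly. A minimal sketch in Python, using only the ##q_{\alpha\beta} = \cos^2(\theta/2)## formula and the angles quoted above (the variable names are my own):

```python
import numpy as np

# Quantum mechanics predicts Prob(Z1_alpha != Z2_beta) = cos^2(theta/2)
# for relative angle theta between the two measurement directions.
def q(theta):
    """Quantum prediction for Prob(Z1_alpha != Z2_beta)."""
    return np.cos(theta / 2)**2

# Three coplanar directions 120 degrees apart, as in the quoted example.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
pairs = [(0, 1), (1, 2), (2, 0)]

# For +-1-valued variables, Prob(Z1_a = -Z2_b) = Prob(Z1_a != Z2_b) = q(theta),
# so the left-hand side of inequality (4) is:
lhs = sum(q(angles[j] - angles[i]) for i, j in pairs)

# Bell's inequality (4) demands lhs >= 1; the quantum prediction is 3/4.
```

Each pair is separated by ##120^\circ##, so each term is ##\cos^2(60^\circ) = 1/4## and the sum is ##3/4 < 1##, violating (4) exactly as the quoted passage states.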

DarMM
Gold Member
The typical understanding in Foundations research is that Bell's theorem shows that any hidden variable account must be either nonlocal, retrocausal, acausal or involve multiple outcomes (i.e. a form of Many Worlds).

The typical understanding in Foundations research is that Bell's theorem shows that any hidden variable account must be either nonlocal, retrocausal, acausal or involve multiple outcomes (i.e. a form of Many Worlds).
I know that; we've been over that many times before. Critically reading the literature directly, however, suggests a markedly different understanding of what Bell's theorem says. In the spirit of this thread, there seems to be a real issue in foundations research, namely that the typical assessment and understanding of the foundations literature and the critical assessment and understanding of that same literature do not necessarily agree. This is no small issue: why is there such a huge difference?

A possible explanation, which I don't think is quite fair, is that the authors may not be experts. Another explanation, more likely in my view and experience (within my direct academic circle), is that many physicists haven't actually taken the time to critically read all or most of the original papers, but instead mostly go on reviews, textbooks, summaries and hearsay from known experts; the FR thread clearly shows that not critically reading arguments - even those made by experts - can be quite dangerous.

DarMM
Gold Member
It's quite common in many fields. For example, in Quantum Field Theory many folk claims about the theories are known from rigorous studies to be false, or have strong arguments against them, and yet they are often repeated, e.g. that the Standard Model is trivial.

I think it's just the time it takes to become familiar with the material and the fact that early "soundbite" versions of facts get stuck in people's heads and take a long time to remove. I don't see it as something particular to Quantum Foundations.

However, that understanding of Bell's theorem is the one I've encountered from most authors, and it seems to be the correct one.

Perhaps an example from the FR thread of what you are talking about?

DrChinese
Gold Member
In 1964, Bell proved that any serious version of quantum theory (regardless of whether or not it is based on microscopic realism) must violate locality [J. S. Bell. On the Einstein-Podolsky-Rosen Paradox. Physics, 1: 195–200, 1964]. He showed that if nature is governed by the predictions of quantum theory, the “locality principle,” precluding any sort of instantaneous (or superluminal) action-at-a-distance, is simply wrong, and our world is nonlocal. The theoretical analysis leading to such a conclusion is commonly known as Bell’s theorem.
That statement is completely misleading, and should not be presented in this forum in a form implying consensus, even as a quote from a paper, without proper labeling as non-standard. Else it violates forum rules, which it probably does anyway. The only serious adherents to your view are those - like Durr, Norsen, etc. - who have subsequently attempted to spin such a conclusion out of Bell to push a Bohmian view. I also point out that the thread topic is quite different than a discussion of nonlocal quantum theories.

So I would challenge you to back that up by commonly accepted papers, of which Durr's itself would not be included. Orthodox is that Bell assumes locality and hidden variables/realism (in the form of counterfactual definiteness). Virtually all papers referencing Bell state it as I do, as per the excerpt below (Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, and Anton Zeilinger, 1998) [my emphasis added]:

https://arxiv.org/abs/quant-ph/9810080
"After Bell’s discovery [2] that EPR’s implication to explain the correlations using [local] hidden parameters would contradict the predictions of quantum physics, a number of experimental tests have been performed [3–5]. All recent experiments confirm the predictions of quantum mechanics. Yet, from a strictly logical point of view, they don’t succeed in ruling out a local realistic explanation completely..."

You may hold the opinion that only a non-local theory can explain entanglement, but that is just an opinion - and it is hardly consensus in the community. Otherwise, there wouldn't be papers entitled "Why Isn't Every Physicist a Bohmian?". From Wiki, which is simply included as demonstrating the general consensus on Bell and *not* as an authoritative source:

No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

I couldn't say it better myself.
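To see the Wiki statement concretely, here is a brute-force sketch of my own (not from any of the papers cited above): it enumerates every deterministic local strategy for the CHSH test, i.e. preassigned outcomes ±1 for both of Alice's settings and both of Bob's, and confirms that none exceeds |S| = 2, whereas quantum mechanics predicts 2√2 for the singlet state.

```python
from itertools import product

# Brute-force check: a local hidden-variable model preassigns, on each
# run, definite outcomes +1/-1 to both of Alice's settings (a, a') and
# both of Bob's settings (b, b').  The CHSH quantity
#   S = A(a)B(b) - A(a)B(b') + A(a')B(b) + A(a')B(b')
# is then bounded by 2, while quantum mechanics predicts 2*sqrt(2).

best = 0
for Aa, Aap, Bb, Bbp in product([+1, -1], repeat=4):
    S = Aa * Bb - Aa * Bbp + Aap * Bb + Aap * Bbp
    best = max(best, abs(S))

print(best)  # prints 2: the local bound, strictly below 2*sqrt(2) ≈ 2.828
```

Probabilistic mixtures of these deterministic strategies don't help, since S is linear in them; that is why checking the 16 deterministic cases suffices.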

DrChinese
Gold Member
The typical understanding in Foundations research is that Bell's theorem shows that any hidden variable account must be either nonlocal, retrocausal, acausal or involve multiple outcomes (i.e. a form of Many Worlds).
Or: there may not be counterfactual hidden variables. I.e. QM is contextual, which I would say is something most of the above have in common.

DarMM
Gold Member
Or: there may not be counterfactual hidden variables. I.e. QM is contextual, which I would say is something most of the above have in common.
Yes indeed. Even many of the Copenhagen views (i.e. no hidden variables at all and ##\psi## is epistemic) have a clear contextuality to them. For example, in decoherent histories only two of the four variables in a Bell test are ever measured within a single decoherent history; the other two have no values, because they don't form part of the classical facts of that history. Similarly*, Jeffrey Bub and Richard Healey would say that without the right context, e.g. experimental equipment or environment, a variable won't obtain a classical value, and in a Bell test only two can.

It seems Contextuality is the real feature to me.

*Bub, Healey and Decoherent histories are similar views, as are all Neo-Copenhagen views.
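For those who want contextuality made concrete, here is a small brute-force sketch of my own (an illustration, not from any post above) of the Mermin-Peres magic square, a standard contextuality argument: quantum mechanics demands that each row of a certain 3×3 array of observables have product +1, and each column product +1 except the third, whose product is -1. The sketch enumerates all 2^9 noncontextual ±1 value assignments and finds that none satisfies these constraints.

```python
from itertools import product

# Noncontextual value assignments for the Mermin-Peres magic square.
# A noncontextual hidden-variable model would assign a fixed +1/-1 to
# each of the nine observables, independent of which row or column
# (context) it is measured in.  We check every such assignment against
# the quantum product constraints: rows multiply to +1, columns to +1,
# except the third column, which multiplies to -1.

count = 0
for v in product([+1, -1], repeat=9):
    g = [v[0:3], v[3:6], v[6:9]]  # 3x3 grid of assigned values
    rows_ok = all(r[0] * r[1] * r[2] == +1 for r in g)
    cols_ok = (g[0][0] * g[1][0] * g[2][0] == +1 and
               g[0][1] * g[1][1] * g[2][1] == +1 and
               g[0][2] * g[1][2] * g[2][2] == -1)
    if rows_ok and cols_ok:
        count += 1

print(count)  # prints 0: no noncontextual assignment reproduces QM
```

The contradiction is easy to see by hand as well: multiplying all three row constraints gives the product of all nine values as +1, while multiplying the column constraints gives -1 for the same product.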

DrChinese
Gold Member
It seems Contextuality is the real feature to me.
I have been told by Demystifier that he considers Bohmian Mechanics to be a contextual theory. That is because there is no counterfactual measurement outcome.

Auto-Didact
It's quite common in many fields. For example in Quantum Field Theory many folk claims about the theories are known to be false from rigorous studies or have strong arguments against them and yet they are often repeated, e.g. the Standard Model is trivial.

I think it's just the time it takes to become familiar with the material and the fact that early "soundbite" versions of facts get stuck in people's heads and take a long time to remove. I don't see it as something particular to Quantum Foundations.
I agree. In most fields whose literature I have pushed through, I have encountered this to differing degrees; however, those fields typically treat alternatives as less taboo than what I notice in foundations of physics.

This is highly problematic for the following reason: foundations of physics is already such a tiny backwater research community; what hope is there, then, that upon deferring reasoning to experts in good faith, those experts will not slowly consolidate into an extended nepotistic clique and drown out all serious alternative theoretical viewpoints?

Are there any checks and balances in place to prevent such things from occurring, apart from the standard peer review system?
Perhaps an example from the FR thread of what you are talking about?
The gist of the thread showed that there is disagreement among experts about whether a theorem is actually right or wrong.
The only serious adherents to your view are those - like Durr, Norsen, etc. - who have subsequently attempted to spin such a conclusion out of Bell to push a Bohmian view.

So I would challenge you to back that up by commonly accepted papers, of which Durr's itself would not be included.
This is news to me. Is there a longer version of this 'black list' in foundations research? I find much of what Peres says very sensible.

In any case, what about their arguments? Surely the arguments stand on their own merits. I don't think it is a good idea to let such obvious biases cloud our ability to analyze a serious argument; foundations research is hard enough as it is.
You may hold the opinion that only a non-local theory can explain entanglement, but that is just an opinion - and it is hardly consensus in the community.
It is indeed merely a strong suspicion, which needs to be worked out more fully, but I would argue that the same is true for any consensus which is not directly based on rigorously checked experiment or on a completely clear and verifiable proof.

I'm not fully certain of anything w.r.t. what is actually ultimately true in physical science and neither is anyone else. If they claim otherwise they are either lying to you, to themselves or both.
I.e. QM is contextual, which I would say is something most of the above have in common.
It seems Contextuality is the real feature to me.
For something less controversial: I actually do seriously entertain the view that QM is contextual as well. I have a thread on this topic here, which looks at some serious mathematical models of contextuality, but unfortunately there doesn't seem to be much interest.