This thread is an offshoot of this post in the thread Summary of Frauchiger-Renner. The topics are related, but this thread offers a new perspective that diverges from the main subject of that thread.
In QM foundations, the sheer number of interpretations, the disagreement among experts about what is merely 'an interpretation' and what is 'a new theory', the blatant selectivity toward and/or disregard of the actual historical progression of the field, the search for a minimal set of axioms, the operationalization of concepts assumed to be of key importance, and so on, all independently show that the field is not in optimal shape. The fact that no other canonical physical theory nor its respective foundations suffers from these issues is evidence that the situation in the foundations of QM has run amok.
This means at least three things: 1) there are far more problems than practitioners available, i.e. there might be a direct need for more workers in QM foundations, 2) there aren't enough experts in this field who are independent of other fields (e.g. an expert particle theorist turned QM foundations researcher is clearly not independent), and 3) the field is degenerating into a "search for the best axioms" (cf. The Character of Physical Law, Richard Feynman, linked here for convenience: especially 9:24 till 10:20).
During the 20th century, the foundations of mathematics went through problems similar to those the foundations of QM currently faces - problem 3) in particular. Eventually, workers in the foundations of mathematics settled on two axiomatic bases: Zermelo-Fraenkel set theory with the axiom of choice (ZFC) and without it (ZF). However, it was eventually proved that there are problems in mathematics (e.g. the Continuum Hypothesis) which are independent of both ZFC and ZF, i.e. undecidable within those axiom systems even in principle; I'm not aware of how much workers in the foundations of QM have considered the possibility of this being at work in their own field.
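To spell out what independence means here: by the combined results of Gödel (1940) and Cohen (1963), if ZF is consistent then so are both ZFC with CH and ZFC with its negation,
$$\mathrm{Con}(\mathrm{ZF}) \implies \mathrm{Con}(\mathrm{ZFC} + \mathrm{CH}) \ \text{ and } \ \mathrm{Con}(\mathrm{ZFC} + \neg\,\mathrm{CH}),$$
so neither CH nor ##\neg##CH can be derived from the axioms; the question is simply not settled by the axiomatic basis itself.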
The practical things to come out of the QM foundations literature today are mostly popularizations of some interpretations based on contemporary expert opinion and the creation of new no-go theorems, which primarily seem to serve I) as guidelines for the interpretation of QM for practitioners and II) as guidelines for theory construction for theoreticians. Often it is not clear whether the no-go theorems are really proper guidelines for valid theory construction, rather than a sort of selection tool to root out 'bad axioms', reducing the entire situation to the "search for the best axioms" described above.
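To give the canonical example of such a no-go theorem: the CHSH form of Bell's theorem says that any local hidden-variable theory must obey
$$|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \leq 2,$$
while QM predicts correlations reaching ##2\sqrt{2}## (the Tsirelson bound) for suitable entangled states. Whether this is a guideline for constructing valid theories or merely a tool for discarding 'bad axioms' (locality, realism, measurement independence) is exactly the ambiguity described above.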
Moreover, here's an analogy with another new science (about as old as QM): the literature in QM foundations is at least as confused as, e.g., the literature in the foundations of psychiatry. To be clear, no one is questioning the skill, ability or intent of the practitioners; it is instead entire programmes - often with no direct empirical application or limitation to practice as a consequence - which seem to suffer from blatantly shaky foundations, in spite of the practitioners' good intentions. In fact, despite the mathematical rigour in QM foundations, I suspect psychiatry is actually in far less murky waters because of its strong coupling to both practice and experiment: progress there is obvious even when there are large setbacks (like rewriting parts of the DSM for ideological reasons).
Excluding laymen, all of this is patently clear when looked at a) from an outsider's perspective, i.e. from the viewpoint of a practicing non-foundations physicist or mathematician, b) from a philosophy of physics perspective, c) in comparison with other scientific theories, as a scientist in general, d) in comparison with the foundations of mathematics, e) in comparison with the foundations of probability theory, and f) not mentioned yet, but perhaps most importantly, in comparison with large theories in physics which also suffered immensely from confusion over interpretational issues.
The best known case in the history of physics where problems and paradoxes in the theory led to as much confusion as they do in QM foundations today was d'Alembert's paradox, in 18th and 19th century fluid mechanics; in fact, this problem can be restated as a problem of the interpretation of the ontological versus epistemological status of a central object in the theory, namely the boundary layer - exactly like the problem with ##\psi## in QM foundations. It is therefore nothing short of a tragedy that this tale isn't universally known among physicists; a brief retelling is quoted here from (Bush, 2015):
John Bush said:
And lest the longevity of the quantum paradoxes be mistaken for their insurmountability, fluid mechanics has a cautionary tale to tell. In 1749, d’Alembert’s paradox indicated that an object moving through an inviscid fluid experiences no drag, a prediction that was clearly at odds with experiments on high–Reynolds number gas flows. The result was a longstanding rift between experimentalists and theorists: For much of the nineteenth century, the former worked on phenomena that could not be explained, and the latter on those that could not be observed (Lighthill 1956). D’Alembert’s paradox stood for over 150 years, until Prandtl’s developments (Anderson 2005) allowed for the resolution of the dynamics on the hitherto hidden scale of the viscous boundary layer.
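To state the paradox quantitatively (in standard textbook form, not quoted from Bush): for steady potential flow of an inviscid incompressible fluid past a closed body, the net pressure force along the free-stream direction ##\hat{\mathbf{x}}## vanishes identically,
$$D = -\oint_S p\,(\mathbf{n}\cdot\hat{\mathbf{x}})\,dS = 0,$$
in flat contradiction with the drag measured at high Reynolds number. Prandtl's resolution was that viscosity, however small, acts in a thin layer of thickness ##\delta \sim L\,\mathrm{Re}^{-1/2}## near the surface; the hidden scale was there all along, just not in the idealized theory.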