Jürg Fröhlich on the deeper meaning of Quantum Mechanics

Jürg Fröhlich's recent paper critiques the confusion surrounding the deeper meaning of Quantum Mechanics (QM) among physicists, arguing that many evade clear interpretations. He introduces the "ETH-Approach to QM," which aims to provide a clearer ontology but is deemed too abstract for widespread acceptance. The discussion reveals skepticism about Fröhlich's arguments, particularly regarding entanglement and correlations in measurements, which many participants believe are adequately explained by standard QM. Critics argue that Fröhlich's claims do not align with experimental evidence supporting the predictions of QM, especially in entangled systems. Overall, the conversation emphasizes the need for clarity and understanding in the interpretation of quantum phenomena.
  • #91
A. Neumaier said:
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs
Well you learn you are in a subensemble then. Does this change much? It's still the case that the theory doesn't specify when you "learn" you're in a given subensemble.

In all views you will update your probabilities; whatever meaning you give to this, it occurs across all views. The point is that the theory never gives a formal account of how this comes about. It's just a primitive of probability theory.

A. Neumaier said:
Neither are there observers or superobservers. I have never seen the notion of superobservers in classical probability of any kind
One person is including just the system in the probability model (observer), the other is including the system and the device (superobserver). That's all a superobserver really is. The notion can be introduced easily.

A. Neumaier said:
No, because both Wigner and his friend only entertain subjective approximations of the objective situation. Subjectively everything is allowed. Even logical faults are subjectively permissible (and happen in real subjects quite frequently).
I have to say I don't understand this. The Bayesian view of probability does not permit logical faults either, under de Finetti's or Cox's constructions. Unless you mean something I don't understand by "logical faults". In fact, the point of Cox's theorem is that probability is logic under uncertainty.

Regarding the sentence in bold, can you be more specific about what you mean by Wigner's friend not being possible under a frequentist view? I really don't understand.
 
  • #92
DarMM said:
Wikipedia (Bayesian probability) said:
For objectivists, interpreting probability as extension of logic, probability quantifies the reasonable expectation everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem.
What a robot finds reasonable depends on how it is programmed, hence is (in my view) subjective.
What should count as knowledge is conceptually very slippery and should not figure in good foundations.
Wikipedia (Cox's theorem) said:
Cox wanted his system to satisfy the following conditions:
  1. Divisibility and comparability – The plausibility of a proposition is a real number and is dependent on information we have related to the proposition.
  2. Common sense – Plausibilities should vary sensibly with the assessment of plausibilities in the model.
  3. Consistency – If the plausibility of a proposition can be derived in many ways, all the results must be equal.
Even though a unique plausible concept of probability comes out after making the rules mathematically precise, I wouldn't consider this objective since it depends on ''information we have'', hence on a subject.

Rather than start with a complicated set of postulates that makes recourse to subjects and derives standard probability, it is much more elegant, intuitive, and productive to start directly with the rules for expectation values featured by Peter Whittle (and recalled in physicists' notation in Section 3.1 of my Part II). I regularly teach applied statistics on this basis, from scratch.
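For readers unfamiliar with Whittle's approach, the core idea is that expectation is taken as the primitive notion and the probability of an event ##A## is recovered as the expectation of its indicator function, ##P(A) = \langle 1_A \rangle##. A minimal numerical sketch of this idea (not of Whittle's axioms; the ensemble and names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Expectation as the primitive: an ensemble average over a (hypothetical)
# finite ensemble of die rolls.
ensemble = rng.integers(1, 7, size=100_000)

def E(f):
    """Expectation of the random variable f over the ensemble."""
    return f(ensemble).mean()

# Probability is then a derived concept: P(A) = E[indicator of A].
p_even = E(lambda x: (x % 2 == 0).astype(float))
assert abs(p_even - 0.5) < 0.01  # relative frequency of even rolls
```

Everything else (conditional probability, independence) can be phrased in terms of the expectation functional in the same way.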

DarMM said:
The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"
There are no objective priors, and Jaynes' principle of maximum entropy (Chapter 11) gives completely wrong results for thermodynamics if one assumes knowledge of the wrong prior and/or the wrong expectation values (e.g., that of ##H^2## rather than that of ##H##). One needs to be informed by what actually works to produce the physically correct results from max entropy. A detailed critique is in Section 10.7 of my online book.
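The critique can be illustrated with a toy calculation (the 3-level spectrum and target values below are made up): maximizing entropy subject to a constraint on ##\langle H\rangle## yields the Gibbs state ##p \propto e^{-\lambda E}##, while constraining ##\langle H^2\rangle## instead yields a different, non-Gibbs state, so which expectations to constrain must be informed by what actually works physically.

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0])  # hypothetical 3-level energy spectrum

def maxent(obs, target, lo=-50.0, hi=50.0):
    """Max-entropy distribution with <obs> = target; p ∝ exp(-lam * obs).
    lam is fixed by bisection on the monotone constraint function."""
    def mean(lam):
        w = np.exp(-lam * (obs - obs.min()))  # shift for numerical safety
        return (w * obs).sum() / w.sum()
    for _ in range(200):  # mean(lam) decreases monotonically in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean(mid) > target else (lo, mid)
    p = np.exp(-0.5 * (lo + hi) * (obs - obs.min()))
    return p / p.sum()

p_H = maxent(E, 0.8)       # constrain <H>:   the Gibbs state
p_H2 = maxent(E**2, 0.74)  # constrain <H^2>: a different, non-Gibbs state
assert np.isclose((p_H * E).sum(), 0.8)  # constraint is satisfied
assert not np.allclose(p_H, p_H2)        # the two constraints disagree
```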
 
  • #93
A. Neumaier said:
There are no objective priors, and Jaynes' principle of maximum entropy (Chapter 11) gives completely wrong results for thermodynamics if one assumes knowledge of the wrong prior and/or the wrong expectation values (e.g., that of ##H^2## rather than that of ##H##). One needs to be informed by what actually works to produce the physically correct results from max entropy. A detailed critique is in Section 10.7 of my online book.
That's pretty much the argument most Subjective Bayesians use against Objective Bayesianism. Certainly I know you do not like Probability in the Foundations, hence the Thermal Interpretation. It is for this reason I mentioned it in #80.
 
  • #94
DarMM said:
you learn you are in a subensemble then. Does this change much? It's still the case that the theory doesn't specify when you "learn" you're in a given subensemble.
No. You assume that you are in a subensemble. This assumption may be approximately correct, but human limitations in this assessment are irrelevant for the scientific part.

Theory never specifies which assumptions are made by one of its users. It only specifies what happens under which assumptions.
DarMM said:
In all views you will update your probabilities
I may update probabilities according to whatever rules seem plausible to me (never fully rational), or whatever rules are programmed into the robot that makes decisions. But the updating is a matter of decision making, not of science.
DarMM said:
The point is that the theory never gives formal account of how this comes about.
My point is that theory is never about subjective approximations to objective matters. It is about what is objective. How humans, robots, or automatic experiments handle it is a matter of psychology, artificial intelligence, and experimental planning, respectively, not of the theory.
DarMM said:
One person is including just the system in the probability model (observer), the other is including the system and the device (superobserver). That's all a superobserver is really.
The only observers of a classical Laplacian universe are Maxwell's demons, and they cannot be included into a classical dynamics. So their superobservers aren't describable classically.
DarMM said:
I don't understand this I have to say. The Bayesian view of probability does not permit logical faults
I was talking about my views on subjective and objective. A subject isn't bound by rules. This makes all Bayesian derivations very irrelevant, no matter how respectable the literature about it. They discuss what should be the case, not what is the case. But only the latter is the subject of science. Bayesian justifications are ethical injunctions, not scientific arguments.
DarMM said:
can you be more specific about what you mean by Wigner's friend not being possible under a frequentist view?
They are of course possible, but their assessment of the situation is (in my view) just subjective musings, approximations they make up based on what they happen to know. Thus there is no need for physics to explain their findings.

What would be of interest is a setting where Wigner and his friend are both quantum detectors, and their ''knowledge'' could be specified precisely in terms of properties of their state. Only then would the discussion about them become a matter of physics.
DarMM said:
I know you do not like Probability in the Foundations, thus the Thermal Interpretation.
I have nothing at all against probability in the frequentist sense. The only problem with having these in the foundations is that frequentist statements about systems that are unique are meaningless.
But the foundations must apply to our solar system, which is unique from the point of view of what physicists from our culture can measure.
 
  • #95
A. Neumaier said:
Theory never specifies which assumptions are made by one of its users. It only specifies what happens under which assumptions.
A. Neumaier said:
But the updating is a matter of decision making, not of science.
A. Neumaier said:
My point is that theory is never about subjective approximations to objective matters. It is about what is objective
A. Neumaier said:
I was talking about my views on subjective and objective. A subject isn't bound by rules. This makes all Bayesian derivations very irrelevant, no matter how respectable the literature about it
A. Neumaier said:
They are of course possible, but their assessment of the situation is (in my view) just subjective musings, approximations they make up based on what they happen to know. Thus here is no need for physics to explain their findings
A. Neumaier said:
I have nothing at all against probability in the frequentist sense. The only problem to have these in the foundations is that frequentist statement about systems that are unique are meaningless.
But the foundations must apply to our solar system, which is unique from the point of view of what physicists from our culture can measure
Just going by these, are you basically saying these are your reasons for not liking the typical statistical view (either Bayesian or Frequentist) of probability in the Foundations? Probability involves updating in both views, Bayesian or Frequentist.

You are basically saying you prefer a non-statistical reading of things in the Foundations, as I mentioned as an alternative in #80.
 
  • #96
DarMM said:
are you basically saying these are your reasons for not liking the typical statistical view (either Bayesian or Frequentist) of probability in the Foundations? Probability involves updating in both views, Bayesian or Frequentist.
No.

I am perfectly happy with a frequentist view of classical probability as applying exactly to (fully or partially known) ensembles, to which any observer (human or not) assigns - usually as consistently as feasible - approximate values based on data, understanding, and guesswork.

But the theory (the science) is about the true, 100% exact frequencies, not about how to assign approximate values. The latter is an applied activity, the subject of applied statistics, not of probability theory. Applied statistics is a mixture of science and art, and has - like any art - subjective aspects. I teach it regularly and without any metaphysical problems (never a student asking!) based on Peter Whittle's approach, Probability via Expectation. (Theoretical science also has its artistic aspects, but these are restricted to the exposition of the theory and preferences in the choice of material.)

The only reason I cannot accept probability in the foundations of physics is that the latter must apply to unique large systems, while the classical notion of probability cannot do this. By axiomatizing, instead of probability, the more basic notion of uncertainty and treating probability as a derived concept, I found the way out - the thermal interpretation.

Bayesian thinking (including any updating - exact values need no updating) is not science but belongs 100% to the art of applied statistics, supported by a little fairly superficial theory based on ugly and contentious axioms. I had studied these in some detail many years ago, starting with the multivolume treatise on the foundations of measurement by Suppes, and found this (and much more) of almost no value - except to teach me what I should avoid.
DarMM said:
That's pretty much the argument most Subjective Bayesians use against Objective Bayesianism.
They are driving out the devil with Beelzebub.
 
  • #97
A. Neumaier said:
The only reason I cannot accept probability in the foundations of physics is that the latter must apply to unique large systems, while the classical notion of probability cannot do this. By axiomatizing instead of probability the more basic notion of uncertainty and treating probability as derived concept, I found the way out - the thermal interpretation.
I appreciate your post, but this does seem to me to be about not liking Probability in the foundations, Bayesian or Frequentist. My main point was that most of the issues people here seem to be having with the Minimal Statistical view, or similar views like Neo-Copenhagen or QBism*, reduce to the issue of having a typical statistical view (again, in either sense) of Probability.

As I said, understanding the probabilistic terms in a new way, detached from the typical views, is the only way out of these issues if one does not like this. Hence the final line of #80.

*They mainly differ only in whether they like Frequentism, Objective Bayesian or Subjective Bayesian approaches. They agree with each other on virtually all other issues.
 
  • #98
DarMM said:
but this does seem to me to be about not liking Probability in the foundations, Bayesian or Frequentist.
It is not about not liking it, but a specific argument for why having probability in the foundations makes the foundations invalid. I'd not mind having probability in the foundations if it appeared only in properties of tiny subsystems of large unique systems.
 
  • #99
A. Neumaier said:
It is not about not liking it, but a specific argument for why having probability in the foundations makes the foundations invalid. I'd not mind having probability in the foundations if it appeared only in properties of tiny subsystems of large unique systems.
Yes, but that's what I was talking about. The issues here seem to be issues with probability in the Foundations. The "liking" was not meant to imply you lacked an argument or were operating purely on whimsy. :smile:
 
  • #100
DarMM said:
Is all of this not just a problem related to QM being a probabilistic theory?

For example, if I model a classical system such as a gas using statistical methods like Liouville evolution, I start with an initial state ##\rho_0## on the system and it evolves into a later one ##\rho_t##. Nothing in the formalism will tell me when a measurement occurs to allow me to reduce ##\rho## to a tighter state with smaller support (i.e. Bayesian updating). Just as nothing in a probabilistic model of a die will tell me when to reduce the uniform distribution over outcomes down to a single outcome. Nothing says what a measurement is.
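The die example above can be made concrete in a few lines (a toy sketch; nothing in it is specific to QM): the conditioning step that shrinks the support is applied from outside the model, never triggered by the formalism itself.

```python
import numpy as np

# A probabilistic model of a die assigns a uniform distribution; nothing
# *inside* the model says when to condition it down to a smaller support.
prior = np.full(6, 1 / 6)  # uniform over faces 1..6

def condition(p, event_mask):
    """Bayesian updating: restrict to the event and renormalize."""
    post = p * event_mask
    return post / post.sum()

# "Learning" that the outcome is even is an update we apply by hand:
even = np.array([0, 1, 0, 1, 0, 1], dtype=float)
posterior = condition(prior, even)

# The support shrinks from 6 outcomes to 3, each with probability 1/3.
assert np.isclose(posterior.sum(), 1.0)
assert np.allclose(posterior[1::2], 1 / 3)
```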

Maybe a good way for you to think about the difference is that classically, the idea of preexisting hidden variables underlying measurements is very easy and natural and intuitive, to the extent that everyone (who would want to be a realist/materialist) would simply adopt a HV interpretation of classical physics that escapes all these issues around measurement.

In QM, HVs are highly constrained and unintuitive. In response, some people bite the bullet and try to still make them work, some go to many worlds, some change the physics itself (GRW). But other would-be realists decide to give up on realism, and thus face the issues with measurement and probability being fundamental.

So, I think you are right that there is a very similar set of philosophical problems for a classical antirealist as for a quantum antirealist, and ultimately part of being a true antirealist is not caring about this. The difference is that many quantum antirealists are not true antirealists. Many are just defeated realists who only dislike antirealism slightly less than they dislike the options in quantum realism, but still believe in all the downsides of antirealism, and think this should be broadcast. Others are okay with one or more of the quantum realist options, but are forced to learn the antirealist view in textbooks, and so will talk about the issues with antirealism to try to remedy this bias. Because of these cultural realities, this debate, which you correctly identify as being over antirealism writ large and not specific to QM, ends up being cashed out only in the context of quantum antirealism.
 
  • #101
A. Neumaier said:
Nothing here is about causes. Bayes' formula just relates two different conditional probabilities.
Well, yes, it has to do with the use we make of it, because otherwise it's only syntax.

To make sense you need semantics and therefore an interpretation/model.

/Patrick
 
  • #102
microsansfil said:
Well, yes, it has to do with the use we make of it, because otherwise it's only syntax.

To make sense you need semantics and therefore an interpretation/model.
Yes, but no semantics requires that one of ##A## and ##B## is the cause of the other. They can be two arbitrary statements. Taking the relative frequency of pregnancies as ##A## and the number of storks in the area as ##B## is valid semantics.
 
  • #103
https://bayes.wustl.edu/etj/articles/cmystery.pdf
The idea is that a conditional probability, depending on the context, can be used to express physical causality.

In the paper the example of BERNOULLI'S URN REVISITED (page 13) : In (18) the probability on the right expresses a physical causation, that on the left only an inference.

A conditional probability can, depending on the context, express a "physical causality" or an inference.

/Patrick
 
  • #104
microsansfil said:
https://bayes.wustl.edu/etj/articles/cmystery.pdf
The idea is that a conditional probability, depending on the context, can be used to express physical causality.

In the paper the example of BERNOULLI'S URN REVISITED (page 13) : In (18) the probability on the right expresses a physical causation, that on the left only an inference.

A conditional probability can, depending on the context, express a "physical causality" or an inference.
But only if you already know the causal connection. From probabilities alone one can never deduce a causal relation, only correlations.
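A toy simulation of the stork example from the earlier post (all numbers hypothetical) shows how a common cause produces a strong correlation without any causal link between the correlated variables, and why the joint distribution alone cannot distinguish the causal structures:

```python
import numpy as np

rng = np.random.default_rng(1)

# A common cause ("rural area size", a made-up confounder) drives both
# the stork count and the birth rate.
area = rng.normal(size=10_000)
storks = 2.0 * area + rng.normal(size=10_000)
births = 3.0 * area + rng.normal(size=10_000)

r = np.corrcoef(storks, births)[0, 1]
# The joint distribution alone cannot distinguish "storks cause births",
# "births cause storks", or this common-cause model: all three can
# produce the same correlation.
assert r > 0.7
```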
 
  • #105
DarMM said:
Many books on the philosophy of Probability theory delve into the details, but the Wikipedia link here has the most pertinent details:
https://en.wikipedia.org/wiki/Bayesian_probability#Objective_and_subjective_Bayesian_probabilities
It's mostly about prior probabilities. Objective Bayesians build off of Cox's theorem and Subjective Bayesians off of DeFinetti's work.

The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"

For the Subjective Bayesian outlook I like J. Kadane's "Principles of Uncertainty" or DeFinetti's "Theory of Probability: A Critical Introductory Treatment"
Surely probability theory is no more a part of the foundations of QT than the Fourier transform?
They are both in the toolbox of many theories, including classical mechanics.
 
  • #106
Well, I guess there's a lot for philosophers to find problematic about Fourier transforms. I'd not be surprised if we could get a discussion about Fourier transformation that gets over 100 postings long.

Just to trigger a heated debate: What's better, Fourier or Laplace trafos (it's nearly as important as the war-like debates about emacs vs. vi ;-)).

SCNR.
 
  • #107
Mentz114 said:
Surely probability theory is no more a part of the foundations of QT than the Fourier transform ?
It has very minor effects like how exactly you think of the quantum state, or what you think is going on in quantum tomography. Not of any practical importance.

In post #80 I wasn't concerned with what one thinks of probability theory, but more that many of these issues (Wigner's friend, What is a measurement, etc) are nothing more than an issue with having probability theory in a fundamental theory.
 
  • #108
vanhees71 said:
Just to trigger a heated debate: What's better, Fourier or Laplace trafos (it's nearly as important as the war-like debates about emacs vs. vi ;-)).
You're not one of those Laplacists are you? :eek:

Mentors can @vanhees71 be banned for corrupting the forum?
 
  • #109
No, don't worry, I'm usually using the Fourier transformation :biggrin:
 
  • #110
DarMM said:
The only problem is that quantum mechanics involves non-classical correlations. That is correlations outside the polytope given by assuming that your variables all belong in a single sample space. You can show (Kochen-Specker, Colbeck-Renner, etc) that theories with correlations outside of this polytope by necessity lack a dynamical account for their outcomes or correlations.
I just thought I'd put an example of the proof of this here if people enjoy it. Consider ##X## and ##Z## polarization measurements on two particles. All measurements have outcomes ##\{0,1\}##. I'll call the observers ##A## and ##B##. Imagine we find they are correlated as follows:

            ##X_A##     ##Z_A##
##X_B##     ##=##       ##=##
##Z_B##     ##=##       ##\neq##

i.e. if they both perform an ##X## measurement the results will be equal.

Now consider the chance that ##A## obtains ##0## when they measure ##X_A##:
$$p\left(0|X_A\right)$$
From no-signalling this doesn't depend on the ##B## measurement, so we'll just take it to be ##X_B##, then
$$p\left(0|X_A\right) = p\left(00|X_A X_B\right) + p\left(01|X_A X_B\right)$$
Of course the second term is zero so:
$$p\left(0|X_A\right) = p\left(00|X_A X_B\right)$$
Since this is purely based on the correlation array it doesn't matter if we include any other arbitrary collection of events ##e## that occurred prior to the measurements:
$$p\left(0|X_A , e\right) = p\left(00|X_A X_B , e\right)$$
If we then focus on the chance for an ##X_B## measurement to produce zero we get a similar result:
$$p\left(0|X_B , e\right) = p\left(00|X_A X_B , e\right)$$
And thus we have:
$$p\left(0|X_A , e\right) - p\left(0|X_B , e\right) = 0$$
Iterating through a few different combinations of measurements we get three more equations like this for other sets of outcomes, thus in total we have:
$$p\left(0|X_A , e\right) - p\left(0|X_B , e\right) = 0$$
$$p\left(0|X_B , e\right) - p\left(0|Z_A , e\right) = 0$$
$$p\left(0|Z_A , e\right) - p\left(0|Z_B , e\right) = 0$$
$$p\left(0|Z_B , e\right) - p\left(1|X_A , e\right) = 0$$

These cancel off against each other to give us:
$$p\left(0|X_A , e\right) - p\left(1|X_A , e\right) = 0$$
Since we have ##p\left(1|X_A , e\right) = 1 - p\left(0|X_A , e\right) ## this gives us:
$$p\left(0|X_A , e\right) = \frac{1}{2}$$
So the outcome of an ##X_A## measurement cannot be deterministic. With this you can show none of the other outcomes can be deterministic either.

The correlations I used here are supra-quantum, i.e. stronger than those in quantum mechanics. Ekert and Renner proved that the same holds true in QM (https://www.nature.com/articles/nature13132?draft=journal, note they use information theoretic language so phrase it in terms of privacy).

The correlations are too strong for individual outcomes to be deterministic.

If you try the same with classical correlations the equations come out underdetermined thus the solutions have a free parameter ##\lambda## which can be adjusted to give deterministic solutions for the correlations.
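The argument can be checked numerically. The sketch below encodes the supra-quantum (PR-box) correlations from the table above (the function name is mine) and verifies both no-signalling and the endpoint ##p\left(0|X_A\right) = \frac{1}{2}##:

```python
# PR-box correlations matching the table: outcomes are equal for every
# measurement pair except (Z_A, Z_B), where they are opposite; all
# marginals are uniform.
def p(a, b, x, y):
    """p(a, b | x, y) for settings x, y in {'X', 'Z'}, outcomes in {0, 1}."""
    anticorrelated = (x == 'Z' and y == 'Z')
    return 0.5 if (a ^ b) == int(anticorrelated) else 0.0

# No-signalling: A's marginal is independent of B's setting.
for x in 'XZ':
    marginals = {y: sum(p(0, b, x, y) for b in (0, 1)) for y in 'XZ'}
    assert marginals['X'] == marginals['Z'] == 0.5

# The first equality in the chain: p(0|X_A) = p(00|X_A X_B) ...
assert sum(p(0, b, 'X', 'X') for b in (0, 1)) == p(0, 0, 'X', 'X')
# ... and the endpoint p(0|X_A) = 1/2, so the outcome cannot be deterministic.
assert sum(p(0, b, 'X', 'X') for b in (0, 1)) == 0.5
```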
 
  • #111
stevendaryl said:
...That means that after a measurement, the device is in a definite "pointer state". On the other hand, if you treat the measuring device (plus observer plus the environment plus whatever else is involved) as a quantum mechanical system that evolves under unitary evolution, then unless the observable being measured initially has a definite value, then after the measurement, the measuring device (plus observer, etc) will NOT be in a definite pointer state.

This is just a contradiction...
As QM is formulated and corroborated, the observer is mandatory and implicit - always. So anyone trying to solve the foundational problems of QM by removing the observer appears to me not to appreciate the heart of a measurement theory.

So if the pointer state is in a definite state - relative to the original observer (the measurement device if you wish) - the fact that it can be in a non-definite state relative to another observer is not a contradiction per se, right?

A contradiction would appear only when the two observers "communicate" their views, and then we have a physical interaction between them. But if the two observers are generalized beyond the "classical background" that Bohr relied on, the "contradiction" may well manifest itself instead as an interaction term between the observers. This seems to me the natural resolution. So rather than getting rid of observers, I think what we need to do is deepen the abstraction of observers to extend beyond classical ontologies.

/Fredrik
 
  • #112
A. Neumaier said:
My point is that theory is never about subjective approximations to objective matters. It is about what is objective. How humans, robots, or automatic experiments handle it is a matter of psychology, artificial intelligence, and experimental planning, respectively, not of the theory.
I think these things are a dividing line among researchers in this area, and it's interesting to highlight. I think your view is stringent and, if it is attainable, the most accurate one.

But I belong to those who think that absolute objectivity is an illusion. It cannot be attained; at best it's an attractor - actually not unlike human science itself. Therefore, using this as a hard constraint may be misguiding when we are building a machinery for optimal inference, because I think that in order to see how rules are formed, you need to break them.

So by your definition I belong to the subjective probability camp, but unlike your second sentence I do not mix in human cognition. The subjectivity here does not mean in any significant sense that science is subjective human-to-human. All it means is that the best inferred physical states, encoded by some kind of state vector, are dependent on the physical subsystem making the inference.

But this stance toward foundational research seems to me to be in the minority and thus under-developed, because it creates a lot of extra difficulties; most physicists seem not to like this. That is my impression.

The main difficulty is how to explain the de facto objectivity we all agree upon, despite minor disagreements, based on a foundation of fundamentally interacting subjective views. This is a serious problem, sufficient to reject this stance unless you actually see a chance to solve it.

/Fredrik
 
  • #113
vanhees71 said:
Well, I guess there's a lot to find problematic about Fourier transforms for philosophers. I'd not be surprised that we could get a discussion about Fourier transformation that gets over 100 postings long.
Why so much hatred for philosophers? What did they do to you?

Bertrand Russell said:
The value of philosophy is, in fact, to be sought largely in its very uncertainty. The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the co-operation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find, as we saw in our opening chapters, that even the most everyday things lead to problems to which only very incomplete answers can be given. Philosophy, though unable to tell us with certainty what is the true answer to the doubts which it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never traveled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect.

Fourier transforms are easy to understand in the context of finite group theory: https://link.springer.com/chapter/10.1007/3-540-45878-6_8
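As a small illustration of that group-theoretic view (a sketch; the sign convention is chosen to match numpy's FFT): the characters of the cyclic group ##\mathbb{Z}/N## are ##\chi_k(n) = e^{-2\pi i kn/N}##, and expanding a function on the group in characters is exactly the discrete Fourier transform.

```python
import numpy as np

N = 8
n = np.arange(N)
# Character table of Z/8: chars[k, n] = exp(-2*pi*i*k*n/N)
chars = np.exp(-2j * np.pi * np.outer(n, n) / N)

# Expanding an arbitrary function on Z/8 in characters reproduces the DFT.
f = np.random.default_rng(2).normal(size=N)
assert np.allclose(chars @ f, np.fft.fft(f))
```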

/ Patrick
 
  • #114
DarMM said:
The only problem is that quantum mechanics involves non-classical correlations. That is correlations outside the polytope given by assuming that your variables all belong in a single sample space. You can show (Kochen-Specker, Colbeck-Renner, etc) that theories with correlations outside of this polytope by necessity lack a dynamical account for their outcomes or correlations.
I don't consider this a problem. To the contrary, this most surprising consequence of the quantum formalism has been observed with astonishing significance and accuracy over the last decades in the wake of Bell's seminal paper. It's not a problem but a feature of QT to have predicted this phenomenon accurately!
 
  • #115
microsansfil said:
Why so much hatred for philosophers? What did they do to you?
Why hatred? I'm just doubting the usefulness of philosophy in the natural sciences, no more no less.
 
  • #116
vanhees71 said:
I don't consider this a problem. To the contrary, this most surprising consequence of the quantum formalism, has been observed with astonishing significance and accuracy over the last decades in the wake of Bell's seminal paper. It's not a problem but a feature of QT to have predicted this phenomenon accurately!
That quote was from #80 where the context was it's a problem for "completions" of quantum mechanics, not for QM itself.
 
  • #117
My problem is to see the necessity for "completions", as long as there are no observations hinting at an incompleteness of QT. The problem I have is understanding why some people are so obsessed with purely philosophical issues that they think QT is somehow incomplete. The only incompleteness I'm aware of is the pressing issue of the missing quantum theory of gravity (and, in view of the common geometrical interpretation of GR, probably also of spacetime).
 
  • #118
vanhees71 said:
My problem is to see the necessity for "completions", as long as there are no observations hinting at an incompleteness of QT. The problem I have with understanding, why some people are so obsessed with purely philosophical issues that they think the QT is somehow incomplete. The only incompleteness I'm aware of is the pressing issue of the missing quantum theory of gravity (and, in view common of the geometrical interpretation of GR, probably also spacetime).
All these 'purely philosophical issues' also happen to be mathematical issues, i.e. of interest to some mathematicians who do not care about physics except as a guide to understanding better and broadening the theory of mathematics itself; this makes the desire to answer foundational issues in physics a strictly scientific endeavour, whether or not there is any necessity for such discussions from experiment (NB: this is true whether or not mathematics is seen as a science).

Case in point: both string theory and twistor theory cannot be called 'physics' by any stretch of the imagination, yet no one questions the fruits they offer indirectly to physical theory. Purely mathematical reformulations and extensions of such models may one day lead to the mathematical discovery of a new theory which will turn out to be physics; foundations of QM research has similar intentions.
 
  • #119
Fra said:
As QM is formulated and corroborated, the observer is mandatory and implicit - always. So anyone trying to solve the foundational problems of QM by removing the observer appears to me not to appreciate the heart of a measurement theory.

Well, I think that's completely backwards. To solve the measurement problem MEANS to remove the observer as a fundamental element of QM.
 
  • #120
stevendaryl said:
Well, I think that's completely backwards. To solve the measurement problem MEANS to remove the observer as a fundamental element of QM.
I have a suspicion that orthodox QM only works experimentally at all because it is de facto a relational theory, possibly the first of its kind i.e. within natural science. If this is true, then there is the possibility that there will never be a reductionistic understanding possible, not even in principle, i.e. the anti-realists are correct.

This can be interpreted in two ways: the relational nature of QM is either fundamental or is itself an approximation to some underlying non-relational theory. This dichotomy can only be answered by remodelling the foundations of physics using branches of pure and applied mathematics which physicists - especially experimentalists - usually have no experience with whatsoever.

Note however that relational theories aren't new in science at all; they are only new in natural science. There are extremely advanced mathematical models in the social sciences cooked up by physicists and mathematicians who decided to do some freelance work in social sciences; these theories all tend to be applied models within the dynamical systems approach to science also known as complexity theory.
 
