Jürg Fröhlich on the deeper meaning of Quantum Mechanics

Summary:
Jürg Fröhlich's recent paper critiques the confusion surrounding the deeper meaning of Quantum Mechanics (QM) among physicists, arguing that many evade clear interpretations. He introduces the "ETH-Approach to QM," which aims to provide a clearer ontology but is deemed too abstract for widespread acceptance. The discussion reveals skepticism about Fröhlich's arguments, particularly regarding entanglement and correlations in measurements, which many participants believe are adequately explained by standard QM. Critics argue that Fröhlich's claims do not align with experimental evidence supporting the predictions of QM, especially in entangled systems. Overall, the conversation emphasizes the need for clarity and understanding in the interpretation of quantum phenomena.
  • #61
Mentz114 said:
I don't agree. My problem is irreversibility, which is demanded of the measurement by the purists but is unobtainable with unitary evolution.
The irreversibility comes into physics through coarse graining. Also in classical physics there's no irreversibility on the fundamental level. Of course, for philosophers this too opens a can of worms (or even Pandora's box, if you wish). Debates about this have been going on even longer than debates about QT. From the physics point of view there's no problem; on the contrary, it's well understood, and the "arrow of time" comes into physics as a basic postulate in the sense of the "causal arrow of time". Like any fundamental assumption/postulate/axiom (however you want to call it) in the edifice of theoretical physics, it cannot be proven but is assumed based on experience, and this is the most fundamental experience of all: that there are "natural laws" which can be described mathematically. About this, too, you can build a lot of mysteries and philosophies of all kinds. From a physics point of view that's all irrelevant, but perhaps nice for your amusement in the sense of fairy tales.

The point with unitarity is that it guarantees that the "thermodynamical arrow of time" is inevitably consistent with the "causal arrow of time"; this is not a fundamental law but can be derived from the assumption of a causal arrow of time and the unitarity of the time evolution of closed quantum systems. With the thermodynamical arrow of time, irreversibility is also well defined, i.e., the fact that entropy increases. Note that the entropy, too, depends on the level of description or coarse graining: it measures the missing information, given the level of description, relative to what's defined as "complete knowledge".
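To illustrate the last point, here is a toy numerical sketch (my own example, with made-up numbers): the entropy assigned to the very same probability distribution grows when one passes to a coarser level of description, because the coarse description carries less information.

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits, skipping zero-probability outcomes."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Fine-grained description: probabilities over 8 microstates (made-up numbers).
fine = [0.4, 0.1, 0.05, 0.05, 0.1, 0.1, 0.1, 0.1]

# Coarse-grained description: lump the microstates into 2 cells of 4,
# keeping only the total probability of each cell.
cells = [sum(fine[0:4]), sum(fine[4:8])]

# Given only the coarse description, the least-biased (maximum-entropy)
# assignment spreads each cell's probability uniformly over its microstates.
coarse = [cells[0] / 4] * 4 + [cells[1] / 4] * 4

H_fine = shannon_entropy(fine)
H_coarse = shannon_entropy(coarse)

print(f"fine-grained entropy:   {H_fine:.3f} bits")
print(f"coarse-grained entropy: {H_coarse:.3f} bits")
assert H_coarse >= H_fine  # coarse graining never decreases the missing information
```

The coarse entropy here is exactly the cell entropy plus ##\log_2 4 = 2## bits of missing within-cell information, which is the sense in which entropy is relative to the chosen level of description.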
 
  • Informative
Likes Mentz114
  • #62
charters said:
No, the problem is you refuse to consider the time evolution of the measuring device itself as the unitary evolution of a quantum system. But this is the only thing that makes sense, since the device is made of electrons and nucleons, which everyone agrees are quantum systems.

You are implicitly dividing the world in two, where the meaning of quantum systems are defined only by the probabilistic responses they trigger in classical devices, which you independently assume to already exist. But there is no sensible way to explain how these classical devices can ever come to exist in the first place.
Why should I describe a measurement device like this? Do you describe a rocket flying to the moon as a quantum system? I don't believe that this makes much sense. That's why we use the adequately reduced (coarse-grained) description for measurement devices or rockets: it's because it's impossible to describe the microstate of a macroscopic system (apart from rare cases where it is in a simple enough state, e.g. systems close to zero temperature such as liquid He or a superconductor). As it turns out, the effective quantum description of macroscopic systems almost always leads to behavior of the relevant macroscopic degrees of freedom as described by classical physics (Newton's laws of motion, including gravity, for the moon rocket).
 
  • #63
Lord Jestocost said:
Here I disagree. In Renninger-type measurements the “reduction” of the wave function is accomplished without any physical interaction. As Nick Herbert writes in “Quantum Reality: Beyond the New Physics”:

The existence of measurements in which “nothing happens” (Renninger-style measurement), where knowledge is gained by the absence of a detection, is also difficult to reconcile with the view that irreversible acts cause quantum jumps. In a Renninger-style measurement, there must always be the “possibility of an irreversible act” (a detector must actually be present in the null channel), but this detector does not click during the actual measurement. If we take seriously the notion that irreversible acts collapse the wave function, Renninger measurements require us to believe that the mere possibility of an irreversible act is sufficient to bring about a quantum jump. The fact that such “interactionless” measurements are possible means that the wave function collapse cannot be identified with some specific random process occurring inside a measuring device.
Can you give a concrete example of a real-world experiment where a measurement occurs without interaction between the measured system and a measurement device? I'd say that if the system doesn't interact with the measurement device, there cannot be a measurement to begin with. I've no clue what a "Renninger-style measurement" might be.
 
  • #64
Lord Jestocost said:
In a Renninger-style measurement, there must always be the “possibility of an irreversible act” (a detector must actually be present in the null channel), but this detector does not click during the actual measurement. If we take seriously the notion that irreversible acts collapse the wave function, Renninger measurements require us to believe that the mere possibility of an irreversible act is sufficient to bring about a quantum jump. The fact that such “interactionless” measurements
These are not interactionless - a null measurement is obtained, and as a consequence the state collapses (though not necessarily to an eigenstate).
 
  • #65
vanhees71 said:
Do you describe a rocket flying to the moon as a quantum system? I don't believe that this makes much sense.
Do you want to imply that a rocket flying to the moon is not a quantum system? What then is the size at which a system loses its describability as a quantum system?
vanhees71 said:
That's why we use the adequately reduced (coarse grained) description for measurement devices or rockets
Already the possibility of a reduced description requires that there is a theoretically possible, though unknown complete description, which we can reduce by coarse graining.
 
  • Like
Likes eloheim, Auto-Didact and dextercioby
  • #66
vanhees71 said:
Can you give a concrete example of a real-world experiment...

As far as I know, Renninger-type measurements are thought experiments, see, for example:
Towards a Nonlinear Quantum Physics
 
  • #67
bob012345 said:
Feynman said nobody understands Quantum Mechanics. I think that's even more true today. There is also the famous advice, often paraphrased as "shut up and calculate" (commonly attributed to Feynman or Dirac, though it is due to N. David Mermin).

Much confusion additionally arises when one doesn’t recognize that the “objects” which are addressed by quantum theory (QT) are - in a scientific sense - fundamentally different from the “objects” which are addressed by classical physical theories. As pointed out by James Jeans in his book “Physics & Philosophy” (1948):

Complete objectivity can only be regained by treating observer and observed as parts of a single system; these must now be supposed to constitute an indivisible whole, which we must now identify with nature, the object of our studies. It now appears that this does not consist of something we perceive, but of our perceptions, it is not the object of the subject-object relation, but the relation itself. But it is only in the small-scale world of atoms and electrons that this new development makes any appreciable difference; our study of the man-sized world can go on as before.

QT deals with the temporal and spatial patterns of events which we perceive to occur on a space-time scene, our “empirical reality”. QT makes no mention of “deep reality” behind the scene, so QT cannot be the point of contact if one wants to know or to explain “what is really going on.”
 
  • #68
A. Neumaier said:
Do you want to imply that a rocket flying to the moon is not a quantum system? What then is the size at which a system loses its describability as a quantum system?

Already the possibility of a reduced description requires that there is a theoretically possible, though unknown complete description, which we can reduce by coarse graining.
Of course not; as I said in the next paragraph, the classical behavior of the relevant macroscopic degrees of freedom is well understood via the usual coarse-graining procedures of quantum many-body theory. As far as matter is concerned, everything on the basic level is described by relativistic QFT. However, you cannot describe all systems in all microscopic details, and thus one has to make approximations and find effective theories to describe the system at a level at which it can be described, and this can be done in various ways. E.g., bound-state problems are usually treated in non-relativistic approximations whenever possible, because that's much simpler than the relativistic description. Then, at the macroscopic level, one describes systems by classical physics, because that covers everything that's relevant at this level of description.
 
  • #69
Lord Jestocost said:
As far as I know, Renninger-type measurements are thought experiments, see, for example:
Towards a Nonlinear Quantum Physics
I don't have this book, but obviously it's not about quantum theory but some extension; I cannot judge what it is supposed to solve or "correct" in standard quantum mechanics. As far as I could read on Google, no valid mathematical description of the described experiment is given, but it's clear that ##D_1## is in any case another obstacle in the particle's way, with which it interacts, and which thus has to be taken into account to describe the system completely. It's a contradiction in itself to assume that ##D_1## is a detector and at the same time that the particles to be measured do not interact with it. Even if ##D_1## doesn't give a signal, the particle may still interact with it. The probability that ##D_1## gives a signal or not, as well as that for ##D_2##, has to be analyzed in detail. Usually there's some non-zero probability for a particle not to be detected at all, depending on the specific setup of the detector(s).
 
  • #70
vanhees71 said:
Of course not, as I said in the paragraph later, the classical behavior of the relevant macroscopic degrees of freedom is well understood by the usual coarse-graining procedures from quantum-many-body theory. As far as matter is concerned everything on the basic level is described by relativistic QFT. However, you cannot describe all systems in all microscopic details
One cannot in practice. But foundations are about the principles, not the practice. All questions of interpretation concern the principles. There one has a single huge quantum system consisting of a tiny measured system, a macroscopic detector, and maybe a heat bath, and one wants to understand how the principles lead to unique outcomes - for example, the measurement results of your coarse-grained description.

You answer all these foundational questions by replacing the in-principle existing (though unknown) description of this large quantum system with a classical description - just as Bohr did. The foundational problems are then associated with this change of description, where you just say that it works and hence is fine, but people interested in the foundations want to understand why.
 
  • Like
Likes eloheim, Auto-Didact and dextercioby
  • #71
Good luck. I don't think that you can ever achieve this for any physical theory which describes nature. It's simply too complicated.

I never understood what Bohr precisely wanted to say, because of his all-too-philosophical, enigmatic writing style, but he is surely right that QT, as a description of what's observed in nature, is ultimately about observations made with macroscopic measurement devices, and that their workings are well enough understood within classical physics. The validity of classical physics for macroscopic systems, as well as of quantum theory (in fact of any physical theory), is seen from comparison with experiment and observation. I think the paper by Englert is brilliant, clearing away all the superfluous philosophical ballast of "solving" philosophical pseudo-problems that have nothing to do with physics and will most probably never have any merit in leading to new, better theories.
 
  • Like
Likes Lord Jestocost
  • #72
vanhees71 said:
QT as a description of what's observed in nature is about the observations done finally with macroscopic measurement devices and that their workings are well-enough understood within classical physics.
Thus you now endorse the Heisenberg cut (between a quantum system and classically treated detector results), for which you never before saw any use...
 
  • Like
Likes Auto-Didact
  • #73
vanhees71 said:
Good luck. I don't think that you can ever achieve this for any physical theory which describes nature. It's simply too complicated.
...
It seems that a general theory is impossible. It can be done for some simple cases, but each case has to be handled individually. The big problem is the probability amplitude, which cannot be physical as it stands, for dimensional reasons. How can there ever be a mapping from amplitude space to (say) pressure or any (thermo)dynamical variable?
 
  • #74
@vanhees71

To me it’s simply astonishing that Renninger-type measurements seem to be in every way equivalent to measurements in which something seems “actually” to happen. An English translation by W. De Baere of Renninger’s paper “Zum Wellen–Korpuskel–Dualismus” (Zeitschrift für Physik 136, 251-261 (1953)) can be found here: https://arxiv.org/abs/physics/0504043
 
  • #75
Mentz114 said:
It seems that a general theory is impossible. It can be done for some simple cases, but each case has to be handled individually. The big problem is the probability amplitude, which cannot be physical as it stands, for dimensional reasons. How can there ever be a mapping from amplitude space to (say) pressure or any (thermo)dynamical variable?
I don't understand any of this. Why should there be a problem with a "probability amplitude" for dimensional reasons? Of course the dimension of the probability, and thus also of the "probability amplitude", depends on which (continuous) quantity it is given for. A distribution transforms as a distribution, which is why mathematically it's called a distribution. E.g., in position representation the "probability amplitude" (usually simply called the wave function), ##\psi(\vec{x})##, of a single particle has dimension ##1/\text{length}^{3/2}##. No problem whatsoever.

I've no clue what you mean concerning thermodynamics. Quantum statistics is well-defined, and all the thermodynamical quantities you state are just thermodynamical quantities. What should they have to do with "amplitude space" (whatever this means)?
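For concreteness, a small numerical sketch (my own illustration, using an arbitrary Gaussian wave function): in one dimension ##\psi## carries dimension ##1/\text{length}^{1/2}## (the 3D analogue carries ##1/\text{length}^{3/2}##), so the normalization integral ##\int |\psi|^2\,\mathrm{d}x## is a pure number, independent of the length unit chosen.

```python
import math

def psi(x, a):
    """Normalized 1D Gaussian wave function with width a (units of length):
    psi(x) = (1 / (pi a^2))^(1/4) * exp(-x^2 / (2 a^2)).
    psi carries dimension 1/length^(1/2), so |psi|^2 dx is dimensionless."""
    return (1.0 / (math.pi * a * a)) ** 0.25 * math.exp(-x * x / (2 * a * a))

def norm_squared(a, L=50.0, n=20000):
    """Trapezoidal estimate of the integral of |psi|^2 over [-L*a, L*a]."""
    h = 2 * L * a / n
    xs = [-L * a + i * h for i in range(n + 1)]
    vals = [psi(x, a) ** 2 for x in xs]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# The normalization integral is 1 regardless of the length unit chosen:
for a in (1.0, 0.01, 250.0):   # widths in arbitrary length units
    assert abs(norm_squared(a) - 1.0) < 1e-6
print("norm is unit-independent:", norm_squared(1.0))
```

Rescaling the unit of length rescales ##\psi## by exactly the compensating power, which is all the "dimension of the amplitude" amounts to.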
 
  • Informative
Likes Mentz114
  • #76
Lord Jestocost said:
@vanhees71

To me it’s simply astonishing that Renninger-type measurements seem to be in every way equivalent to measurements in which something seems “actually” to happen. An English translation by W. De Baere of Renninger’s paper “Zum Wellen–Korpuskel–Dualismus” (Zeitschrift für Physik 136, 251-261 (1953)) can be found here: https://arxiv.org/abs/physics/0504043
I've looked at the German original because I thought there must be errors in the translation. To my astonishment, that's not the case. I'm puzzled that such a paper could ever appear in a serious physics journal like Zeitschrift für Physik. Nothing he says about "photons" makes any sense, nor have such things as the path of a single photon or a guiding wave ever been observed. Despite his claim, no convincing pilot-wave theory à la de Broglie and Bohm has been formulated for photons or for relativistic particles in general.
 
  • #77
vanhees71 said:
... of a single particle has dimension ##1/\text{length}^{3/2}##...
How many furlongs is that?
 
  • Like
Likes Auto-Didact
  • #78
Mentz114 said:
How many furlongs is that?
Eight furlongs per keel
 
  • Like
Likes Auto-Didact and Mentz114
  • #79
DarMM said:
so quantum observables are as much a property of the device as the quantum system itself.
Yep. It seems to me that:

The measurement is a projection onto the basis defined by the measuring instrument. We measure a trace left by the system on the measuring device that makes sense to our consciousness as human observers (through the mediation of our brains).

/Patrick
 
  • #80
PeterDonis said:
The issue with the minimal interpretation is that there is no rule that tells you when a measurement occurs. In practice the rule is that you treat measurements as having occurred whenever you have to in order to match the data. So in your example, since nobody actually observes observers to be in superpositions of pointer states, and observers always observe definite results, in practice we always treat measurements as having occurred by the time an observer observes a result.
stevendaryl said:
So, the minimal interpretation ultimately gives a preference to macroscopic quantities over other variables, but this preference is obfuscated by the use of the word "measurement". The inconsistency is that if you treat the macroscopic system as a huge quantum mechanical system, then no measurement will have taken place at all. The macroscopic system (plus the environment, and maybe the rest of the universe) will not evolve into a definite pointer state.
Is all of this not just a problem related to QM being a probabilistic theory?

For example if I model a classical system such as a gas using statistical methods like Liouville evolution, I start with an initial state on the system ##\rho_0## and it evolves into a later one ##\rho_t##. Nothing in the formalism will tell me when a measurement occurs to allow me to reduce ##\rho## to a tighter state with smaller support (i.e. Bayesian updating). Just as nothing in a probabilistic model of a dice will tell me when to reduce the uniform distribution over outcomes down to a single outcome. Nothing says what a measurement is.
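The dice point can be sketched in a few lines (a toy illustration of my own; the "evolution" is an arbitrary permutation standing in for Liouville flow): the dynamics alone never sharpens the distribution, while the Bayesian update is something the agent applies, at a moment of their own choosing.

```python
from fractions import Fraction

# Uniform distribution over the six faces of a die.
p = {face: Fraction(1, 6) for face in range(1, 7)}

def evolve(p, step):
    """Deterministic (Liouville-like) evolution: a permutation of outcomes.
    It shuffles probability around but never sharpens the distribution."""
    return {((face - 1 + step) % 6) + 1: q for face, q in p.items()}

def condition(p, event):
    """Bayesian update on learning that the outcome lies in `event`.
    Nothing in the dynamics dictates *when* an agent applies this."""
    z = sum(q for face, q in p.items() if face in event)
    return {face: (q / z if face in event else Fraction(0)) for face, q in p.items()}

p_t = evolve(p, 3)
assert max(p_t.values()) == Fraction(1, 6)   # evolution alone: still maximally spread

p_post = condition(p_t, {2, 4, 6})           # the agent learns "it was even"
assert p_post[2] == Fraction(1, 3) and p_post[1] == 0
```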

Similarly one could zoom out to a larger agent, who uses a distribution not only over the gas from the first example but also over the state space of the device used to measure it (staying within classical mechanics for now). His ##P## distribution will evolve under Liouville's equation to involve multiple detection states for the device, in contrast to my case where the device lies outside the probability model and is used to learn of an outcome.

Any probability model contains the notion of an "agent" who "measures/learns" the value of something. These ideas are primitives unexplained in probability theory (i.e. what "causes" Bayesian updating). Any "zoomed out" agent placing my devices within their probability model will not consider them to have an outcome when I do until they themselves "look".

So to me all of this is replicated in Classical probability models. It's not a privileged notion of macroscopic systems, but unexplained primitive notions of "agent" and "learning/updating" common to all probability models. Introduce an epistemic limit in the Classical case and it becomes even more similar to QM, with non-commutativity, no cloning of pure states, superdense coding, entanglement monogamy, Wigner's friend being mathematically identical to the quantum case, etc.

The major difference between QM and a classical probability model is the fact that any mixed state has a purification on a larger system, i.e. less than maximal knowledge of a system ##A## can always be seen as being induced by maximal knowledge on a larger system ##B## containing ##A## (D'Ariano, Chiribella, Perinotti axioms). This is essentially what is occurring in Wigner's friend. Wigner has a mixed state for his friend's experimental device because he has maximal possible knowledge (a pure state) for the Lab as a whole. The friend does not track the lab as a whole and thus he can have maximal knowledge (a pure state) for the device.
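The purification point can be illustrated with the simplest possible case (my own toy sketch using a Bell state, not taken from the cited axiomatization): maximal knowledge of the pair ##AB## (a pure state) induces a maximally mixed state on ##A## alone.

```python
import math

# Pure state of the larger system AB: the Bell state (|00> + |11>)/sqrt(2),
# written as real amplitudes c[i][j] for basis states |i>_A |j>_B.
s = 1 / math.sqrt(2)
c = [[s, 0.0],
     [0.0, s]]

# Reduced density matrix of subsystem A (partial trace over B):
# rho_A[i][k] = sum_j c[i][j] * c[k][j]   (amplitudes are real here).
rho_A = [[sum(c[i][j] * c[k][j] for j in range(2)) for k in range(2)]
         for i in range(2)]

# Purity Tr(rho_A^2) < 1 signals a mixed state, despite AB being pure.
purity = sum(rho_A[i][k] * rho_A[k][i] for i in range(2) for k in range(2))

print(rho_A)   # approximately [[0.5, 0.0], [0.0, 0.5]]
print(purity)  # approximately 0.5, i.e. maximally mixed for a qubit
```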

So as long as QM is based on probability theory viewed in the usual way you will always have these odd notions of "when does a measurement occur/when do I update my probabilities" and "I consider event ##A## to have occurred, but somebody else might not". You could say this is a discomfort from having probability as a fundamental notion in your theory.

If one wishes, a way out of this would be @A. Neumaier's view, where he reads the formalism differently, not in the conventional statistical manner.
 
Last edited:
  • Like
Likes dextercioby, akvadrako, vanhees71 and 2 others
  • #81
DarMM said:
So to me all of this is replicated in Classical probability models. It's not a privileged notion of macroscopic systems, but unexplained primitive notions of "agent" and "learning/updating" common to all probability models.
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
 
  • Like
Likes Auto-Didact
  • #82
A. Neumaier said:
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
Do you mean Subjective Bayesianism (e.g. de Finetti) or are you using "Subjective" to denote Bayesianism in general?
 
Last edited:
  • Like
Likes Auto-Didact
  • #83
DarMM said:
Do you mean Subjective Bayesianism (e.g. DeFinetti) or are you using "Subjective" to denote Bayesianism in general?
I call a probability theory subjective if the probabilities depend on an ill-defined agent (rather than on objective contexts only). Bayesian probability seems to me a synonym for that.
 
  • #84
A. Neumaier said:
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
They still have Bayesian updating and relativism of when that updating occurs. For example in the case of the classical gas in the actual model used by the observers and superobservers the distributions used have the same behavior regardless of what view one holds of probability theory.

My post uses Bayesian language, but even in the frequentist case the superobserver will continue to use a mixture over outcomes where the observer will not, up until he views the system. That's just a feature of probability theory.

You'll still have the notion of what you don't include in the probability side of your model and updating/conditioning. I don't see what is different in the sense relevant here.

Basically you can still replicate Wigner's friend even under a frequentist view.
 
  • #85
A. Neumaier said:
I call a probability theory subjective if the probabilities depend on an ill-defined agent (rather than on objective contexts only). Bayesian probability seems to me a synonym for that.
That's not the conventional usage though right? There is Objective Bayesianism. Subjective is usually reserved for views like de Finetti and Savage.
 
Last edited:
  • #86
DarMM said:
They still have Bayesian updating and relativism of when that updating occurs.
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs.
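The subensemble picture can be sketched numerically (an illustrative toy of my own, with a simulated die): the conditional probability is just a relative frequency within a subensemble; the ensemble itself is never "updated".

```python
import random

random.seed(0)
# An ensemble of die throws, generated once; it is never modified afterwards.
ensemble = [random.randint(1, 6) for _ in range(600000)]

# Conditional probability P(face = 6 | face is even) is simply the relative
# frequency of 6 within the subensemble of even outcomes.
evens = [x for x in ensemble if x % 2 == 0]
p_six_given_even = evens.count(6) / len(evens)

print(round(p_six_given_even, 2))  # close to 1/3
```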
DarMM said:
even in the frequentist case the superobserver
Neither are there observers or superobservers. I have never seen the notion of superobservers in classical probability of any kind.

All subjective updating happens outside probability theory when some subject wants to estimate the true probabilities about which the theory is.
DarMM said:
Basically you can still replicate Wigner's friend even under a frequentist view.
No, because both Wigner and his friend only entertain subjective approximations of the objective situation. Subjectively everything is allowed. Even logical faults are subjectively permissible (and happen in real subjects quite frequently).
 
  • #87
DarMM said:
That's not the conventional usage though right? There is Objective Bayesianism. Subjective is usually reserved for views like DeFinetti and Savage.
What is Objective Bayesianism? Where is an authoritative classification? I am not an expert in classifying interpretations...
 
  • #88
A. Neumaier said:
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs.
Bayes' relationship leads to a symmetry noted by Laplace:

##P(A|B)\,P(B) = P(B|A)\,P(A)##

By rewriting it as follows,

##P(H|D) = \dfrac{P(D|H)\,P(H)}{P(D)},##

one gets back a conditional probability, which makes it possible to calculate the probability of causes from events. Laplace called this the inverse probability (the plausibility of the hypothesis). It has a lot of applications in the theory of knowledge.

Frequentist statisticians refuse to reason about the plausibility of hypotheses. They work with hypothesis rejection instead: a hypothesis is tested by calculating the likelihood of its results,

##\mathcal{L}(H) = P(D|H).##

Frequentists do not adhere to the concept of inverse probability because of the prior, which is subjective. But subjectivity also exists with the frequentist method; it's just swept under the carpet.

/Patrick
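A small numerical illustration of the inverse probability (my own toy example with a possibly biased coin, not from Laplace's text):

```python
# Inverse probability: from the likelihood P(D|H) and a prior P(H)
# to the plausibility of the hypothesis, P(H|D) = P(D|H) P(H) / P(D).
def posterior(prior, likelihood):
    """prior: {hypothesis: P(H)}, likelihood: {hypothesis: P(D|H)}."""
    evidence = sum(prior[h] * likelihood[h] for h in prior)   # P(D)
    return {h: prior[h] * likelihood[h] / evidence for h in prior}

# Two hypotheses about a coin: fair, or biased 80% towards heads.
prior      = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5 ** 3, "biased": 0.8 ** 3}   # data D: three heads in a row

post = posterior(prior, likelihood)
print(post)  # the data raise the plausibility of "biased" above 1/2
```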
 

  • #89
microsansfil said:
Which makes it possible to calculate the probability of causes from events.
Nothing here is about causes. Bayes' formula just relates two different conditional probabilities.
 
  • Like
Likes vanhees71
  • #90
A. Neumaier said:
What is Objective Bayesianism? Where is an authoritative classification? I am not an expert in classifying interpretations...
Many books on the philosophy of Probability theory delve into the details, but the Wikipedia link here has the most pertinent details:
https://en.wikipedia.org/wiki/Bayesian_probability#Objective_and_subjective_Bayesian_probabilities
It's mostly about prior probabilities. Objective Bayesians build on Cox's theorem and Subjective Bayesians on de Finetti's work.

The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"

For the Subjective Bayesian outlook I like J. Kadane's "Principles of Uncertainty" or de Finetti's "Theory of Probability: A Critical Introductory Treatment"
 
