Jürg Fröhlich on the deeper meaning of Quantum Mechanics

A. Neumaier
TL;DR Summary: a pointer to a recent paper
I'd like to draw attention to a very recent paper by Jürg Fröhlich, a well-known mathematical physicist from the ETH Zürich. It starts out as follows:
Jürg Fröhlich said:
I consider it to be an intellectual scandal that, nearly one hundred years after the discovery of matrix mechanics by Heisenberg, Born, Jordan and Dirac, many or most professional physicists – experimentalists and theorists alike – admit to be confused about the deeper meaning of Quantum Mechanics (QM), or are trying to evade taking a clear standpoint by resorting to agnosticism or to overly abstract formulations of QM that often only add to the confusion. [...]
I felt that the subject had better remain a hobby until later in my career. [...]
But when I was approaching mandatory retirement I felt an urge to clarify my understanding of some of the subjects I had had to teach to my students for thirty years.
Section 2 is titled ''Standard formulation of Quantum Mechanics and its shortcomings''. Surely @vanhees71 has very convincing reasons why this critique is irrelevant from his personal point of view. But the others might be interested.

Section 3 then presents a completion of QM, the ''ETH-Approach to QM''. It is too abstract to become popular - one more of many interpretations satisfying their authors but probably not a majority of quantum physicists.
 
Interesting paper.

At the moment I can't quite figure out the difference between it and Decoherent Histories.

Need to think more and hear others' views,

Thanks
Bill
 
I don't have reasons why this critique may be irrelevant since again I'm not even able to understand the problem to begin with. The statement is on p. 7. Fröhlich argues for a two-spin-1/2 system in the singlet Bell state, with the state ket given by
$$|\Psi \rangle=\frac{1}{\sqrt{2}} (|1/2,-1/2 \rangle-|-1/2,1/2 \rangle).$$
Then he makes the statement that, from the obvious fact that of course the expectation value of ##S_z=s_z \otimes 1 + 1 \otimes s_z## is 0, it would follow that there couldn't be correlations between measurements of the two spins, which are however of course present due to the entanglement (it's even the maximal entanglement one can get, since it's a Bell state).

The point is that you have to do measurements on such prepared spin pairs on an event-by-event basis to get the correlations, i.e., for each pair you have to measure ##s_z## for both particles of the pair and then find the 100% correlation, i.e., if A measures +1/2, then B necessarily measures -1/2 and vice versa. Of course you can't learn this from the expectation values.
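As a small numerical illustration of this event-by-event point (my own toy sketch with numpy, not anything from Fröhlich's paper): for the singlet state the expectation value of the total ##S_z## vanishes, yet sampling ##s_z## outcomes pair by pair from the Born probabilities shows the 100% anticorrelation.

```python
import numpy as np

# Singlet |Psi> = (|+,-> - |-,+>)/sqrt(2) for two spin-1/2 particles
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

sz = 0.5 * np.diag([1.0, -1.0])                  # single-particle s_z
I2 = np.eye(2)
Sz = np.kron(sz, I2) + np.kron(I2, sz)           # total S_z = s_z x 1 + 1 x s_z

exp_Sz = psi @ Sz @ psi                          # expectation value: 0

# Event-by-event sampling: joint outcomes |m1, m2> occur with
# probability |<m1, m2|Psi>|^2 (basis order: ++, +-, -+, --)
probs = np.abs(psi) ** 2
rng = np.random.default_rng(0)
events = rng.choice(4, size=10_000, p=probs)
m1 = np.where(np.isin(events, [0, 1]), 0.5, -0.5)  # first spin's s_z
m2 = np.where(np.isin(events, [0, 2]), 0.5, -0.5)  # second spin's s_z

assert abs(exp_Sz) < 1e-12        # <S_z> = 0 ...
assert np.all(m1 + m2 == 0.0)     # ... yet every single pair is anticorrelated
```

The expectation value alone is indeed blind to the correlation; only the per-pair record exhibits it.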

Of course, for each of the two observers what they get are just unpolarized particles, i.e., the spin component is maximally indeterminate, and that's in accordance with the cited linked-cluster principle, which is of course valid in a relativistic local QFT by construction.

So I don't even get the author's argument for why there should be a contradiction with minimally interpreted QT, or with the many observations on systems like this (mostly with photons), which are well understood using minimally interpreted QT, including the "funny" features of entanglement. It's all well understood and described by standard QT, e.g., in the Walborn quantum-eraser experiment we have discussed very often in this forum. I don't need to repeat it.

I also don't understand the solution of what the author seems to consider a problem, since I'm not familiar with the mathematical-physics notation the author uses. I don't have the time to learn this notation to understand the solution of a problem whose statement I cannot even understand to begin with. I'm sure this interpretation could become popular (or more popular, or at least discussed) when (a) the problem it tries to solve is clarified and (b) the mathematical-physics notation is translated down to more common theoretical-physics notation.
 
A. Neumaier said:
It is too abstract to become popular
Yet he says “it furnishes quantum theory with a clear ontology”. I could not understand what that ontology entailed.
 
Well, "ontology" is also one of those unclear notions of the philosophers. It's not clear at all what "ontology" means; it heavily depends on what the individual philosopher thinks it is. For the physicist any observable fact about nature is enough "ontology". That's why I don't understand where Fröhlich's problem with QM is located. The example he brings in the above-cited paper is simply not what's observed. To the contrary, nowadays there are many experiments by the AMO community which prove it wrong: for more than 30 years (starting with Aspect's first experiments concerning the Bell inequalities), the strong correlations due to entanglement (indeed consistent with the linked-cluster principle valid for local relativistic QFTs and thus for the Standard Model and particularly QED!) have been empirically established with ever-increasing precision. It's not only expectation values that can be measured, but also the outcome of measurements on an event-by-event basis.

So far there's not the slightest hint that QT is incomplete within the realm in which it is formulated (it is only (!) incomplete with regard to the lack of a satisfactory quantum description of gravity). The "ontology" from a physicist's point of view then simply is provided by the notion of the quantum state, and this implies that there is something called "objective indeterminism" in nature, i.e., it is impossible to determine by preparation all observables definable on a given quantum system. Also the classical behavior of "everyday matter" is well understood: it's due to reducing the description to the relevant macroscopic degrees of freedom through "coarse-graining".
 
vanhees71 said:
For the physicist any observable fact about nature is enough "ontology".
vanhees71 said:
The "ontology" from a physicist's point of view then simply is provided by the notion of the quantum state,
A quantum state is not an observable fact about nature. You are contradicting yourself within a single post.

If you stick to the idea that observable facts about nature are "ontological", QM is still rather ontologically unsatisfactory. That's because statistics is a composite fact about nature. It requires some interpretation and grouping of similar situations. Elementary facts are single detections. NRQM as well as QFT can speak only about statistics; they cannot say how these statistics emerge from elementary facts, while the common explanation seems to fail.
 
You are right in saying that the ontology in QT is provided by both the notion of "state" and "observable". These notions together provide the ontology of QT, and there's no contradiction to this ontology by experiment. To the contrary, the more QT is tested the better it gets confirmed. In particular, Fröhlich's idea of proving some contradiction is unclear and not justified by any observation. To the contrary, his example provides one of the most stringent tests of Q(F)T's consistency as a theory (validity of the linked-cluster principle) as well as with observations; the Bell experiments are a confirmation of Q(F)T with the highest significance ever reached between theory and experiment in the history of physics.
 
vanhees71 said:
You are right in saying that the ontology in QT is provided by both the notion of "state" and "observable".
Never said that. The things you say are so incoherent that I have no idea how to reply.
 
What specifically is "incoherent"?

My statement is that the minimally interpreted QT (which is basically Copenhagen without collapse) is all "ontology" you need since it describes precisely what's observed today.

The fundamental point about which all these discussions about "interpretation" occur is the following:

(a) The state of a system is described by a positive semidefinite self-adjoint operator ##\hat{\rho}## with ##\mathrm{Tr} \hat{\rho}=1##.
(b) All observables are described by self-adjoint operators ##\hat{O}##. The possible outcomes of (accurate) measurements are the generalized eigenvalues of these operators.
(c) The probability to find the value ##o_i## when measuring the observable ##O## is given by
$$P(o_i)=\sum_{\beta} \langle o_i,\beta|\hat{\rho}|o_i,\beta \rangle$$
with the usual treatment of spectral values in the continuum. ##\beta## is a set of parameters (e.g., the eigenvalues of a complete set of observable operators that complement ##\hat{O}##). The usual treatment is also implied if there are continuous parts in these parameters. For simplicity I use sums rather than integrals.

The ontology in this standard interpretation thus is that if a system is in a state described by ##\hat{\rho}##, some observables may have determined values and others not. In any case the probability for measuring a certain possible value is given by the formula above (the Born rule in its most general form).
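For concreteness, the general Born rule stated in (c) is easy to check numerically. The observable and state below are an arbitrary toy example of my own; the sum ##\sum_\beta |o_i,\beta\rangle\langle o_i,\beta|## is assembled into the spectral projector of each eigenvalue, so that ##P(o_i)=\mathrm{Tr}(P_{o_i}\hat{\rho})##.

```python
import numpy as np

O = np.diag([1.0, 1.0, -1.0])      # observable with degenerate eigenvalue o = 1
rho = np.diag([0.5, 0.3, 0.2])     # a density operator (trace 1)

eigvals, eigvecs = np.linalg.eigh(O)
probs = {}
for o in np.unique(np.round(eigvals, 12)):
    cols = eigvecs[:, np.isclose(eigvals, o)]   # the |o, beta> for all beta
    P_o = cols @ cols.conj().T                  # spectral projector of o
    probs[o] = float(np.real(np.trace(P_o @ rho)))

assert np.isclose(probs[1.0], 0.8) and np.isclose(probs[-1.0], 0.2)
assert np.isclose(sum(probs.values()), 1.0)     # probabilities sum to 1
```

For the degenerate eigenvalue the two ##\beta##-terms add up, exactly as in the sum over ##\beta## above.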

In my opinion that's compatible with all observations done so far. There seems to be nothing missing in this description. It's also used to describe the most accurate observations concerning entangled states, which Fröhlich argues to be a problem, and his argument doesn't match the corresponding experiments (like the polarization measurements on entangled photon pairs à la Aspect, the quantum eraser, etc.), because it argues only with an expectation value, which refers to an averaging over many measurements on identically prepared systems (in short, an ensemble), where of course information that can be gained according to QT is thrown away. Then of course the correlations predicted by entanglement are not observable anymore, because this information is not taken.

But as is demonstrated by very accurate measurements on such Bell states, one can in fact get this information, namely the event-by-event outcome of measurements on the entangled observables (in Fröhlich's example, the polarization state of both of the entangled photons for each such prepared photon pair). With this information, all predicted correlations are observed (100% correlation when measuring polarization in the same or mutually orthogonal directions, violation of Bell's inequality for adequately chosen relative angles of the polarizers, etc.).

The outcome of these experiments is very clear: All predictions of QT have been empirically "verified" at a very large confidence level. Also many (if not all) loopholes brought forward so far have been excluded. No observations are unexplained by minimally interpreted QT.

In the relativistic realm also no violations of Einstein causality are present. These arise only if a naive collapse argument of some Copenhagen-interpretation flavors is invoked (as far as I know, Bohr was at least very careful not to put too much weight on the collapse assumption). Collapse is also not necessary to explain the said correlations due to entanglement between far-distant parts of a quantum system (like the polarization-entangled photon pair in Fröhlich's example, where the single-photon polarization measurements can be done at arbitrarily far distances, as long as no photon is significantly disturbed on its way from the source to the detector, which would of course destroy the entanglement and correlations before both measurements have been done).

As Fröhlich states, the very construction of QED as a local microcausal relativistic QFT guarantees the validity of the linked-cluster principle, and this then of course also applies to the said entangled state of two photons. Performing the polarization measurements (more precisely, the detection of each photon behind the polarization filters) in setups such that the detection events are space-like separated, for which mutual influence of one measurement on the other is excluded, still shows all the predicted correlations, which is also in accordance with the linked-cluster principle. The conclusion is that, as stated within minimally interpreted QFT, the correlation is due to the state preparation and not to causal influences between the detection events.

So again, I don't see any "incoherence" in the minimal interpretation nor the necessity for "more ontology" than provided by it, at least I don't see it in Fröhlich's argument.
 
  • #10
vanhees71 said:
So again, I don't see any "incoherence" in the minimal interpretation nor the necessity for "more ontology" than provided by it, at least I don't see it in Fröhlich's argument.
I think I might be able to explain other people's problems to you. There are basically three problems people have with QM.

  1. There is no dynamical account of which measurement outcome occurs
  2. There is no dynamical account of how the correlations present in entanglement are achieved
  3. Imagine I measure the state ##\frac{1}{\sqrt{2}}\left(|\uparrow\rangle + |\downarrow\rangle\right)## and say my device measures spin-up. Then the state I use afterwards is ##|\uparrow\rangle##, but an external superobserver would use ##\frac{1}{\sqrt{2}}\left(|\uparrow, D_{\uparrow}, L_A\rangle + |\downarrow, D_{\downarrow}, L_B\rangle\right)## with ##D_{\uparrow}## denoting the state of my device and ##L_A## a state of the lab. That is, I use a "collapsed" state, but the superobserver does not.
I think recent work, especially since Spekkens's toy model in 2004 and clarifications of the Frauchiger-Renner argument, has shown that (3) is not really a problem or a contradiction; it only is if you accept the eigenstate-eigenvalue link, which is a very old, naive view of QM.

However (1) and (2) do seem like problems or at least something that needs completing in a further theory.

The only problem is that quantum mechanics involves non-classical correlations, that is, correlations outside the polytope obtained by assuming that your variables all belong to a single sample space. You can show (Kochen-Specker, Colbeck-Renner, etc.) that theories with correlations outside of this polytope by necessity lack a dynamical account of their outcomes or correlations.

So you either reinterpret the formalism in a non-statistical manner (Many-Worlds, Thermal Interpretation), add additional variables to restore the single sample space (but we know they have to be nonlocal or retrocausal) or just accept that there is no account.
 
  • #11
bhobba said:
Interesting paper.

At the moment I can't quite figure the difference between it and Decoherent Histories.

Need to think more and hear others views,

Thanks
Bill
It's not very obvious from the paper, but it is a form of Consistent Histories in a sense. However, unlike Decoherent Histories it doesn't frame consistency in terms of interference terms dying off, but in terms of a certain relation holding between the observables in the state ##\omega##. This is slightly different, as it can be "exact", but the major points are the same.

It's similar to how Jeffrey Bub's view is sort of consistent histories, but it takes consistency as being the emergence of a sub-algebra which satisfies the rules of a Boolean lattice.

So just different notions of consistency. What's interesting is that in a typical experiment all three conditions seem to hold, i.e. interference dies off, Fröhlich's algebraic condition holds, and a sub-algebra with the lattice conditions demanded by Bub emerges.

Later edit:
Personal conjecture: I wouldn't be surprised if Fröhlich's and Bub's views are basically two ways of phrasing the same thing, i.e. the emergence of a certain algebraic structure relative to the state is what permits you to reason that macroscopic equipment has obtained a definite outcome state of which you are ignorant. Perhaps there's a theorem showing they are equivalent. We know from Spekkens's toy model that interference terms don't mean there isn't a single outcome, so you don't need them to vanish.
You then need decoherence to show that macroscopic events obey classical statistics, not that they occur.
 
  • #12
DarMM said:
There are basically three problems people have with QM.
  1. There is no dynamical account of which measurement outcome occurs
  2. There is no dynamical account of how the correlations present in entanglement are achieved

Are these “serious” problems of physics? Or are these “problems” merely an expression of indignation that the ultimate reality giving rise to our “perception” of events occurring on a space-time scene cannot be grasped with recourse to classical notions and concepts?

As Berthold-Georg Englert writes in "On quantum theory":

Abstract. Quantum theory is a well-defined local theory with a clear interpretation. No “measurement problem” or any other foundational matters are waiting to be settled.
 
  • #13
Lord Jestocost said:
Are these “serious” problems of physics? Or are these “problems” merely an expression of indignation that the ultimate reality giving rise to our “perception” of events occurring on a space-time scene cannot be grasped with recourse to classical notions and concepts?
I'm going to be very controlled in my response here, because I don't want this to veer into the usual stuff.

Are these serious problems? Well it depends on whether you think there has to be an account for how events occur, specifically outcomes of measurements on microscopic systems, or whether you think the current evidence from QM is enough for you to concede that you will never get one. The latter runs counter to why many people are interested in science, so it is not too surprising it is viewed as a problem.

I think saying "classical notions" undersells the problem some people have. It makes it sound as though they are attached to specific ideas like particles or fields. Whereas the only "notion" they are holding to is an explanation/account at all.

Englert's quote (from https://arxiv.org/abs/1308.5290) just shows he is not bothered by this.
Like Gell-Mann, Griffiths, Bub, Bohr, Heisenberg, Hartle, Haag, etc he just swallows the bullet. You will never have an explanation. The End.
 
  • #14
DarMM said:
Englert's quote (from https://arxiv.org/abs/1308.5290) just shows he is not bothered by this.
Like Gell-Mann, Griffiths, Bub, Bohr, Heisenberg, Hartle, Haag, etc he just swallows the bullet. You will never have an explanation. The End.
The full abstract of his paper (published in Eur. Phys. J. D) says:
Berthold Englert said:
Quantum theory is a well-defined local theory with a clear interpretation. No "measurement problem" or any other foundational matters are waiting to be settled.
This essentially echoes the credo of @vanhees71. Englert's introduction says:
Berthold Englert said:
there is no experimental fact, not a single one, that contradicts a quantum-theoretical prediction. Yet, there is a steady stream of publications that are motivated by alleged fundamental problems: We are told that quantum theory is ill-defined, that its interpretation is unclear, that it is nonlocal, that there is an unresolved “measurement problem,” and so forth.
I find both statements in this second quote fully valid, and Englert's later explanations that these are only pseudo-problems not convincing.
 
  • #15
Yes and his "clear interpretation" detailed in the rest of the paper is (Neo-)Copenhagen, i.e. there is no explanation for measurement outcomes.

Without further no-go theorems we can't proceed further. Currently the attempts to add more variables to restore a single sample space (retrocausal and nonlocal theories) haven't been generalized to QFT and views that attempt to reinterpret the formalism non-statistically (Many Worlds and Thermal Interpretation) haven't been proven to give the correct observational statistics.

So we have to wait to see if one of these other views can be gotten to work in some way. Or perhaps wait for the development of no-go theorems that either forbid them or make them look completely unnatural and fine-tuned, forcing us to "swallow the bullet" of Copenhagen and its lack of explanations.

Time will tell.
 
  • #16
A. Neumaier said:
I find both statements in this second quote fully valid, and Englert's later explanations that these are only pseudo-problems not convincing.
His final words in the paper are:
Berthold Englert said:
What, then, about the steady stream of publications that offer solutions for alleged fundamental problems, each of them wrongly identified on the basis of one misunderstanding of quantum theory or another? Well, one could be annoyed by that and join van Kampen [42] in calling it a scandal when a respectable journal prints yet another such article. No-one, however, is advocating censorship, even of the mildest kind, because the scientific debate cannot tolerate it. Yet, is it not saddening that so much of diligent effort is wasted on studying pseudo-problems?
Note that van Kampen's paper, which he cites here and which also promoted the thesis that there is no measurement problem, contains an error in its ''proof'' of this thesis.
 
  • #17
DarMM said:
Currently the attempts to add more variables to restore a single sample space (retrocausal and nonlocal theories) haven't been generalized to QFT and views that attempt to reinterpret the formalism non-statistically (Many Worlds and Thermal Interpretation) haven't been proven to give the correct observational statistics.

Aharonov's "retrocausal" two time interpretation generalizes to QFT (equally as easily as MWI) and gets the correct observational statistics from a typicality assumption on the future boundary choice.
 
  • #18
charters said:
Aharonov's "retrocausal" two time interpretation generalizes to QFT (equally as easily as MWI) and gets the correct observational statistics from a typicality assumption on the future boundary choice.
Note what I said: I didn't say that retrocausal theories can't get out the statistics, I said they aren't fully generalized to QFT, and neither is the TSVF you are discussing. Kastner's work can be considered to have shown that it might be able to replicate aspects of QED, but I'm not aware of a full proof that it works in the QFT case.

Many Worlds has many issues with QFT, such as the absence of pure states for finite volume systems. And the Born rule has never been proven to hold.

Anyway, I'd be happy to discuss this on another thread. Either MWI or Retrocausal views.
 
  • #19
DarMM said:
has shown that (3) is not really a problem or a contradiction

Aren't 1) and 3) basically the same problem, or at least related.
 
  • #20
ftr said:
Aren't 1) and 3) basically the same problem, or at least related.
They are not the same problem. In general they are not related, for example theories like Spekkens model have (3), but not (1). That is you can have Wigner's friend style problem without having the measurement problem. That's why the measurement problem, labelled (1) above, is more important to QM and (3) isn't a real issue.
 
  • #21
DarMM said:
They are not the same problem. In general they are not related, for example theories like Spekkens model have (3), but not (1). That is you can have Wigner's friend style problem without having the measurement problem. That's why the measurement problem, labelled (1) above, is more important to QM and (3) isn't a real issue.

Well, collapse is an integral part of standard interpretations like CI, although it is downplayed, and all other interpretations are not that successful (at least this is my view of the general consensus) in circumventing it because of the probability interpretation. So all the power to TI. :smile:

EDIT: OK, it is possible to look at them as separate problems in some sense.
 
  • #22
Lord Jestocost said:
Are these “serious” problems of physics? Or are these “problems” merely an expression of indignation that the ultimate reality giving rise to our “perception” of events occurring on a space-time scene cannot be grasped with recourse to classical notions and concepts?

No, the difficulties of interpreting quantum mechanics are not due to the fact that it cannot be grasped in terms of classical notions.

As Berthold-Georg Englert writes in "On quantum theory":

Abstract. Quantum theory is a well-defined local theory with a clear interpretation. No “measurement problem” or any other foundational matters are waiting to be settled.

I think he's just wrong about that.
 
  • #23
stevendaryl said:
I think he's just wrong about that.
It's controversial, even here on PF. Different people have different criteria for ''well-defined'' and ''clear''. Those with loose criteria are easily satisfied; only those with strict ones see the problems. No amount of discussion will change this.
 
  • #24
A. Neumaier said:
It's controversial, even here on PF. Different people have different criteria for ''well-defined'' and ''clear''. Those with loose criteria are easily satisfied; only those with strict ones see the problems. No amount of discussion will change this.
I think this is pretty accurate. Either you have a problem with the standard lack of explanation for how measurement outcomes come about and how nonclassical correlations are achieved or you don't. In the typical approach QM doesn't give any explanation for these things. You'll either think this is an insight (i.e. this is something which cannot be given a scientific explanation) or an incompleteness (there has to be a deeper theory telling us how they come about).

The way things currently stand, i.e. no-go theorems et al, leaves this as an issue of personal taste.
 
  • #26
ftr said:
However I do think that some research comes very close to explaining it.

https://www.nature.com/articles/d41586-018-05095-z
Although it's an interesting popular account, it doesn't really have much to do with the issue being discussed here.
 
  • #27
What do others think of Fröhlich's argument about the inequivalence of the Schrödinger and Heisenberg pictures?
 
  • #28
A. Neumaier said:
Different people have different criteria for ''well-defined'' and ''clear''. Those with loose criteria are easily satisfied; only those with strict ones see the problems.

Come on, why this undertone? Maybe, those who are “satisfied” with quantum theory have already gained deep insights and clarity.
 
  • #29
Lord Jestocost said:
Maybe, those who are “satisfied” with quantum theory have already gained deep insights and clarity.
Not only maybe, but surely.

However, only according to their own criteria for insight and clarity. Certainly not to mine.
 
  • #30
Lord Jestocost said:
Come on, why this undertone? Maybe, those who are “satisfied” with quantum theory have already gained deep insights and clarity.

No, it's obvious that that is not the case.
 
  • #31
DarMM said:
Although it's an interesting popular account, it doesn't really have much to do with the issue being discussed here.

It is not black and white

https://arxiv.org/pdf/1604.02589.pdf
"Quantum gravity may have as much to tell us about the foundations and interpretation of quantum mechanics as it does about gravity. The Copenhagen interpretation of quantum mechanics and Everett’s Relative State Formulation are complementary descriptions which in a sense are dual to one another. My purpose here is to discuss this duality in the light of the ER=EPR conjecture."
 
  • #32
ftr said:
However I do think that some research comes very close to explaining it.
https://www.nature.com/articles/d41586-018-05095-z
ftr said:
It is not black and white
https://arxiv.org/pdf/1604.02589.pdf
"Quantum gravity may have as much to tell us about the foundations and interpretation of quantum mechanics as it does about gravity. The Copenhagen interpretation of quantum mechanics and Everett’s Relative State Formulation are complementary descriptions which in a sense are dual to one another. My purpose here is to discuss this duality in the light of the ER=EPR conjecture."
It doesn't make sense to inject into a dedicated thread random papers about foundations. If you want these to be discussed, create a new thread about them, or wait until one of them really fits an existing discussion topic.
 
  • #33
DarMM said:
What do others think of Fröhlich's argument about the inequivalence of the Schrödinger and Heisenberg pictures?
I have always perceived that the equivalence of the Schrödinger and Heisenberg pictures is nothing but a disguised form of Born's rule: for a single Hilbert space, there is a unique unitary time-evolution operator conserving probabilities or probability densities. Does the inequivalence set forth in this paper mean there is a nonunitary time evolution?
 
  • #34
dextercioby said:
I have always perceived that the equivalence of the Schrödinger and Heisenberg pictures is nothing but a disguised form of Born's rule: for a single Hilbert space, there is a unique unitary time-evolution operator conserving probabilities or probability densities. Does the inequivalence set forth in this paper mean there is a nonunitary time evolution?
Still digesting his paper and looking at other papers. I'll throw up a summary soon once I'm sure I understand it.
 
  • #35
Feynman said nobody understands Quantum Mechanics. I think that's even more true today. The attitude is often summarized as "shut up and calculate", a phrase due to Mermin, though frequently misattributed to Feynman.
 
  • #36
DarMM said:
What do others think of Fröhlich's argument about the inequivalence of the Schrödinger and Heisenberg pictures?
I haven't yet understood what Fröhlich means with his nonequivalence claim.
dextercioby said:
I have always perceived that the equivalence of Schroedinger and Heisenberg pictures is nothing but a disguised form of the Born's rule
But it has nothing to do with Born's rule, unless you identify Born's rule with the existence of the expectation mapping (which, however, would empty Born's rule of all its empirical content).
Surely it is not equivalent to Born's rule, for it says nothing about measurement.

The equivalence just says that the time dependence of ##\mathrm{Tr}\,A(t)\rho(t)## can be distributed in different ways between ##A## and ##\rho##.
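Numerically, this distribution of the time dependence is just cyclicity of the trace; here is a toy two-level check (the Hamiltonian, state, and observable are my own arbitrary choices, not from the paper):

```python
import numpy as np

# Toy two-level system: Hamiltonian and state are arbitrary choices
H = np.array([[1.0, 0.5], [0.5, -1.0]])
t = 0.7
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U(t) = exp(-i H t)

rho0 = np.array([[0.7, 0.2], [0.2, 0.3]])           # initial density matrix
A = np.array([[0.0, 1.0], [1.0, 0.0]])              # some observable

# Schroedinger picture: the state carries the time dependence
val_s = np.trace(A @ (U @ rho0 @ U.conj().T))

# Heisenberg picture: the operator carries the time dependence
val_h = np.trace((U.conj().T @ A @ U) @ rho0)

assert np.isclose(val_s, val_h)   # same expectation value either way
```

Fröhlich's nonequivalence claim therefore has to go beyond this elementary bookkeeping, which is what the discussion below tries to pin down.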
 
  • #37
A. Neumaier said:
I haven't yet understood what Fröhlich means with his nonequivalence claim
He's basically referring to the fact that his interpretation has "constant collapse" for lack of a better word.

So Fröhlich says that at time ##t## we have the algebra of observables located at times ##\geq t##, denoted ##\mathcal{E}_{\geq t}##. An event is a particular set of projectors ##\{\pi_{E,t}\}## summing to unity. An event is then said to occur at ##t## if its projectors commute with all other observables in ##\mathcal{E}_{\geq t}## under the state ##\omega##:
$$\omega\left(\left[\pi_{E},A\right]\right) = 0$$

This is meant to be a purely mathematical condition with no need for observation as a primitive. In a given state ##\omega## and given a particular time ##t## and its associated observables ##\mathcal{E}_{\geq t}## there will be such a set of projectors. Thus there is always some event that occurs. After that event has occurred one should use the state ##\omega_{E,t}## given by the conventional state reduction rule.
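A minimal numerical caricature of this commutation condition (entirely my own construction, not Fröhlich's formalism): for a fully decohered two-level state, the pointer projectors commute with every self-adjoint observable inside the expectation ##\omega(\cdot)=\mathrm{Tr}(\hat{\rho}\,\cdot)##, so the corresponding "event" occurs in the above sense.

```python
import numpy as np

# Decohered (diagonal) state and a pointer projector commuting with it
rho = np.diag([0.6, 0.4])
pi_up = np.diag([1.0, 0.0])          # projector of the candidate "event"

rng = np.random.default_rng(1)
for _ in range(100):
    X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    A = (X + X.conj().T) / 2         # random self-adjoint observable
    comm = pi_up @ A - A @ pi_up     # the commutator [pi, A]
    # omega([pi, A]) = Tr(rho [pi, A]) vanishes for every such A
    assert abs(np.trace(rho @ comm)) < 1e-12
```

In Fröhlich's actual condition the test observables range only over the future algebra ##\mathcal{E}_{\geq t}##, not over all operators as in this caricature.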

However imagine I am an experimenter in a lab. I have performed a measurement and updated to ##\omega_{E,t}##. Fröhlich's point is that there will then be, under a proper mathematical analysis, some event ##\{\pi_{E^\prime,t^\prime}\}## that via his condition will occur. This will then cause an update to the state ##\omega_{E^\prime,t^\prime}##. However under conventional QM the experimenter, since he has not made a measurement, continues to use ##\omega_{E,t}##. In the ETH-interpretation he has made an error by restricting the events that occur to be solely his measurement events. Thus his state is incorrect.

Fröhlich discusses why the experimenter's state is nevertheless usually almost completely accurate: essentially because the event that follows at ##t^\prime## (under certain assumptions about the Hamiltonian) has projectors that almost overlap with those of the event that occurred at ##t##.

This results in the ETH-interpretation having slightly different predictions from standard QM.

Operators evolve under the Heisenberg equations of motion, but states between measurements do not exactly follow Schrödinger evolution. Thus the inequivalence.
 
  • Like
Likes Auto-Didact
  • #38
DarMM said:
Operators evolve under the Heisenberg equations of motion, but states between measurements do not exactly follow Schrödinger evolution. Thus the inequivalence.
But traditionally, if operators evolve under the Heisenberg equations of motion, states remain constant.

Thus Fröhlich changes the meaning of the Heisenberg picture!?

It seems to me that, when viewed in the Schrödinger picture, Fröhlich is proposing something like the piecewise deterministic processes (PDP) of Breuer & Petruccione referred to in my Part III. There is also old work by Jadczyk on PDP and event-enhanced quantum mechanics: https://arxiv.org/pdf/hep-th/9409189, https://arxiv.org/pdf/quant-ph/9506017, and a few more. But so far I haven't had the time to check out the precise relation to Fröhlich's setting.
 
  • #39
A. Neumaier said:
But traditionally, if operators evolve under the Heisenberg equations of motion, states remain constant.

Thus Fröhlich changes the meaning of the Heisenberg picture!?
Yes I would say. Operators follow the Heisenberg equations of motion, but states do not remain constant. In standard QM they remain constant except upon "collapse", so constant except at measurements. Fröhlich however has "constant collapse" so states are truly always evolving even in the Heisenberg picture.
 
  • #40
A. Neumaier said:
It seems to me that, when viewed in the Schrödinger picture, Fröhlich is proposing something like the piecewise deterministic processes (PDP) of Breuer & Petruccione referred to in my Part III
There is a relation, I suspect, but for Fröhlich the evolution is fundamentally stochastic/random. The state update rule is not an "effective" prescription, but literally true.
 
  • #41
DarMM said:
Fröhlich however has "constant collapse" so states are truly always evolving even in the Heisenberg picture.
Do you mean continuous collapse - at every moment in time, as in continuous measurement theory?
DarMM said:
There is a relation, I suspect, but for Fröhlich the evolution is fundamentally stochastic/random. The state update rule is not an "effective" prescription, but literally true.
The same holds in PDP, except that the times of collapse are random, not continuous (else one has a quantum diffusion process - relevant for measuring operators with continuous spectra).
 
  • #42
A. Neumaier said:
Do you mean continuous collapse - at every moment in time, as in continuous measurement theory?
I believe so. He discusses only the case where time is discrete. There he has collapse at each discrete moment of time. The natural extension to continuous time is continuous collapse.

A. Neumaier said:
The same holds in PDP, except that the times of collapse are random, not continuous (else one has a quantum diffusion process - relevant for measuring operators with continuous spectra).
You're right of course. I had in mind your Thermal version view of such cases when contrasting it with Fröhlich. PDP is very similar to Fröhlich as you said.
 
  • #43
I should say as far as I can tell Fröhlich doesn't consider the quantum state to be physically real, just a method of keeping track of which events might occur. So the collapse processes above are physical in the sense of specifying the occurrence of an event, but not the reduction of a physical state vector.

So in ETH the world is composed of a sequence of randomly realized events. Events from non-commuting projector sets are not comparable. A history only involves a subset of possible quantities. This is the typical counterfactual indefiniteness that distinguishes QM from a classical stochastic process, e.g. there will be an event where a value for ##S_x## is realized, not the whole spin vector ##\left(S_x, S_y, S_z\right)##.
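A quick numerical sanity check of this non-comparability (my own illustration): the spectral projectors of ##S_x## and ##S_y## do not commute, so no single history can assign realized values to both components at once.

```python
# The "+1/2" spectral projectors of S_x and S_y for a spin-1/2 system do not
# commute, which is the algebraic root of counterfactual indefiniteness.
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])

# Projector onto the +1/2 eigenspace of each spin component
px = 0.5 * (np.eye(2) + 2 * sx)      # |+x><+x|
py = 0.5 * (np.eye(2) + 2 * sy)      # |+y><+y|

assert np.allclose(px @ px, px)      # genuine projectors
assert np.allclose(py @ py, py)

comm = px @ py - py @ px
assert np.linalg.norm(comm) > 0.1    # they do not commute: no joint event
```

So events built from ##\{|+x\rangle\langle+x|, |-x\rangle\langle-x|\}## and events built from the ##S_y## projectors can only be realized in different histories, never jointly.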

In a Bell–Aspect experiment one cannot compare different measurement-pair choices for Alice and Bob, since they occur in different histories.

So this is a Copenhagen variant very similar to Decoherent Histories and the "Event" interpretation of Haag @bhobba . Again I'm not really sure whether there is a true difference between Fröhlich, Haag and Bub here, or just a difference of formulation.
 
  • #44
I've not read all the recent postings, but some of the proponents of the claim that there's a measurement problem, raised two issues:

(a) how do measurement outcomes occur?
(b) the need to prove Born's rule.

I don't see any issues with either point, since a measurement result comes about through interactions of the measured system with the measurement device. QT gives an adequate and accurate description of all reproducible observations so far.

Concerning (b), I consider the Born rule one of the fundamental postulates of QT, which cannot be derived from the other postulates. I think Englert is right!
 
  • Like
Likes bhobba
  • #45
vanhees71 said:
I don't see any issues with either point, since a measurement result comes about through interactions of the measured system with the measurement device. QT gives an adequate and accurate description of all reproducible observations so far.
I think people's issue is that it doesn't tell you which result will occur. There's also the unusual feature that only the observable you look at "occurs", e.g. for spin in the x-direction only an ##S_x## outcome occurs, so quantum observables are as much a property of the device as of the quantum system itself.

I think you are fine with this because you think there isn't anything but the statistics, i.e. you can't know which occurs because that's what the world is like.
 
  • #46
I consider the rules of the minimal interpretation to be outright contradictory. If something is a contradiction, it can't be correct. On the one hand, one of the rules of the minimal interpretation says that a measurement always results in an eigenvalue of the operator corresponding to the observable being measured. That means that after a measurement, the device is in a definite "pointer state". On the other hand, if you treat the measuring device (plus observer plus the environment plus whatever else is involved) as a quantum mechanical system that evolves under unitary evolution, then unless the observable being measured initially has a definite value, after the measurement the measuring device (plus observer, etc.) will NOT be in a definite pointer state.

This is just a contradiction. Of course, you can make the use of the quantum formalism consistent by just imposing an ad hoc distinction between measurement devices (or more generally, macroscopic systems) and microscopic systems. But that's not a physical theory, that's a rule of thumb.
 
  • Like
Likes eloheim, Auto-Didact, dextercioby and 1 other person
  • #47
vanhees71 said:
I've not read all the recent postings, but some of the proponents of the claim that there's a measurement problem, raised two issues:

(a) how do measurement outcomes occur?
(b) the need to prove Born's rule.

I don't see any issues with either point, since a measurement result comes about through interactions of the measured system with the measurement device. QT gives an adequate and accurate description of all reproducible observations so far.

Concerning (b), I consider the Born rule one of the fundamental postulates of QT, which cannot be derived from the other postulates. I think Englert is right!
I agree, except perhaps one should say "... a measurement result comes about through non-unitary interactions ...". It is the non-unitarity that seems to give people a problem.
 
  • #48
stevendaryl said:
This is just a contradiction
That has never been demonstrated.

Your contradiction equally applies to Spekkens' toy model, where the device measures a system and obtains an outcome ##a## from a set ##\{a,b\}##, but an observer isolated from the device models it as being in a superposition. However, one can explicitly see that there is no contradiction in Spekkens' model.
 
  • Like
Likes dextercioby and vanhees71
  • #49
stevendaryl said:
I consider the rules of the minimal interpretation to be outright contradictory. If something is a contradiction, it can't be correct. On the one hand, one of the rules of the minimal interpretation says that a measurement always results in an eigenvalue of the operator corresponding to the observable being measured. That means that after a measurement, the device is in a definite "pointer state". On the other hand, if you treat the measuring device (plus observer plus the environment plus whatever else is involved) as a quantum mechanical system that evolves under unitary evolution, then unless the observable being measured initially has a definite value, then after the measurement, the measuring device (plus observer, etc) will NOT be in a definite pointer state.

This is just a contradiction. Of course, you can make the use of the quantum formalism consistent by just imposing an ad hoc distinction between measurement devices (or more generally, macroscopic systems) and microscopic systems. But that's not a physical theory, that's a rule of thumb.
In other words, your problem is that you don't want to accept the probabilistic nature of the quantum description. That's not a problem of QT, but just prejudice about how nature should be. Science, however, tells us how nature behaves, and the conclusion of the knowledge summarized accurately in the QT formalism, which leads to correct predictions and descriptions of all objective phenomena observed so far, is that nature is intrinsically probabilistic, i.e. there's no way to prepare a system such that all observables take determined values. Thus there's no contradiction in the two postulates you cite. On the contrary, indeterminism in the above precise sense of QT makes it a consistent and accurate description of all our experience so far!
 
  • #50
Mentz114 said:
I agree, except perhaps one should say "... a measurement result comes about through non-unitary interactions ...". It is the non-unitarity that seems to give people a problem.
There's no single proof of non-unitarity. In some sense one can even say that everyday experience (the validity of thermodynamics) tells the opposite: unitarity ensures the validity of the principle of detailed balance.
 
