I Confusion about the thermal interpretation's account of measurement

  • Thread starter nicf
Summary: Does the account of measurement in Neumaier's thermal interpretation papers actually depend on the thermal interpretation?
I'm a mathematician with a longstanding interest in physics, and I've recently been enjoying reading and thinking about Arnold Neumaier's thermal interpretation, including some threads on this forum. There's something that's still confusing me, though, and I'm hoping someone here can clear it up. Most of the questions here come from the third paper in the series.

Consider some experiment, like measuring the spin of a suitably prepared electron, where we can get one of two outcomes. The story usually goes that, before the electron sets off the detector, the state is something like ##\left[\sqrt{\frac12}(|\uparrow_e\rangle+|\downarrow_e\rangle)\right]\otimes|\mbox{ready}\rangle##, where ##|\uparrow_e\rangle## denotes the state of an electron which has spin up around the relevant axis, and afterwards the state is something like ##\sqrt{\frac12}(|\uparrow_e\rangle\otimes |\uparrow_D\rangle+|\downarrow_e\rangle\otimes|\downarrow_D\rangle)##, where ##|\uparrow_D\rangle## denotes a state in which the (macroscopic) detector has reacted the way it would have if the electron had started in the state ##|\uparrow_e\rangle##. It's usually argued that this macroscopic superposition has to arise because the Schrödinger equation is linear. Let's call this the first story.
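
To make the linearity step concrete for myself, here is a toy numpy sketch (my own illustration, not from the papers; the three-level detector and the particular "premeasurement" permutation are invented for the example):

Python:
import numpy as np

# Toy "first story": an electron qubit (|up>, |down>) coupled to a
# three-level detector (|ready>, |up_D>, |down_D>); basis index = 3*e + d.
up, down = np.eye(2)
ready, up_D, down_D = np.eye(3)

# Initial product state: (|up> + |down>)/sqrt(2)  (x)  |ready>
psi0 = np.kron((up + down) / np.sqrt(2), ready)

# A premeasurement interaction defined on basis states by
#   |up, ready>   -> |up, up_D>,    |down, ready> -> |down, down_D>,
# completed to a permutation of the basis so that it is unitary.
U = np.zeros((6, 6))
perm = {0: 1, 1: 0, 2: 2, 3: 5, 4: 4, 5: 3}   # source index -> target index
for src, dst in perm.items():
    U[dst, src] = 1.0

# Linearity fixes the action on the superposition:
psi1 = U @ psi0
print(np.round(psi1, 3))   # amplitude 1/sqrt(2) on |up,up_D> and on |down,down_D>

So the entangled "macroscopic superposition" is forced simply by applying the same linear map to both terms of the initial superposition.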

This description has struck many people (including me) as confusing, since it seems to contradict what I actually see when I run the experiment: if I see the "up" result on my detector, then the "down" term above doesn't seem to have anything to do with the world I see in front of me. It's always seemed to me that this apparent contradiction is the core of the "measurement problem" and, to me at least, resolving it is the central reason to care about interpretations of quantum mechanics.

Neumaier seems to say that the first story is simply incorrect. Instead he tells what I'll call the second story: because the detector sits in a hot, noisy, not-at-all-isolated environment, and I only care about a very small number of the relevant degrees of freedom, I should instead represent it by a reduced density matrix. Since I've chosen to ignore most of the physical degrees of freedom in the system, the detector's reading evolves in some complicated, effectively nonlinear way, with the two possible readings as the only (relevant) stable states. Which result actually occurs depends on details of the state of the detector and the environment that aren't practically knowable, but the whole process is, in principle, deterministic. The macroscopic superposition from the first story never actually obtains, or if it does, it quickly evolves into one of the two stable states.
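
And here is the corresponding reduced description (again just my own toy example; it only shows the partial-trace step, not any of the effective nonlinear dynamics the second story is about):

Python:
import numpy as np

# Amplitudes C[e, d] of the entangled state from the first story:
# e indexes the electron (up, down), d the detector (ready, up_D, down_D).
C = np.zeros((2, 3))
C[0, 1] = C[1, 2] = 1 / np.sqrt(2)     # (|up, up_D> + |down, down_D>)/sqrt(2)

# Ignoring the electron (and, in any realistic model, the environment and
# almost all detector degrees of freedom) leaves the detector described by
# the reduced density matrix rho_D[d, d'] = sum_e conj(C[e, d]) * C[e, d'].
rho_D = C.conj().T @ C
print(np.round(rho_D, 3))   # diag(0, 0.5, 0.5): no interference terms between
                            # the two pointer readings survive

As I read the papers, the second story concerns the further, effectively nonlinear and dissipative, evolution of reduced descriptions like this one, not the exact linear evolution of a pure state of everything.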

So, finally, here's what I'd like to understand better:

(0) Did I describe the second story correctly?

(1) It seems to me that the second story could be told entirely within what Neumaier calls the "formal core" of quantum mechanics, the part that every interpretation agrees on. In his language, after my experiment, the q-probability distribution of the location of the detector needle really is supported only in the "up" region, and this follows from ordinary, uncontroversial quantum mechanics. Is this right? Does anything about the second story actually depend on the thermal interpretation?

(2) A more philosophical question: If macroscopic superpositions never actually appear, why all the fuss about interpretations? (For example, the many worlds interpretation seems to exist entirely to describe what it would mean for the universe to end up in such a macroscopic superposition.) What else even is there to worry about? If this does resolve the measurement problem, why wasn't it pointed out a long time ago?

(3) I've seen many arguments (e.g. https://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro, which cites https://arxiv.org/abs/quant-ph/0112095 and https://arxiv.org/abs/quant-ph/9506020 pp. 14-15) that sound to me like they're saying the second story can't possibly work, usually with language like "decoherence cannot solve the measurement problem". Am I misunderstanding them? If not, would the counterargument just be that they're making the same linearity mistake as the first story?
 

A. Neumaier

(0) Did I describe the second story correctly?
Yes, if (in the case of the qubit discussed in Part IV of my series of papers) 'the system' exclusively refers to the reduced 2-state system and not to any other property of the detector.
(1) It seems to me that the second story could be told entirely within what Neumaier calls the "formal core" of quantum mechanics, the part that every interpretation agrees on. In his language, after my experiment, the q-probability distribution of the location of the detector needle really is supported only in the "up" region, and this follows from ordinary, uncontroversial quantum mechanics. Is this right? Does anything about the second story actually depend on the thermal interpretation?
As long as one only looks at an ensemble of similarly prepared systems, nothing depends on the thermal interpretation. But the thermal interpretation explains what happens in each individual case, and why.
(2) A more philosophical question: If macroscopic superpositions never actually appear, why all the fuss about interpretations? (For example, the many worlds interpretation seems to exist entirely to describe what it would mean for the universe to end up in such a macroscopic superposition.) What else even is there to worry about? If this does resolve the measurement problem, why wasn't it pointed out a long time ago?
That macroscopic systems can in principle be described by a pure state is a prerequisite of the traditional discussions, and is part of almost all interpretations in print. The thermal interpretation explicitly negates this assumption.
(3) I've seen many arguments (e.g. https://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro, which cites https://arxiv.org/abs/quant-ph/0112095 and https://arxiv.org/abs/quant-ph/9506020 pp. 14-15) that sound to me like they're saying the second story can't possibly work, usually with language like "decoherence cannot solve the measurement problem". Am I misunderstanding them? If not, would the counterargument just be that they're making the same linearity mistake as the first story?
Decoherence cannot solve the measurement problem since it still assumes the eigenvalue link to measurement and hence has no explanation for unique outcomes. The thermal interpretation has unique outcomes built in from the outset, hence only has to explain the origin of the probabilities.
 
Thanks for taking the time to reply! I have a couple more questions, but what you've said so far is helpful.

Yes, if (in the case of the qubit discussed in Part IV of my series of papers) 'the system' exclusively refers to the reduced 2-state system and not to any other property of the detector.
Yes, that's what I meant --- I'm referring to the variable that encodes which of the two readings ends up being displayed on the detector, and this omits the vast majority of the physical properties of the detector and indeed the rest of the universe. I think we're on the same page.

As long as one only looks at an ensemble of similarly prepared systems, nothing depends on the thermal interpretation. But the thermal interpretation explains what happens in each individual case, and why.
I think I understand what you're saying here, but I'm asking because I thought that, in addition, you were actually claiming something even stronger: that the "first story" fails on its own terms. That is, I read you as saying that the problem arises from describing the detector as a pure state, which forces you into linear dynamics, which in turn forces you into the macroscopic superposition. You seem to be saying the pure-state assumption is simply a mistake no matter which interpretation you subscribe to, because the detector needle isn't isolated from its environment. Is that right?

Once one agrees that the macroscopic superposition can't happen, and that in the end the q-probability distribution of the location of the needle has almost all its mass in one of the two separated regions, it seems to me that we've already eliminated all the "mystery" that's usually associated with quantum measurements. At that point you just need some way to attach a physical meaning to the mathematical objects in front of you, and I agree with you that, since the q-variance is small, it's very natural to interpret the q-expectation of the needle-position variable as "where the needle is".
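
Spelling the last step out in symbols, as I understand the papers' conventions: with ##\rho## the state after the experiment and ##x## the needle-position variable,
$$\langle x\rangle = \mathrm{Tr}(\rho\,x), \qquad \sigma_x^2 = \mathrm{Tr}(\rho\,x^2) - \langle x\rangle^2,$$
and once ##\sigma_x## is tiny compared with the separation between the "up" and "down" regions, reading ##\langle x\rangle## as the position of the needle seems unproblematic.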

Part of the reason I've enjoyed reading this series of papers is that I find your explanation of measurement very attractive; it's the only story I've ever seen that I could imagine finding fully satisfying. The reason I'm confused is that I don't understand why, if the macroscopic superposition actually doesn't occur, anyone would still be proposing things like many-worlds, Bohmian mechanics, or objective collapse theories. When smart people do things that don't make sense to me, it makes me think I'm not understanding something! Are the people proposing these other interpretations just making the mistake of trying to describe the detector with a pure state?
 

A. Neumaier

Once one agrees that the macroscopic superposition can't happen
Macroscopically, one has density operators, and talking about their superposition is meaningless.
Are the people proposing these other interpretations just making the mistake of trying to describe the detector with a pure state?
In the standard interpretations, this is not a mistake but a principal feature!
 

vanhees71

Well, there are some macroscopic systems which show specific quantum behavior (superfluidity of liquid helium, superconductivity), but it's of course not so easy to prepare macroscopic systems in states such that quantum behavior is observable in macroscopic quantities. That's why classical physics indeed usually works so well for macroscopic matter. The standard interpretation of the QT formalism allows me to say that this is due to coarse graining, averaging over many microscopic degrees of freedom such that quantum fluctuations become irrelevant for the observed macroscopic quantities on the scales of the typical resolution of their dynamical behavior. Within the thermal interpretation I'm not allowed to say this anymore, but I don't know what I'm allowed to say.

It's new to me that detectors are described by "pure states". Usually a detector is described as a classical macroscopic device. Which particular example do you have in mind?
 

A. Neumaier

It's new to me that detectors are described by "pure states". Usually a detector is described as a classical macroscopic device. Which particular example do you have in mind?
Well, surely a classical macroscopic device is also a quantum system, hence described by a quantum state. At least people who need to design detectors in silico treat the macroscopic system formed by the device as a quantum system.

Elsewhere you just said,
Incoherent light can, e.g., be described by taking the intensity of coherent light and randomizing phase differences. The same holds for polarization.
Thus you regard the mixed quantum state with rotation invariant density matrix as a randomized pure (polarized) quantum state. This exemplifies the rule, stated in all standard quantum mechanics books, that the density operator of any quantum system is regarded (by the orthodoxy supported, e.g., by Landau and Lifshitz) as representing an unknown pure state, randomized over the macroscopic uncertainty.
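
For illustration, here is a minimal numerical sketch of this phase randomization (a toy example, not taken from any of the cited books):

Python:
import numpy as np

# Polarization qubit: |H> = (1, 0), |V> = (0, 1).
# Pure state with equal intensities and phase difference phi:
#   |psi(phi)> = (|H> + e^{i phi} |V>) / sqrt(2)
rng = np.random.default_rng(0)

def rho_pure(phi):
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

# Randomize the phase difference and average the pure-state projectors:
phis = rng.uniform(0.0, 2 * np.pi, size=100_000)
rho_mixed = np.mean([rho_pure(phi) for phi in phis], axis=0)
print(np.round(rho_mixed, 3))   # approximately identity/2: unpolarized light

The rotation-invariant density matrix thus appears as an average over randomized pure polarization states.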
 
(3) I've seen many arguments (e.g. https://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro, which cites https://arxiv.org/abs/quant-ph/0112095 and https://arxiv.org/abs/quant-ph/9506020 pp. 14-15) that sound to me like they're saying the second story can't possibly work, usually with language like "decoherence cannot solve the measurement problem". Am I misunderstanding them? If not, would the counterargument just be that they're making the same linearity mistake as the first story?
Decoherence does solve a big part of the measurement problem, because it shows how wavefunctions will seem to collapse upon measurement, without having to postulate that they actually do collapse. The demonstration of this is just a technical matter, independent of any interpretation. What it doesn't explain is where the probabilities come from - that is another story.
 

A. Neumaier

Well, there are some macroscopic systems which show specific quantum behavior (superfluidity of liquid helium, superconductivity), but it's of course not so easy to prepare macroscopic systems in states such that quantum behavior is observable in macroscopic quantities.
The macroscopic laws of a superfluid are as classical as the macroscopic laws of hydromechanics for water, though quantitatively slightly different. Both depend for their details (thermodynamic state functions) on quantum properties of matter. But the macroscopic limit is in both cases classical and deterministic.

That's why classical physics indeed usually works so well for macroscopic matter. The standard interpretation of the QT formalism allows me to say that this is due to coarse graining, averaging over many microscopic degrees of freedom such that quantum fluctuations become irrelevant for the observed macroscopic quantities on the scales of the typical resolution of their dynamical behavior. Within the thermal interpretation I'm not allowed to say this anymore, but I don't know what I'm allowed to say.
The thermal interpretation also explains this by coarse-graining, but the latter is not seen as an averaging process (which it isn't in the standard formulations of coarse graining, except in limiting cases such as very dilute gases). Instead, coarse-graining is seen as an approximation process in which one restricts attention to a collection of relevant macroscopic variables and neglects small amplitude variations with high spatial or temporal frequencies.
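
A purely classical toy illustration of the distinction (my own, and nothing quantum is modelled here): smoothing a single realization over a scale that is long compared to the fast variations but short macroscopically already produces the coarse-grained description, without invoking an ensemble of replicas.

Python:
import numpy as np

# One single "configuration": a slowly varying macroscopic signal plus
# small-amplitude, high-frequency variations.
t = np.linspace(0.0, 10.0, 10_000)
signal = np.sin(0.5 * t) + 0.05 * np.sin(400.0 * t)

# Coarse-graining in the above sense: keep only the slowly varying part of
# this one realization by smoothing over a window that is long compared to
# the fast variations but short on the macroscopic scale.
window = 200
coarse = np.convolve(signal, np.ones(window) / window, mode="same")

# No ensemble average over replicas was performed; the high-frequency
# variations were simply neglected as irrelevant at the chosen resolution.
interior = slice(window, -window)
print(np.max(np.abs(coarse - np.sin(0.5 * t))[interior]))   # small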
 

vanhees71

Well, surely a classical macroscopic device is also a quantum system, hence described by a quantum state. At least people who need to design detectors in silico treat the macroscopic system formed by the device as a quantum system.

Elsewhere you just said,

Thus you regard the mixed quantum state with rotation invariant density matrix as a randomized pure (polarized) quantum state. This exemplifies the rule, stated in all standard quantum mechanics books, that the density operator of any quantum system is regarded (by the orthodoxy supported, e.g., by Landau and Lifshitz) as representing an unknown pure state, randomized over the macroscopic uncertainty.
Well, it's one way to describe it. I'd not say that's the most general case. Another important example is if you have a quantum system that may be prepared in a pure state and you want to describe a subsystem, which you describe by the (usually mixed-state) statistical operator you get from a partial trace.

I'd say that a measurement device is usually described by a mixed rather than a pure state.
 

vanhees71

The macroscopic laws of a superfluid are as classical as the macroscopic laws of hydromechanics for water, though quantitatively slightly different. Both depend for their details (thermodynamic state functions) on quantum properties of matter. But the macroscopic limit is in both cases classical and deterministic.


The thermal interpretation also explains this by coarse-graining, but the latter is not seen as an averaging process (which it isn't in the standard formulations of coarse graining, except in limiting cases such as very dilute gases). Instead, coarse-graining is seen as an approximation process in which one restricts attention to a collection of relevant macroscopic variables and neglects small amplitude variations with high spatial or temporal frequencies.
But that in fact IS the usual coarse-graining I'm talking about. You average over the many microscopic details to describe the average behavior of macroscopic observables. One usual formal way is the gradient expansion (which can also be formulated as a formal ##\hbar## expansion). So, after all, is the thermal interpretation again equivalent to the standard interpretation? Still puzzled...
 

A. Neumaier

I'd say that a measurement device is usually described by a mixed rather than a pure state.
I agree. But this description is usually (and in particular by Landau & Lifshitz) taken to be statistical, i.e., as a mixture indicating ignorance of the true pure state.
Well, it's one way to describe it. I'd not say that's the most general case. Another important example is if you have a quantum system that may be prepared in a pure state and you want to describe a subsystem, which you describe by the (usually mixed-state) statistical operator you get from a partial trace.
Well, you could consider the detector as being a subsystem of the lab; then the lab would be in an unknown pure state (but described by a mixture mostly in local equilibrium) and the detector would be described by a partial trace.

Within the traditional foundations you cannot escape the assumption that the biggest system considered would be in a pure state if the details needed to describe this state could be gathered.
 

A. Neumaier

But that in fact IS the usual coarse-graining I'm talking about. You average over the many microscopic details to describe the average behavior of macroscopic observables.
I don't see any of this in the usual 2PI formalism for deriving the coarse-grained kinetic equations of Kadanoff-Baym, say.
One usual formal way is the gradient expansion (which can also be formulated as a formal ##\hbar## expansion).
In which step, precisely, does the gradient expansion involve an average over microscopic details (rather than an ensemble average over imagined replicas of the fields)?
So, after all, is the thermal interpretation again equivalent to the standard interpretation? Still puzzled...
On the level of statistical mechanics, the thermal interpretation is essentially equivalent to the standard interpretation, except for the way of talking about things. The thermal interpretation talk is adapted to the actual usage rather than to the foundational brimborium.

On this level, the thermal interpretation allows one to describe with the multicanonical ensemble a single lump of silver, whereas tradition takes the ensemble to be a literal ensemble of many identically prepared lumps of silver - even when only one of a particular form (a statue of Pallas Athene, say) has ever been prepared.
 

vanhees71

In the 2PI treatment for deriving the Kadanoff-Baym equation and then doing the gradient expansion to get quantum-transport equations, you work in the Wigner picture, i.e., you Fourier transform in ##(x-y)##, where ##x## and ##y## are the space-time coordinates of the two-point (contour) Green's function, and you neglect the rapid changes in the variable ##(x-y)##, which is effectively an averaging out of (quantum) fluctuations.
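
Schematically (up to sign and normalization conventions), the Wigner transform of the two-point function is
$$\bar{G}(X,p)=\int \mathrm{d}^4 s\; e^{\mathrm{i} p\cdot s/\hbar}\, G\!\left(X+\frac{s}{2},\,X-\frac{s}{2}\right),\qquad X=\frac{x+y}{2},\quad s=x-y,$$
and the gradient expansion then truncates the resulting equations at low orders in derivatives with respect to ##X## (formally, an expansion in ##\hbar##).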

In standard quantum-statistical mechanics one very well describes single macroscopic objects like a lump of silver. The coarse graining is over microscopically large but macroscopically small space-time cells. The Gibbs ensemble is just a tool to think statistically about this (or to program Monte Carlo simulations ;-)).

You still have not made clear to me what the thermal interpretation really is, if I'm not allowed to think of ##Tr(\hat{\rho} \hat{O})## as an averaging procedure!
 

A. Neumaier

In the 2PI treatment for deriving the Kadanoff-Baym equation and then doing the gradient expansion to get quantum-transport equations, you work in the Wigner picture, i.e., you Fourier transform in ##(x-y)##, where ##x## and ##y## are the space-time coordinates of the two-point (contour) Green's function, and you neglect the rapid changes in the variable ##(x-y)##, which is effectively an averaging out of (quantum) fluctuations.
It smoothes rapid spatial changes irrespective of their origin. This is of the same kind as when in classical optics one averages over fast oscillations. It has nothing to do with microscopic degrees of freedom - it is not an average over a large number of atoms or electrons!

In standard quantum-statistical mechanics one very well describes single macroscopic objects like a lump of silver. The coarse graining is over microscopically large but macroscopically small space-time cells. The Gibbs ensemble is just a tool to think statistically about this (or to program Monte Carlo simulations ;-)).
As it is defined, the Gibbs ensemble is an ensemble of copies of the original lump of silver. This was clearly understood in Gibbs' time, when it was a major point of criticism of his method! For example, on p. 226f of
  • P. Hertz, Über die mechanischen Grundlagen der Thermodynamik, Ann. Physik (IV. Folge) 33 (1910), 225-274.
one can read:
Paul Hertz said:
Understood in this way, the Gibbsian definition seems downright absurd. How is a quantity that really belongs to the body supposed to depend not on the state it has, but on a state it could possibly have? [...] An ensemble is mathematically feigned [...] it appears difficult, if not impossible, to extract a physical meaning from the concept of the canonical ensemble.
To reinterpret it as an ensemble of space-time cells is completely changing the meaning it has by definition!
You still have not made clear to me what the thermal interpretation really is, if I'm not allowed to think of ##Tr(\hat{\rho} \hat{O})## as an averaging procedure!
You may think of it as a purely mathematical computation, of the same kind as many other purely mathematical computations done in the derivation of the Kadanoff-Baym equations. You may think of the result as the "macroscopic value" of ##O##, lying somewhere in the convex hull of the spectrum of ##O##.
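
A trivial numerical illustration of the last statement (just a random finite-dimensional example):

Python:
import numpy as np

rng = np.random.default_rng(1)

# A random density matrix rho (Hermitian, positive semidefinite, trace 1)
# and a random Hermitian operator O on a 5-dimensional space.
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
rho = A @ A.conj().T
rho /= np.trace(rho).real

B = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
O = (B + B.conj().T) / 2

value = np.trace(rho @ O).real        # the purely mathematical quantity Tr(rho O)
spec = np.linalg.eigvalsh(O)          # spectrum of O
print(spec.min() <= value <= spec.max())   # True: Tr(rho O) lies in the convex
                                           # hull of the spectrum of O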
 

vanhees71

I see. So it's just an extreme form of the shut-up-and-calculate advice: you use the established math without any heuristics, simply because it works. I find this quite nice, but it's hard to believe that QT would ever have been so successfully applied to the description of real-world processes without some heuristics relating the abstract formalism to them.

On the interpretation of the standard derivation I think we agree, because indeed it's just the same averaging process as in classical statistics. That you may average over much more than just quantum fluctuations is also clear. That's done to the extreme when you further break the dynamics down to ideal hydro, i.e., assuming local equilibrium. From there you go in the other direction, folding in ever more fluctuations in various ways to derive viscous hydro (Chapman-Enskog, method of moments, etc.).
 
