Confusion about the thermal interpretation's account of measurement

  • #1
nicf

Summary:

Does the account of measurement in Neumaier's thermal interpretation papers actually depend on the thermal interpretation?

Main Question or Discussion Point

I'm a mathematician with a longstanding interest in physics, and I've recently been enjoying reading and thinking about Arnold Neumaier's thermal interpretation, including some threads on this forum. There's something that's still confusing me, though, and I'm hoping someone here can clear it up. Most of the questions here come from the third paper in the series.

Consider some experiment, like measuring the spin of a suitably prepared electron, where we can get one of two outcomes. The story usually goes that, before the electron sets off the detector, the state is something like ##\left[\sqrt{\frac12}(|\uparrow_e\rangle+|\downarrow_e\rangle)\right]\otimes|\mbox{ready}\rangle##, where ##|\uparrow_e\rangle## denotes the state of an electron which has spin up around the relevant axis, and afterwards the state is something like ##\sqrt{\frac12}(|\uparrow_e\rangle\otimes |\uparrow_D\rangle+|\downarrow_e\rangle\otimes|\downarrow_D\rangle)##, where ##|\uparrow_D\rangle## denotes a state in which the (macroscopic) detector has reacted the way it would have if the electron had started in the state ##|\uparrow_e\rangle##. It's usually argued that this macroscopic superposition has to arise because the Schrödinger equation is linear. Let's call this the first story.
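(For concreteness, and purely as the standard textbook bookkeeping rather than anything specific to one interpretation: if I write ##|\Psi\rangle## for the entangled post-measurement state above and trace out the detector, assuming ##\langle\uparrow_D|\downarrow_D\rangle\approx 0##, the electron's reduced density matrix is
$$\rho_e=\mathrm{Tr}_D\,|\Psi\rangle\langle\Psi|\approx\tfrac12\,|\uparrow_e\rangle\langle\uparrow_e|+\tfrac12\,|\downarrow_e\rangle\langle\downarrow_e|,$$
a diagonal mixture with no interference terms. This is the decoherence-style description I have in mind below.)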

This description has struck many people (including me) as confusing, since it seems to contradict what I actually see when I run the experiment: if I see the "up" result on my detector, then the "down" term above doesn't seem to have anything to do with the world I see in front of me. It's always seemed to me that this apparent contradiction is the core of the "measurement problem" and, to me at least, resolving it is the central reason to care about interpretations of quantum mechanics.

Neumaier seems to say that the first story is simply incorrect. Instead he tells what I'll call the second story: because the detector sits in a hot, noisy, not-at-all-isolated environment, and I only care about a very small number of the relevant degrees of freedom, I should instead represent it by a reduced density matrix. Since I've chosen to ignore most of the physical degrees of freedom in this system, the detector's position evolves in some complicated, effectively nonlinear way, with the two possible readings as the only (relevant) stable states of the system. Which result actually happens depends on details of the state of the detector and the environment which aren't practically knowable, but the whole process is, in principle, deterministic. But the macroscopic superposition from the first story never actually obtains, or if it does, it quickly evolves into one of the two stable states.

So, finally, here's what I'd like to understand better:

(0) Did I describe the second story correctly?

(1) It seems to me that the second story could be told entirely within what Neumaier calls the "formal core" of quantum mechanics, the part that every interpretation agrees on. In his language, after my experiment, the q-probability distribution of the location of the detector needle really is supported only in the "up" region, and this follows from ordinary, uncontroversial quantum mechanics. Is this right? Does anything about the second story actually depend on the thermal interpretation?

(2) A more philosophical question: If macroscopic superpositions never actually appear, why all the fuss about interpretations? (For example, the many worlds interpretation seems to exist entirely to describe what it would mean for the universe to end up in such a macroscopic superposition.) What else even is there to worry about? If this does resolve the measurement problem, why wasn't it pointed out a long time ago?

(3) I've seen many arguments (e.g. https://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro, which cites https://arxiv.org/abs/quant-ph/0112095 and https://arxiv.org/abs/quant-ph/9506020 pp. 14-15) that sound to me like they're saying the second story can't possibly work, usually with language like "decoherence cannot solve the measurement problem". Am I misunderstanding them? If not, would the counterargument just be that they're making the same linearity mistake as the first story?
 

Answers and Replies

  • #2
A. Neumaier
(0) Did I describe the second story correctly?
Yes, if (in case of the qubit discussed in Part IV of my series of papers) 'the system' exclusively refers to the reduced 2-state system and not to any other property of the detector.
(1) It seems to me that the second story could be told entirely within what Neumaier calls the "formal core" of quantum mechanics, the part that every interpretation agrees on. In his language, after my experiment, the q-probability distribution of the location of the detector needle really is supported only in the "up" region, and this follows from ordinary, uncontroversial quantum mechanics. Is this right? Does anything about the second story actually depend on the thermal interpretation?
As long as one only looks at an ensemble of similarly prepared systems, nothing depends on the thermal interpretation. But the thermal interpretation explains what happens in each individual case, and why.
(2) A more philosophical question: If macroscopic superpositions never actually appear, why all the fuss about interpretations? (For example, the many worlds interpretation seems to exist entirely to describe what it would mean for the universe to end up in such a macroscopic superposition.) What else even is there to worry about? If this does resolve the measurement problem, why wasn't it pointed out a long time ago?
That macroscopic systems can in principle be described by a pure state is a prerequisite of the traditional discussions, and is part of almost all interpretations in print. The thermal interpretation explicitly negates this assumption.
(3) I've seen many arguments (e.g. https://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro, which cites https://arxiv.org/abs/quant-ph/0112095 and https://arxiv.org/abs/quant-ph/9506020 pp. 14-15) that sound to me like they're saying the second story can't possibly work, usually with language like "decoherence cannot solve the measurement problem". Am I misunderstanding them? If not, would the counterargument just be that they're making the same linearity mistake as the first story?
Decoherence cannot solve the measurement problem since it still assumes the eigenvalue link to measurement and hence has no explanation for unique outcomes. The thermal interpretation has unique outcomes built in from the outset, hence only has to explain the origin of the probabilities.
 
  • Like
Likes mattt
  • #3
nicf
Thanks for taking the time to reply! I have a couple more questions, but what you've said so far is helpful.

Yes, if (in case of the qubit discussed in Part IV of my series of papers) 'the system' exclusively refers to the reduced 2-state system and not to any other property of the detector.
Yes, that's what I meant --- I'm referring to the variable that encodes which of the two readings ends up being displayed on the detector, and this omits the vast majority of the physical properties of the detector and indeed the rest of the universe. I think we're on the same page.

As long as one only looks at an ensemble of similarly prepared systems, nothing depends on the thermal interpretation. But the thermal interpretation explains what happens in each individual case, and why.
I think I understand what you're saying here, but I'm asking because I thought that, in addition, you were actually claiming something even stronger: that the "first story" fails on its own terms. That is, I read you as saying that the problem arises from describing the detector as a pure state, which forces you into linear dynamics, which in turn forces you into the macroscopic superposition. You seem to be saying the pure-state assumption is simply a mistake no matter which interpretation you subscribe to, because the detector needle isn't isolated from its environment. Is that right?

Once one agrees that the macroscopic superposition can't happen, and that in the end the q-probability distribution of the location of the needle has almost all its mass in one of the two separated regions, it seems to me that we've already eliminated all the "mystery" that's usually associated with quantum measurements. You now just need some way to attach a physical meaning to the mathematical objects in front of you, and I agree with you that, since the q-variance is small, it's very natural to interpret the q-expectation of the needle position variable as "where the needle is".
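(For reference, and as I understand the notation of the papers: the q-expectation and q-variance of the needle-position operator ##X## in the state ##\rho## are
$$\langle X\rangle=\mathrm{Tr}(\rho X),\qquad \sigma_X^2=\mathrm{Tr}(\rho X^2)-\langle X\rangle^2,$$
and the claim is that after the measurement ##\sigma_X## is tiny compared to the separation between the two pointer positions, so reading ##\langle X\rangle## as "where the needle is" introduces no noticeable ambiguity.)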

Part of the reason I've enjoyed reading this series of papers is that I find your explanation of measurement very attractive; it's the only story I've ever seen that I could imagine finding fully satisfying. The reason I'm confused is that I don't understand why, if the macroscopic superposition actually doesn't occur, anyone would still be proposing things like many-worlds, Bohmian mechanics, or objective collapse theories. When smart people do things that don't make sense to me, it makes me think I'm not understanding something! Are the people proposing these other interpretations just making the mistake of trying to describe the detector with a pure state?
 
  • Like
Likes Troubled Muppet
  • #4
A. Neumaier
Once one agrees that the macroscopic superposition can't happen
Macroscopically, one has density operators, and talking about their superposition is meaningless.
Are the people proposing these other interpretations just making the mistake of trying to describe the detector with a pure state?
In the standard interpretations, this is not a mistake but a principal feature!
 
  • #5
vanhees71
Well, there are some macroscopic systems which show specific quantum behavior (superfluidity of liquid helium, superconductivity), but it's of course not so easy to prepare macroscopic systems in states such that quantum behavior is observable in macroscopic quantities. That's why classical physics indeed usually works so well for macroscopic matter. The standard interpretation of the QT formalism allows me to say that this is due to coarse graining, averaging over many microscopic degrees of freedom such that quantum fluctuations become irrelevant for the observed macroscopic quantities on the scales of the typical resolution of their dynamical behavior. Within the thermal interpretation I'm not allowed to say this anymore, but I don't know what I'm allowed to say.

It's new to me that detectors are described by "pure states". Usually a detector is described as a classical macroscopic device. Which particular example do you have in mind?
 
  • #6
A. Neumaier
It's new to me that detectors are described by "pure states". Usually a detector is described as a classical macroscopic device. Which particular example do you have in mind?
Well, surely a classical macroscopic device is also a quantum system, hence described by a quantum state. At least people who need to design detectors in silico treat the macroscopic system formed by the device as a quantum system.

Elsewhere you just said,
Incoherent light can, e.g., be described by taking the intensity of coherent light and randomize phase differences. The same holds for polarization.
Thus you regard the mixed quantum state with rotation invariant density matrix as a randomized pure (polarized) quantum state. This exemplifies the rule, stated in all standard quantum mechanics books, that the density operator of any quantum system is regarded (by the orthodoxy supported, e.g., by Landau and Lifshitz) as representing an unknown pure state, randomized over the macroscopic uncertainty.
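A small numerical illustration of this reading (a toy sketch added for concreteness, not part of the original argument): averaging projectors onto random pure polarization states reproduces the rotation-invariant mixed state ##\rho=\tfrac12 I##.

Python:
# Toy illustration: averaging pure polarization states over random draws
# reproduces the maximally mixed (unpolarized) density matrix rho = I/2.
import numpy as np

rng = np.random.default_rng(0)

def random_pure_polarization(rng):
    """A Haar-random pure polarization state on C^2 (normalized complex Gaussian vector)."""
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    return psi / np.linalg.norm(psi)

# Average the pure-state projectors |psi><psi| over many random draws.
rho = np.zeros((2, 2), dtype=complex)
n = 100_000
for _ in range(n):
    psi = random_pure_polarization(rng)
    rho += np.outer(psi, psi.conj())
rho /= n

print(np.round(rho, 3))   # approximately [[0.5, 0], [0, 0.5]] = I/2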
 
  • #7
(3) I've seen many arguments (e.g. https://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro, which cites https://arxiv.org/abs/quant-ph/0112095 and https://arxiv.org/abs/quant-ph/9506020 pp. 14-15) that sound to me like they're saying the second story can't possibly work, usually with language like "decoherence cannot solve the measurement problem". Am I misunderstanding them? If not, would the counterargument just be that they're making the same linearity mistake as the first story?
Decoherence does solve a big part of the measurement problem, because it shows how wavefunctions will seem to collapse upon measurement, without having to postulate that they actually do collapse. The demonstration of this is just a technical matter, independent of any interpretation. What it doesn't explain is where probabilities come from - that is another story.
 
  • Like
Likes Demystifier
  • #8
A. Neumaier
Well, there are some macroscopic systems which show specific quantum behavior (superfluidity of liquid Helium, super conductivity), but it's of course not so easy to prepare macroscopic systems in states such that quantum behavior is observable in macroscopic quantities.
The macroscopic laws of a superfluid are as classical as the macroscopic laws of hydromechanics for water, though quantitatively slightly different. Both depend for their details (thermodynamic state functions) on quantum properties of matter. But the macroscopic limit is in both cases classical and deterministic.

That's why classical physics indeed usually works so well for macroscopic matter. The standard interpretation of the QT formalism allows me to say that this is due to coarse graining, averaging over many microscopic degrees of freedom such that quantum fluctuations become irrelevant for the observed macroscopic quantities on the scales of the typical resolution of their dynamical behavior. Within the thermal interpretation I'm not allowed to say this anymore, but I don't know what I'm allowed to say.
The thermal interpretation also explains this by coarse-graining, but the latter is not seen as an averaging process (which it isn't in the standard formulations of coarse graining, except in limiting cases such as very dilute gases). Instead, coarse-graining is seen as an approximation process in which one restricts attention to a collection of relevant macroscopic variables and neglects small amplitude variations with high spatial or temporal frequencies.
 
  • Like
Likes vanhees71
  • #9
vanhees71
Well, surely a classical macroscopic device is also a quantum system, hence described by a quantum state. At least people who need to design detectors in silico treat the macroscopic system formed by the device as a quantum system.

Elsewhere you just said,

Thus you regard the mixed quantum state with rotation invariant density matrix as a randomized pure (polarized) quantum state. This exemplifies the rule, stated in all standard quantum mechanics books, that the density operator of any quantum system is regarded (by the orthodoxy supported, e.g., by Landau and Lifshitz) as representing an unknown pure state, randomized over the macroscopic uncertainty.
Well, it's one way to describe it. I'd not say that's the most general case. Another important example is if you have a quantum system that may be prepared in a pure state and you want to describe a subsystem, which you then describe by the (usually mixed-state) statistical operator you get from a partial trace.

I'd say that a measurement device is usually described by a mixed rather than a pure state.
 
  • #10
vanhees71
The macroscopic laws of a superfluid are as classical as the macroscopic laws of hydromechanics for water, though quantitatively slightly different. Both depend for their details (thermodynamic state functions) on quantum properties of matter. But the macroscopic limit is in both cases classical and deterministic.


The thermal interpretation also explains this by coarse-graining, but the latter is not seen as an averaging process (which it isn't in the standard formulations of coarse graining, except in limiting cases such as very dilute gases). Instead, coarse-graining is seen as an approximation process in which one restricts attention to a collection of relevant macroscopic variables and neglects small amplitude variations with high spatial or temporal frequencies.
But that in fact IS the usual coarse-graining I'm talking about. You average over the many microscopic details to describe the average behavior of macroscopic observables. One usual formal way is the gradient expansion (which can also be formulated as a formal ##\hbar## expansion). So is the thermal interpretation, after all, again equivalent to the standard interpretation? Still puzzled...
 
  • #11
A. Neumaier
I'd say that a measurement device is usually described by a mixed rather than a pure state.
I agree. But this description is usually (and in particular by Landau & Lifshitz) taken to be statistical, i.e., as a mixture indicating ignorance of the true pure state.
Well, it's one way to describe it. I'd not say that's the most general case. Another important example is if you have a quantum system that may be prepared in a pure state and you want to describe a subsystem, which you describe the (usually mixed-state) statistical operator you get from a partial trace.
Well, you could consider the detector as being a subsystem of the lab; then the lab would be in an unknown pure state (but described by a mixture, mostly in local equilibrium) and the detector would be described by a partial trace.

Within the traditional foundations you cannot escape the assumption that the biggest system considered would be in a pure state if the details needed to describe this state could be gathered.
 
  • #12
A. Neumaier
But that in fact IS the usual coarse-graining I'm talking about. You average over the many microscopic details to describe the average behavior of macroscopic observables.
I don't see any of this in the usual 2PI formalism for deriving the coarse-grained kinetic equations of Kadanoff-Baym, say.
One usual formal way is the gradient expansion (which can also be formulated as a formal ##\hbar## expansion).
In which step, precisely, does the gradient expansion involve an average over microscopic details (rather than an ensemble average over imagined replicas of the fields)?
So is the thermal interpretation, after all, again equivalent to the standard interpretation? Still puzzled...
On the level of statistical mechanics, the thermal interpretation is essentially equivalent to the standard interpretation, except for the way of talking about things. The thermal interpretation talk is adapted to the actual usage rather than to the foundational brimborium.

On this level, the thermal interpretation allows one to describe with the multicanonical ensemble a single lump of silver, whereas tradition takes the ensemble to be a literal ensemble of many identically prepared lumps of silver - even when only one of a particular form (a statue of Pallas Athene, say) has ever been prepared.
 
  • #13
vanhees71
In the 2PI treatment for deriving the Kadanoff-Baym equation and then doing the gradient expansion to get quantum-transport equations you work in the Wigner picture, i.e., a Fourier transform in ##(x-y)##, where ##x## and ##y## are the space-time coordinates of the two-point (contour) Green's function, i.e., you neglect the rapid changes in the variable ##(x-y)##, which is effectively an averaging out of (quantum) fluctuations.
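For readers following along, one standard convention for the Wigner transform referred to here (signs and factors of ##\hbar## vary between texts) is
$$G(X,p)=\int d^4s\; e^{\,i p\cdot s/\hbar}\, G\!\left(X+\tfrac{s}{2},\,X-\tfrac{s}{2}\right),\qquad X=\tfrac{x+y}{2},\quad s=x-y,$$
and the gradient expansion then keeps only low orders of derivatives with respect to the slowly varying center coordinate ##X##.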

In standard quantum-statistical mechanics one very well describes single macroscopic objects like a lump of silver. The coarse graining is over microscopically large but macroscopically small space-time cells. The Gibbs ensemble is just a tool to think statistically about this (or to program Monte Carlo simulations ;-)).

You still have not made clear to me what the thermal interpretation really is, if I'm not allowed to think of ##\mathrm{Tr}(\hat{\rho} \hat{O})## as an averaging procedure!
 
  • #14
A. Neumaier
In the 2PI treatment for deriving the Kadanoff-Baym equation and then doing the gradient expansion to get quantum-transport equations you work in the Wigner picture, i.e., a Fourier transform in ##(x-y)##, where ##x## and ##y## are the space-time coordinates of the two-point (contour) Green's function, i.e., you neglect the rapid changes in the variable ##(x-y)##, which is effectively an averaging out of (quantum) fluctuations.
It smoothes rapid spatial changes irrespective of their origin. This is of the same kind as when in classical optics one averages over fast oscillations. It has nothing to do with microscopic degrees of freedom - it is not an average over a large number of atoms or electrons!

In standard quantum-statistical mechanics one very well describes single macroscopic objects like a lump of silver. The coarse graining is over microscopically large but macroscopically small space-time cells. The Gibbs ensemble is just a tool to think statistically about this (or to program Monte Carlo simulations ;-)).
As it is defined, the Gibbs ensemble is an ensemble of copies of the original lump of silver. This was clearly understood in Gibbs' time, when it was a major point of criticism of his method! For example, on p. 226f of
  • P. Hertz, Über die mechanischen Grundlagen der Thermodynamik, Ann. Physik IV. Folge (33) 1910, 225--274.
one can read:
Paul Hertz said:
Understood in this way, the Gibbsian definition seems downright absurd. How is a quantity that really belongs to the body supposed to depend not on the state it has, but on a state it could possibly have? [...] An ensemble is mathematically feigned [...] it seems difficult, if not impossible, to extract a physical meaning from the concept of the canonical ensemble.
To reinterpret it as an ensemble of space-time cells completely changes the meaning it has by definition!
You still have not made clear to me what the thermal interpretation really is, if I'm not allowed to think of ##\mathrm{Tr}(\hat{\rho} \hat{O})## as an averaging procedure!
You may think of it as a purely mathematical computation, of the same kind as many other purely mathematical computations done in the derivation of the Kadanoff-Baym equations. You may think of the result as the ''macroscopic value'' of ##O##, lying somewhere in the convex hull of the spectrum of ##O##.
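A minimal numerical sketch of that last statement (an added illustration, not from the post): for any density matrix ##\rho## and Hermitian ##A##, the number ##\mathrm{Tr}(\rho A)## is real and lies between the smallest and largest eigenvalue of ##A##, and computing it involves no averaging interpretation at all.

Python:
# Check that Tr(rho A) is real and lies in the convex hull of the spectrum of A.
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(n, rng):
    """A random n x n density matrix: positive semidefinite with unit trace."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

def random_hermitian(n, rng):
    """A random n x n Hermitian matrix."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

n = 4
rho = random_density_matrix(n, rng)
A = random_hermitian(n, rng)

value = np.trace(rho @ A).real        # the "macroscopic value" of A
evals = np.linalg.eigvalsh(A)         # spectrum of A, sorted ascending
print(value, evals.min() <= value <= evals.max())   # the check always passes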
 
  • Like
Likes dextercioby
  • #15
vanhees71
I see. So it's just an extreme form of the shut-up-and-calculate advice: you use the established math without any heuristics, simply because it works. I find this quite nice, but it's hard to believe that, without some heuristics relating the abstract formalism to the real world, QT would ever have been so successfully applied to the description of real-world processes.

In our interpretation of the standard derivation I think we agree, because indeed it's just the same averaging process as in classical statistics. That you may average over much more than just quantum fluctuations is also clear. That's done to the extreme when you further break the dynamics down to ideal hydro, i.e. assuming local equilibrium. From this you go in the other direction, figuring in ever more fluctuations in various ways to derive viscous hydro (Chapman-Enskog expansion, method of moments, etc.).
 
  • #16
A. Neumaier
I see. So it's just an extreme form of the shut-up-and-calculate advice: you use the established math without any heuristics, simply because it works. I find this quite nice, but it's hard to believe that, without some heuristics relating the abstract formalism to the real world, QT would ever have been so successfully applied to the description of real-world processes.
The heuristic of ignoring tiny high frequency contributions in space or time - independent of any reference to the microscopic degrees of freedom - is very powerful and sufficient to motivate everything that works. For example, the gradient expansion can be motivated by single-particle quantum mechanics, where in the position representation the gradient expansion is just an expansion into low powers of momentum, i.e., a low-momentum (slow-change) expansion. One just keeps the most slowly changing contributions. Clearly, this is not averaging over microscopic degrees of freedom.
In our interpretation of the standard derivation I think we agree, because indeed it's just the same averaging process as in classical statistics.
Effectively, yes, since you are employing the averaging idea for much more than only statistics.
But from a strict point of view there is a big difference, since the averaging per se has nothing to do with statistics. The thermal interpretation is about giving up statistics at a fundamental level and employing it only where it is needed to reduce the amount of noise. This makes the thermal interpretation applicable to single systems, where at least the literal use of the traditional quantum postulates would require (as in Gibbs' time) the use of a fictitious ensemble of imagined copies of the whole system.
 
  • #17
vanhees71
I've still no clue what the meaning of the formula ##\langle A \rangle=\mathrm{Tr}(\hat{\rho} \hat{A})## is, if it's not an averaging procedure. It doesn't need to be an ensemble average. You can also simply "coarse grain" in the sense you describe it, i.e., average over "microscopically large, macroscopically small" space (or space-time) volumes. This is in fact what's effectively done in the gradient expansion.

Of course, another argument for the gradient expansion as a means to derive effective classical descriptions for macroscopic quantities is that it can as well be formalized as an expansion in powers of ##\hbar##.
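For concreteness, one standard way to write that ##\hbar## expansion (conventions vary) is via the Moyal star product of Wigner symbols,
$$(A\star B)(x,p)=A\,e^{\frac{i\hbar}{2}\left(\overleftarrow{\partial}_x\overrightarrow{\partial}_p-\overleftarrow{\partial}_p\overrightarrow{\partial}_x\right)}B = AB+\frac{i\hbar}{2}\{A,B\}_{\mathrm{PB}}+O(\hbar^2),$$
so truncating at low orders in ##\hbar## (equivalently, in gradients) yields the classical Poisson-bracket transport terms.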

I also don't see a problem with the treatment of a single system in standard quantum theory within the standard statistical interpretation of the state, since, whenever the classical approximation is valid, the standard deviations from the mean values of the macroscopic observables are irrelevant (that's a tautology), and then the probabilistic nature of the quantum state is simply hard to observe and everything looks classical.

Take the famous ##\alpha##-particles in a cloud chamber as an example à la Mott. Each single particle seems to behave classically, i.e., to follow a classical (straight) trajectory, but of course that's because it's not a single-particle system at all, but a single particle interacting (practically continuously) with the vapor in the cloud chamber. The macroscopic trajectory, for which you can in principle observe position and velocity of the particle at a macroscopic level of accuracy by just observing the trails building up while the particle moves through the chamber, is due to this interaction "with the environment". For a single ##\alpha## particle in a vacuum originating from a single ##\alpha##-decaying nucleus you indeed cannot say much: you neither know exactly when it's created nor in which direction it's flying, while all this is known simply by observing the macroscopic trails of the ##\alpha## particle in the cloud chamber.
 
  • #18
DarMM
I've still no clue what the meaning of the formula ##\langle A \rangle=\mathrm{Tr}(\hat{\rho} \hat{A})## is, if it's not an averaging procedure.
It's a property of the system like angular momentum in Classical Mechanics.
 
  • #19
A. Neumaier
I've still no clue what the meaning of the formula ##\langle A \rangle=\mathrm{Tr}(\hat{\rho} \hat{A})## is, if it's not an averaging procedure. It doesn't need to be an ensemble average.
But you defined it in your lecture notes as being the average over the ensemble of all systems prepared in the state ##\rho##. Since you now claim the opposite, you should emphasize in your lecture notes that it does not need to be an ensemble average, but also applies to a single, uniquely prepared system, such as the beautifully and uniquely shaped lump of silver under discussion.

In my view, the meaning of the formula ##\langle A \rangle=\mathrm{Tr}(\rho A)## is crystal clear. It is the trace of a product of two Hermitian operators, expressing a property of the system in the quantum state ##\rho##, just like a function ##A(p,q)## of a classical Hamiltonian system expresses a property of the system in the given classical state ##(p,q)##.

Ostensibly, ##\langle A \rangle## is not an average of anything (unless you introduce additional machinery and then prove it to be such an average). If the Hilbert space is ##C^n##, it is a weighted sum of the ##n^2## matrix elements of ##A##, with in general complex weights. Nothing at all suggests this to be an average over microscopic degrees of freedom, or over short times, or over whatever else you may think of.
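Spelled out in components, that is just
$$\langle A\rangle=\mathrm{Tr}(\rho A)=\sum_{j,k=1}^{n}\rho_{jk}\,A_{kj},$$
where only the diagonal weights ##\rho_{jj}## are guaranteed to be real and nonnegative.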
 
  • #20
vanhees71
Of course, I've defined it in this way because it's easiest to derive the formalism from it. Also the math is indeed crystal clear, but to do physics you need to relate the formalism to real-world observables. So still, if ##\langle A \rangle## is not an average, I don't know what it is and how to apply it to the real world.

Obviously you simply don't understand the argument for why the formal manipulations used to derive macroscopic (classical) behavior, be it from quantum or classical statistical physics, always amount to an averaging procedure over many microscopic degrees of freedom. That all started at the very beginning of statistical mechanics with Bernoulli, Maxwell, and Boltzmann. It has also been used in classical electrodynamics to describe the intensity of electromagnetic fields, particularly in optics (where the averaging is a time average), and in the derivation of electromagnetic properties of matter from classical electron theory (where the averaging is over spatial cells) by Lorentz et al. The same holds true for hydrodynamics (local thermal equilibrium and the corresponding expansions around it, which yield all kinds of transport coefficients). All this is completely analogous in Q(F)T. It's the same basic understanding of the physics underlying the mathematical techniques which indeed turned out to be successful.

So it seems as if the thermal interpretation is just the "shut-up-and-calculate interpretation" pushed to the extreme, such that it's not useful anymore for the (phenomenological) theoretical physicist. I have some sympathy for this approach, because it avoids philosophical gibberish confusing the subject, but if you don't allow heuristic thinking (like the extremely useful idea of the Gibbs ensemble), there's no chance to apply a theory to new physical problems in the real world.
 
  • #21
A. Neumaier
if ##\langle A \rangle## is not an average, I don't know what it is and how to apply it to the real world.
By your (i.e., the conventional minimal statistical) definition, it is an average over the ensemble of equally prepared systems, and nothing else.

How to apply it to the real world, e.g., to a single beautifully shaped lump of silver or to hydromechanics, should be a consequence of the definitions given. If you interpret it as another average, you therefore need to derive it from this original definition (which is possible only in very special model cases). Otherwise, why should one believe you?
Obviously you simply don't understand the argument for why the formal manipulations used to derive macroscopic (classical) behavior, be it from quantum or classical statistical physics, always amount to an averaging procedure over many microscopic degrees of freedom. That all started at the very beginning of statistical mechanics with Bernoulli, Maxwell, and Boltzmann.
Yes, I really don't understand it, since in this generality it is simply false. Your argument is valid only in the special case where you assume (as Bernoulli, Maxwell, and Boltzmann) an ideal gas, so that you have an ensemble of independent particles and not (as in dense matter and in QFT) an ensemble of large systems.
if you don't allow heuristic thinking (like the extremely useful idea of the Gibbs ensemble), there's no chance to apply a theory to new physical problems in the real world.
The thermal interpretation turns the heuristic thinking of Gibbs (where people complained, asking how a property of an actual realization can depend on a theory about all the possibilities, which is indeed not sensible) into something rational that needs no heuristic anymore. Physicists are still allowed to use, in addition to the formally correct stuff, all the heuristics they are accustomed to, as long as it leads to correct predictions, just as they use the heuristics of virtual particles popping in and out of existence, while in fact they just work with the formal rules.
 
  • #22
vanhees71
Sure, in practice nearly everything is mapped to an ideal gas of quasiparticles, if possible, and it's amazing how far you get with this strategy. Among other things it can describe the color of a shiny lump of silver or the hydrodynamics of a fluid.
 
  • #23
nicf
Decoherence does solve a big part of the measurement problem, because it shows how wavefunctions will seem to collapse upon measurement, without having to postulate that they actually do collapse. The demonstration of this is just a technical matter, independent of any interpretation. What it doesn't explain is where probabilities come from - that is another story.
This is a clearer way of saying exactly what I meant, thank you :). Let me use this as a jumping-off point to try to state my original question more clearly, since I think I am still confused.

The part of the measurement problem that's relevant to my question is exactly the part that decoherence doesn't try to solve: what determines which of (say) two possible measurement outcomes I actually end up seeing? The reason I'm confused is that, when I try to combine what I understand about decoherence with what I understand about the account described in the thermal interpretation, I arrive at two conclusions that don't line up with each other:

(a) The decoherence story as it's usually given explains how, using ordinary unitary quantum mechanics with no collapse, I end up in a state where neither possible outcome can "interfere" with the other (since both outcomes are entangled with the environment), thereby explaining why the wavefunction appears to collapse. But if I write down a mathematical description of the final state, there are parts of it that correspond to both of the two possibilities with no way to choose between them. This explanation comes with the additional claim that, due to the linearity of time evolution, there's no possible way that the final state could privilege one outcome over the other. (Bell is also often invoked here, although I don't know if I know how to turn that into a proof.)

(b) The description in the thermal interpretation papers seems to claim that, in fact, if I had a good enough description of the details of the initial state of the microscopic system and the measurement apparatus, I would be able to deduce which of the two possibilities "really happened", and that I could do this, again, using only ordinary unitary quantum mechanics with no collapse.

Since both stories use the same initial condition and the same rule for evolving in time, these seem to be two different claims about the exact same mathematical object --- the density matrix of the final state of the system. If that's true, then one of them ought to be wrong. The claim in (b) is much stronger than (a), and I think that if it works any reasonable person ought to regard something like (b) as a solution to the measurement problem! It would certainly be enough to satisfy me. But I've heard (a) enough times that I'm confused about (b). Is your position that the (a) story is incorrect, or am I misunderstanding something else?
 
  • Like
Likes akvadrako
  • #24
PeterDonis
I could do this, again, using only ordinary unitary quantum mechanics with no collapse.
I'm not sure this is possible. Ordinary unitary QM with decoherence can give you two non-interfering outcomes. It can't give you a single outcome; that obviously violates linearity.

My understanding of the thermal interpretation (remember I'm not its author so my understanding might not be correct) is that the two non-interfering outcomes are actually a meta-stable state of the detector (i.e., of whatever macroscopic object is going to irreversibly record the measurement result), and that random fluctuations cause this meta-stable state to decay into just one of the two outcomes. An analogy that I have seen @A. Neumaier use is a ball on a very sharp peak between two valleys; the ball will not stay on the peak because random fluctuations will cause it to jostle one way or the other and roll down into one of the valleys.
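To illustrate just that ball-on-a-peak analogy (a purely classical toy sketch, not anything from the thermal interpretation papers, and silent on the quantum-dynamics question discussed next): an overdamped Langevin particle started at the unstable maximum of a double well always ends up in one definite well, with the choice fixed by unresolvable details of the noise.

Python:
# Classical toy of the "ball on a sharp peak" picture: overdamped Langevin
# motion in the double-well potential V(x) = x**4/4 - x**2/2.
import numpy as np

def run_trial(seed, steps=20_000, dt=1e-3, noise=0.2):
    """Integrate dx = -V'(x) dt + noise * sqrt(dt) * xi for one noise realization."""
    rng = np.random.default_rng(seed)
    x = 0.0                                  # start balanced on the "peak" at x = 0
    for _ in range(steps):
        force = -(x**3 - x)                  # -dV/dx
        x += force * dt + noise * np.sqrt(dt) * rng.normal()
    return x

# Each trial settles near one of the two stable minima x = -1 or x = +1;
# which one depends only on the noise realization (the "environment details").
outcomes = [int(np.sign(run_trial(seed))) for seed in range(20)]
print(outcomes)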

However, the dynamics of this collapse of a meta-stable detector state into one of the two stable outcomes can't be just ordinary unitary QM, because ordinary unitary QM is linear and linear dynamics can't do that. In ordinary unitary QM, fluctuations in the detector would just become entangled with the system being measured and would preserve the multiple outcomes. There would have to be some nonlinear correction to the dynamics to collapse the state into just one outcome.
 
  • #25
nicf
I'm not sure this is possible. Ordinary unitary QM with decoherence can give you two non-interfering outcomes. It can't give you a single outcome; that obviously violates linearity.
Exactly, that's why I'm confused! My impression is that @A. Neumaier is somehow denying this, and that somehow the refusal to describe macroscopic objects with state vectors is related to the way he gets around this linearity argument, although I don't see how.

If we're supposed to be positing nonunitary dynamics on a fundamental level, then that would obviate my whole question, but from the papers I understood @A. Neumaier to be specifically not doing that.
 
