The thermal interpretation of quantum physics

  • #51
A. Neumaier said:
DarMM said:
I'm currently thinking a bit about the Bell inequalities for this interpretation.

For now, as a side question: have you considered the SIC-POVM conjecture, i.e. that specifying the probabilities of the ##d^2## elements of a SIC-POVM is enough to characterize the state ##\rho## completely? If true, could this be taken into the Thermal Interpretation as the SIC-POVMs being the fundamental beables/quantities?
They are interesting from a combinatorial point of view but nothing fundamental.
The point is that the construction principles for them are irregular, hence have not enough mathematical structure for something that could be considered fundamental.

More importantly, every physical system that can move or vibrate is represented in an infinite-dimensional Hilbert space. Hence anything dependent on finitely many dimensions cannot be fundamental. Despite their recent popularity, foundations of quantum mechanics just based on quantum information theory are highly defective since they do not even have a way to represent the canonical commutation relations, which are fundamental for all of spectroscopy, scattering theory, and quantum chemistry.
 
  • Like
Likes dextercioby
  • #52
akhmeteli said:
I only had in mind the original nonrelativistic Schrödinger equation.
The original nonrelativistic Schrödinger equation is only for a collection of spinless particles. It is far from what is today considered the Schrödinger equation: the equation ##i\hbar \dot \psi = H \psi## for an arbitrary Hamiltonian ##H##. One needs other forms of ##H## almost everywhere - in spectroscopy, in quantum chemistry, in quantum optics, in quantum information theory.

A. Neumaier said:
an analogous statement about a free relativistic particle somehow prepared at time ##t## in a small region of spacetime suffers the same problem.
akhmeteli said:
Could you please give a reference?
I don't have a reference; this seems not to have been considered before. When you work out the solution in terms of the Fourier transform, you get for ##\psi(x,t+x_0)## a convolution of ##\psi(x,t)## (assumed to have compact support) with a function that does not have causal support.
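For concreteness, a minimal sketch of this computation (one space dimension, units ##\hbar=c=1##, time shift ##x_0>0##; notation just for illustration): free evolution is a Fourier multiplier, so
$$\psi(x,t+x_0)=\int dy\, K(x-y;x_0)\,\psi(y,t),\qquad K(x;x_0)=\int\frac{dp}{2\pi}\,e^{ipx}\,e^{-ix_0\,\omega(p)}.$$
Nonrelativistically, ##\omega(p)=p^2/2m## gives ##K(x;x_0)=\sqrt{m/(2\pi i x_0)}\,e^{imx^2/(2x_0)}##, nonzero for all ##x##; for the relativistic dispersion ##\omega(p)=\sqrt{p^2+m^2}## the kernel still does not vanish for ##|x|>x_0##, i.e. it has no causal support, which is the problem referred to above.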
 
  • Like
Likes dextercioby
  • #53
A. Neumaier said:
. . . any interpretation as inadequate that cannot account for the meaning of quantum physics at a time before any life existed...

James B. Hartle shows that this problem is solved in the "post-Everett" CH (consistent histories) generalization of QM, in his lectures "Spacetime QM and the QM of spacetime" (2014):
https://arxiv.org/abs/gr-qc/9304006

Page 21 of the PDF:

"There is nothing incorrect about Copenhagen quantum mechanics. Neither is it, in any sense, opposite to the post-Everett formulation"
 
Last edited:
  • #54
AlexCaledin said:
Page 21 of the PDF
Please link to the pdf.
 
  • #55
- sorry! here you are:
https://arxiv.org/pdf/gr-qc/9304006.pdf
" Spacetime Quantum Mechanics and the Quantum Mechanics of Spacetime
James B. Hartle
(Submitted on 5 Apr 1993 (v1), last revised 14 Jan 2014 (this version, v3))
These are the author's lectures at the 1992 Les Houches Summer School, "Gravitation and Quantizations". They develop a generalized sum-over-histories quantum mechanics for quantum cosmology that does not require either a preferred notion of time or a definition of measurement. The "post-Everett" quantum mechanics of closed systems is reviewed. Generalized quantum theories are defined by three elements (1) the set of fine-grained histories of the closed system which are its most refined possible description, (2) the allowed coarse grainings which are partitions of the fine-grained histories into classes, and (3) a decoherence functional which measures interference between coarse grained histories. Probabilities are assigned to sets of alternative coarse-grained histories that decohere as a consequence of the closed system's dynamics and initial condition. Generalized sum-over histories quantum theories are constructed for non-relativistic quantum mechanics, abelian gauge theories, a single relativistic world line, and for general relativity. For relativity the fine-grained histories are four-metrics and matter fields. Coarse grainings are four-dimensional diffeomorphism invariant partitions of these. The decoherence function is expressed in sum-over-histories form. The quantum mechanics of spacetime is thus expressed in fully spacetime form."
https://arxiv.org/abs/gr-qc/9304006
(it's the size of a good book though)
 
Last edited:
  • #56
A. Neumaier said:
My language is at least as standard as yours: Before you can apply the spectral theorem in some Hilbert space to some operator, you need definitions of both! I define an inner product on ##L^2(R)## and then the operators ##p## and ##q##, to get the necessary Hilbert space and two particular operators on it. Having these definitions, I don't need the spectral theorem at all - except when I need to define transcendental functions of some operator. The difference, given the position representation (or any other representation), is as follows:

What you call the minimal statistical or standard probabilistic interpretation uses this representation for defining irreducible probabilities of measurement in an ensemble of repeated observations, and thus introduces an ill-defined notion of measurement (and hence the measurement problem - though you close your eyes to it) into the very basis of quantum mechanics. It is no longer clear when something counts as a measurement (so that the unitary evolution is modified) and when the Schrödinger equation applies exactly; neither does it tell you why the unitary evolution of the big system consisting of the measured objects and the detector produces definite events. All this leads to the muddy reasoning visible in the literature on the measurement problem.

The thermal interpretation uses this representation instead to define the formal q-expectation of an arbitrary operator ##A## for which the trace in the formal Born rule can be evaluated. (There are many of these, including many nonhermitian ones and many Hermitian, non-selfadjoint ones.) This is the way q-expectations are used in all of statistical mechanics - including your slides. All this is on the formal side of the quantum formalism, with no interpretation implied, and no relation to observations. This eliminates the concept of probability from the foundations and hence allows progress to be made in the interpretation questions.
That's a cultural difference between physicists and mathematicians: the mathematician can live with a set of rules (called axioms) without any reference to the "real world". Of course you can just start in the position representation, define a bunch of symbols calling them q-expectations, and then work out the mathematical properties of this notion. The physicist however needs a relation of the symbols and mathematical notions to observations in the lab. That's what's called interpretation. As with theory and experiment (theory is needed to construct measurement devices for experiments, which might lead to observations that contradict the very theory; then the theory has to be adapted, and new experiments can be invented to test its consequences and consistency etc. etc.) also the interpretation is needed already for model building.

Now, I don't understand why I cannot interpret your q-expectations, as usual, as probabilistic expectation values. This is the first very natural connection to experiments, which always need statistical arguments to make objective sense. For each measurement to be credible you need to repeat the experiment under the same circumstances (in q-language: preparations of ensembles) and analyze the results both statistically and for systematic errors. The true art of experimentalists is not just to measure something but to have a good handle on the errors, and statistics, based on mathematical probability theory, is one of the basic tools of every physicist. This you get already in the first lesson of the introductory physics lab (to the dismay of most students, particularly the theoretically inclined, but it's indeed of vital importance particularly for them ;-)).

Concerning QT, another pillar for making sense of the formalism, which is also already part of the interpretation, is to find the operators that describe the observables. The most convincing argument is to use the symmetries known from classical physics, defining associated conserved quantities via Noether's theorem. The minimal example for the first lessons of the QM1 lecture is the one-dimensional motion of a non-relativistic particle. There you have time-translation invariance leading to the time-evolution operator (in q-language called the Hamiltonian) by finding the corresponding symmetry transformations (unitary for continuous smooth representations of Lie groups, thanks to Wigner's theorem) and the generators defining the observable operators. I guess this is the first place where the observables should be represented by essentially self-adjoint operators, leading to the unitary representations of the (one-parameter) Lie symmetry groups. Then of course you also have momentum from translation invariance along the one direction the particle is moving in, and Galileo boosts to get also a position operator from the corresponding center-of-mass observable (I leave out the somewhat cumbersome discussion of mass in non-relativistic physics, which can fortunately be postponed to the QM 2 lecture, if you want to teach it at all ;-)).

Then you may argue to work in the position representation to begin with, and then the above considerations indeed lead to the operators of the "fundamental observables" position and momentum:
$$\hat{p} \psi(t,x) =-\mathrm{i} \partial_x \psi(t,x),$$
and the time-evolution equation (aka Schrödinger equation)
$$\mathrm{i} \partial_t \psi(t,x)=\hat{H} \psi(t,x).$$
Ok, but now, not having the Born interpretation (for the special case of pure states and precise measurements) at hand, I don't know how to get the connection with real-world experiments.

It's an empirical fact that we can measure positions and momenta with correspondingly constructed macroscopic measurement devices. So we don't need to discuss the complicated technicalities of a particle detector which measures positions or a cloud chamber with a magnetic field to measure momenta and via the energy loss (also based on theory by Bethe and Bloch by the way) to have particle ID etc. etc.

However, I don't see how you make contact with these clearly existing macroscopic "traces" of the microworld, enabling us to get quantitative knowledge about these microscopic entities we call, e.g., electrons, ##\alpha## particles etc. Having the statistical interpretation at hand, it's well known how the heuristics proceeds, and as long as you don't insist that there is a "measurement problem" there is indeed none, because all I can hope from a theory, together with some consistent theoretical interpretation of its connection to these real-world observations, is that it be consistent with these observations. You cannot expect it to satisfy your intuition from your macroscopic everyday experience, which appears to be well described by deterministic classical theories. The point is that this is also true for coarse-grained macroscopic observables, and this is in accordance with quantum statistics too. To coarse grain, of course, you need a description of the coarse-grained observables, for which you again need statistics.

So the big question, for me still unanswered, is indeed this interpretive part of the "thermal interpretation". It's an enigma to me how to make contact between the formalism (which also includes Ehrenfest's theorem, which seems to be another cornerstone of your interpretation, though I don't see how it helps to make contact with the above-described observations) and the observations themselves.

A. Neumaier said:
Then I note that the collection of all these q-expectations has a deterministic dynamics given by a Lie algebra structure, just as the collection of phase space functions in classical mechanics. In the thermal interpretation, the elements of both collections are considered to be beables.

Then I note that in statistical thermodynamics of local equilibrium, the q-expectations of the fields are actual observables, as they are the classical observables of fluid mechanics, whose dynamics is derived from the 1PI formalism - in complete analogy to your 2PI derivation of the Kadanoff-Baym equations. In practice one truncates to a deterministic dissipative theory approximating the deterministic dynamics of all q-expectations. This gives a link to observable deterministic physics - all of fluid mechanics, and thus provides an approximate operational meaning for the field expectations. This is not worse than the operational meaning of classical fields, which is also only approximate since one cannot measure fields at a point with zero diameter.
Yes, this is all very clear, as soon as I have the statistical interpretation and have extended it to "incomplete knowledge" and thus statistical operators to define non-pure states (i.e., states of non-zero entropy, implying incomplete knowledge). If I have just an abstract word like "q-expectations" there's no connection with classical (ideal or viscous) hydro. If I'm allowed to interpret "field expectations" in the usual way probabilistically, this is all well established. BTW, it's not a problem in principle to use QFT instead of the "first-quantization" formalism.
A. Neumaier said:
Then I prove that under certain circumstances, and especially for ideal binary measurements (rather than assuming it always, or under unstated conditions), Born's interpretation of the formal Born rule as a statistical ensemble mean is valid. Thus I recover the probabilistic interpretation in the cases where it is essential, and only there, without having assumed it anywhere.
Well, but you need this probabilistic interpretation before you can derive hydro from the formalism. If not, I've obviously not realized where and how this crucial step is done within your thermal interpretation.
A. Neumaier said:
What then is the meaning of the expectation in this case? It is just a formal q-expectation defined via the trace. Thus you should not complain about my notion!

Born's rule only enters when you interpret S-matrix elements or numerical simulation results in terms of cross sections.
It was about the Green's function in QFT or field correlators like $$\mathrm{i} G^{>}(x,y)=\mathrm{Tr} \hat{\rho} \hat{\phi}(x) \hat{\phi}(y)$$. Of course, that's not an expectation value of anything observable. It's not forbidden to use such auxiliary functions in math to evaluate the observable quantities. Why should it be? As already Heisenberg learned from Einstein, the strictly positivistic approach (i.e., to work only with observable quantities) is neither necessary nor possible in theoretical physics. Also in classical electrodynamics you quite often work with the clearly unobservable potentials to derive the observable quantities (electromagnetic fields, or to be more precise the observable facts we understand as caused by the interaction of the charged matter building the detectors (e.g., our eyes) with the field, in the standard interpretation of classical electromagnetism).
 
  • #57
A. Neumaier said:
The point is that the construction principles for them are irregular, hence have not enough mathematical structure for something that could be considered fundamental.

More importantly, every physical system that can move or vibrate is represented in an infinite-dimensional Hilbert space. Hence anything dependent on finitely many dimensions cannot be fundamental. Despite their recent popularity, foundations of quantum mechanics just based on quantum information theory are highly defective since they do not even have a way to represent the canonical commutation relations, which are fundamental for all of spectroscopy, scattering theory, and quantum chemistry.
I appreciate the construction point, but since your interpretation uses insights from AQFT (quite rightly; the reference to Yngvason is quite refreshing, I have often wondered how Many Worlds would deal with that result), would the "compactness criterion" of Haag, Swieca, Wichmann and Buchholz be of any relevance?

In attempting to characterize those local algebras which admit asymptotic particle states, Haag & Swieca proposed that the space of states on a local algebra ##\mathcal{A}\left(\mathcal{O}\right)## with energy below a threshold ##E## should be finite dimensional. Buchholz and Wichmann replaced this by a stronger property called the "Nuclearity condition"; see:
Buchholz, Detlev and Eyvind H. Wichmann. 1986. Causal independence and the energy-level density of states in local quantum field theory. Comm. Math. Phys. 106: 321-344.

With this condition you can demonstrate both decent thermodynamics and a particle interpretation.

So there is a chance that for QFT infinite-dimensional Hilbert spaces are just unphysical idealizations like pure states.
 
  • #58
DarMM said:
It's a "beable" in Bell's terminology, that is a property of the system in question no different from properties in classical mechanics. Or at least thus is my understanding so far.
How is it related to the outcomes of measurements in the lab, if I'm not allowed to interpret as an average in the probabilistic/statistical sense? That's my question. It's no question within the standard interpretation, where macroscopic measurement outcomes are derivable from the very notion of expectation values in probability theory.
 
  • #59
vanhees71 said:
How is it related to the outcomes of measurements in the lab, if I'm not allowed to interpret as an average in the probabilistic/statistical sense? That's my question. It's no question within the standard interpretation, where macroscopic measurement outcomes are derivable from the very notion of expectation values in probability theory.
My understanding is that lack of knowledge of the unmodelled environment in which the measuring device is embedded will ensure that the measured value ##A_m## will deviate from the true value ##\langle A\rangle##.

In a sense we invert the typical conclusion. Rather than ##\langle A\rangle## predicting the average value of our "precise" measurements, our imprecise noisy measurements prevent us from directly measuring the value ##\langle A\rangle## and we use the statistics of multiple such measurements to compute our measured value of ##\langle A\rangle##.

Ultimately it is no different from measuring a classical quantity. There are measurement errors, which one controls by building a large sample.
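One way to formalize this reading (notation here is just for illustration): model the ##k##-th reading as ##A_k=\langle A\rangle+\varepsilon_k##, with noise ##\varepsilon_k## of mean zero and standard deviation ##\sigma##. Then the sample mean
$$\bar A=\frac{1}{N}\sum_{k=1}^{N}A_k = \langle A\rangle+O\!\left(\frac{\sigma}{\sqrt N}\right)$$
estimates the beable ##\langle A\rangle## with the usual ##1/\sqrt N## statistical error, exactly as for a noisy classical quantity.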
 
  • Like
Likes A. Neumaier
  • #60
@A. Neumaier a few questions:

  1. Do you have a physical picture for ##\mathbb{L}^{*}##, the dual of the Lie algebra of q-expectations? I mean simply: what is it / how do you imagine it physically? Just to get a better sense of the Hamiltonian dynamics.
  2. What is the significance of ##\mathbb{L}^{*}## not being symplectic? Note for both these questions I know the mathematical theory; it's easy to show ##\mathfrak{g}^{*}## is a Poisson manifold for a Lie algebra ##\mathfrak{g}##. I'm more looking for the physical significance in the Thermal Interpretation.
  3. Should I understand ##\mathbb{L}## formally, i.e. the algebra of expectation "symbols" as such, not the algebra of expectations of a specific state ##\rho##? In other words, it isn't truly ##\mathbb{L}_{\rho}##.

Forgive the naivety of these; the interpretation has yet to solidify in my head.
 
  • #61
akhmeteli said:
@A. Neumaier: A quote from your work: "When a particle has been prepared in an ion trap (and hence is there with certainty), Born’s rule implies a tiny but positive probability that at an arbitrarily short time afterwards it is detected a light year away"

I guess this is only true if one assumes a nonrelativistic equation of motion?
A. Neumaier said:
It is true in quantum mechanics, not in quantum field theory. Note that quantum mechanics has no consistent relativistic particle picture, except in the free case. Thus an atom in an ion trap cannot be consistently modeled by (fully) relativistic quantum mechanics.

But for a free particle, if one knew the position at one time to be located in a small compact region of space, it could the next moment be almost everywhere with a nonzero probability.
akhmeteli said:
So it looks like the statement I quoted is indeed true only for nonrelativistic quantum mechanics, if the atom in a trap cannot be modeled by relativistic quantum mechanics, without using quantum field theory.
A. Neumaier said:
The statement about the ion trap yes, but an analogous statement about a free relativistic particle somehow prepared at time ##t## in a small region of spacetime suffers the same problem.
akhmeteli said:
Could you please give a reference?
A. Neumaier said:
I don't have a reference; this seems not to have been considered before. When you work out the solution in terms of the Fourier transform, you get for ##\psi(x,t+x_0)## a convolution of ##\psi(x,t)## (assumed to have compact support) with a function that does not have causal support.
Your reasoning is not convincing at all (at least not until you provide more details). So far I cannot accept your statement for a free relativistic particle, and the reasoning is as follows. As far as I know, the retarded Green's function for the Klein-Gordon operator has support within the future light cone (including its boundaries) (see, e.g., https://books.google.com/books?id=ttuO8-_D_oUC&pg=PA173&lpg=PA173&dq=klein+gordon+retarded+green+function+light+cone&source=bl&ots=24Z2Z4hYeD&sig=ACfU3U1ajzmVFBVlS53NpibBGXJVDovgHA&hl=en&sa=X&ved=2ahUKEwjN3_Szv-zgAhVPZawKHdEaBe04ChDoATAFegQICRAB#v=onepage&q=klein gordon retarded green function light cone&f=false). It satisfies the Klein-Gordon equation outside the source, for example, for t>0. So the function has a compact support at t=1, evolves in accordance with the Klein-Gordon equation between t=1 and t=2, and has a compact support at t=2.
 
  • #62
akhmeteli said:
So far I cannot accept your statement for a free relativistic particle, and the reasoning is as follows. As far as I know, the retarded Green's function for the Klein-Gordon operator has support within the future light cone (including its boundaries) (see, e.g., https://books.google.com/books?id=ttuO8-_D_oUC&pg=PA173&lpg=PA173&dq=klein+gordon+retarded+green+function+light+cone&source=bl&ots=24Z2Z4hYeD&sig=ACfU3U1ajzmVFBVlS53NpibBGXJVDovgHA&hl=en&sa=X&ved=2ahUKEwjN3_Szv-zgAhVPZawKHdEaBe04ChDoATAFegQICRAB#v=onepage&q=klein gordon retarded green function light cone&f=false). It satisfies the Klein-Gordon equation outside the source, for example, for t>0. So the function has a compact support at t=1, evolves in accordance with the Klein-Gordon equation between t=1 and t=2, and has a compact support at t=2.
I agree that the retarded Green's functions and their linear combinations are causal. They form a representation of the physical Hilbert space of the electron.

However, in this representation (for fixed time ##t##), ##|\psi(x,t)|^2## does not have the interpretation of a position probability density! The reason is that multiplication by ##x## is not an operator on a dense subspace of this Hilbert space. It introduces negative-energy frequencies! Therefore having compact support ##C## in this representation cannot be interpreted as being localized in ##C##.

To get a probability interpretation you need a valid 3D position operator with commuting components. This is the Newton-Wigner operator. See the discussion in the item ''Particle positions and the position operator'' from my Theoretical Physics FAQ, and the remarks following https://www.physicsforums.com/posts/6136475/
If you transform to a representation in which the Newton-Wigner operator is diagonal you get a transformed wave function with a probability interpretation. But in this representation, relativistic causality is lost - since the Newton-Wigner operator is observer dependent.
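For definiteness (momentum space, units ##\hbar=c=1##, ##E_p=\sqrt{p^2+m^2}##; normalization and Fourier conventions vary), the transformation is
$$\psi_{NW}(p)=\frac{\psi(p)}{\sqrt{2E_p}},\qquad \int \frac{\mathrm{d}^3p}{2E_p}\,|\psi(p)|^2=\int \mathrm{d}^3p\,|\psi_{NW}(p)|^2,$$
so ##\psi_{NW}## carries the standard ##L^2## inner product, and ##x_{NW}=\mathrm{i}\nabla_p## is a self-adjoint position operator with commuting components. But the factor ##\sqrt{2E_p}## acts nonlocally in position space, which is where relativistic causality is lost.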
 
  • Like
Likes dextercioby and vanhees71
  • #63
DarMM said:
  1. Do you have a physical picture for ##\mathbb{L}^{*}##, the dual of the Lie algebra of q-expectations? I mean simply: what is it / how do you imagine it physically? Just to get a better sense of the Hamiltonian dynamics.
  2. What is the significance of ##\mathbb{L}^{*}## not being symplectic? Note for both these questions I know the mathematical theory; it's easy to show ##\mathfrak{g}^{*}## is a Poisson manifold for a Lie algebra ##\mathfrak{g}##. I'm more looking for the physical significance in the Thermal Interpretation.
  3. Should I understand ##\mathbb{L}## formally, i.e. the algebra of expectation "symbols" as such, not the algebra of expectations of a specific state ##\rho##? In other words, it isn't truly ##\mathbb{L}_{\rho}##.
Consider first the Lie *-algebra ##\mathbb{L}## of smooth functions ##f(p,q)## on classical phase space, with the negative Poisson bracket as Lie product and * as complex conjugation. The Lie *-algebra can be partially ordered by defining ##f\ge 0## iff ##f## takes values in the nonnegative reals. A state is a (nice enough) monotone *-linear functional on ##\mathbb{L}##, hence an element of ##\mathbb{L}^{*}##. A general element of ##\mathbb{L}^{*}## may therefore be considered as a ''complex state'', in the same sense as one can generalize measures to complex measures.

Essentially the same holds in the quantum case for the Lie *-algebra of q-expectation symbols (as you observed). In abstract terms it is by definition isomorphic to the Lie *-algebra of linear operators ##A## on a nuclear space in QM, with the quantum Lie product and taking adjoints as *, and in QFT to a more complicated Lie *-algebra (the traditional ##C^*##-algebraic setting by Haag is not quite appropriate, as it doesn't contain the most relevant physical observables, which are unbounded), with the partial order induced by defining ##A\ge 0## iff ##A## is Hermitian and positive semidefinite. States are again (nice enough) monotone linear functionals. They turn the q-expectation symbols into actual q-expectations (i.e., complex numbers). Thus states are again the most well-behaved elements of ##\mathbb{L}^{*}##.

This should answer 1. and 3. As to 2., a nonsymplectic Poisson manifold can (in finite dimensions) be foliated into symplectic leaves, often characterized by specific values of Casimir operators (i.e., elements in the Lie-Poisson algebra whose Lie product with everything vanishes). The actual Hamiltonian dynamics happens on one of these symplectic leaves, since all Casimirs are conserved. In infinite dimensions (needed already for a single thermal oscillator) this holds too, in a less rigorous sense.

A simple example is ##R^3## with the cross product as Lie product. It is isomorphic to ##so(3)## and describes in this representation a rigid rotator. ##\mathbb{L}^{*}## is spanned by the three components of ##J##, and the functions of ##J^2## are the Casimir operators. Assigning to ##J## a particular 3-dimensional vector gives the classical angular momentum in a particular state. The Lie *-algebra is the corresponding complexification, hence strictly speaking it is ##C^3##.
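Explicitly, the Lie-Poisson bracket on this ##R^3## reads (up to sign conventions)
$$\{f,g\}(J)=J\cdot\big(\nabla f(J)\times\nabla g(J)\big),$$
so that ##\{J_a,J_b\}=\epsilon_{abc}J_c##; any function of ##J^2## has vanishing bracket with everything, and the symplectic leaves are the spheres ##|J|=\mathrm{const}##, on which the rotator's Hamiltonian dynamics takes place.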

The same Lie algebra is also isomorphic to ##su(2)##, the Lie algebra of traceless Hermitian ##2\times 2## matrices, and then describes (in complexified form) the thermal setting of a single qubit. In this case, we think of ##\mathbb{L}^{*}## as mapping the three Pauli matrices ##\sigma_j## to three numbers ##S_j##, and extending the map linearly to the whole Lie algebra. Augmented by ##S_0=1## to account for the identity matrix, which extends the Lie algebra to that of all Hermitian matrices, this leads to the classical description of the qubit discussed in Subsection 3.5 of Part III. (Note: misprints there: all ##SS## should be bold ##\mathbf{S}##; there must be a macro problem in the arXiv version!)
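For the qubit this amounts to the familiar Bloch parametrization: with ##S_j=\langle\sigma_j\rangle=\mathrm{Tr}\,\rho\sigma_j## and ##S_0=\mathrm{Tr}\,\rho=1##,
$$\rho=\tfrac{1}{2}\left(S_0\,\mathbf{1}+\mathbf{S}\cdot\sigma\right),\qquad |\mathbf{S}|\le 1,$$
and for ##H=\tfrac{1}{2}\,\omega\cdot\sigma## (units ##\hbar=1##, up to sign conventions) the deterministic dynamics of the q-expectations is the classical-looking precession ##\dot{\mathbf S}=\omega\times\mathbf S##.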
 
Last edited:
  • Like
Likes dextercioby and DarMM
  • #64
DarMM said:
I appreciate the construction point, but since your interpretation uses insights from AQFT (quite rightly; the reference to Yngvason is quite refreshing, I have often wondered how Many Worlds would deal with that result), would the "compactness criterion" of Haag, Swieca, Wichmann and Buchholz be of any relevance?
I don't know. Much of algebraic QFT is for my taste far too abstract, and I cannot easily read papers on the subject. I just borrowed the simplest aspects, in as far as I found them useful.
DarMM said:
Haag & Swieca proposed that the space of states on a local algebra ##\mathcal{A}\left(\mathcal{O}\right)## with energy below a threshold ##E## should be finite dimensional. [...]
So there is a chance that for QFT infinite-dimensional Hilbert spaces are just unphysical idealizations like pure states.
No. For a satisfactory interpretation, one needs all energies, not only those below some threshold. The contributions of the arbitrarily high energies (with their associated arbitrarily high frequencies) are precisely what makes thermal physics dissipative and hence realistic, and what gives rise to the stochastic aspects of quantum physics!
 
  • Like
Likes dextercioby, Mentz114 and DarMM
  • #65
vanhees71 said:
The physicist however needs a relation of the symbols and mathematical notions to observations in the lab. That's what's called interpretation. As with theory and experiment (theory is needed to construct measurement devices for experiments, which might lead to observations that contradict the very theory; then the theory has to be adapted, and new experiments can be invented to test its consequences and consistency etc. etc.) also the interpretation is needed already for model building.
Well, I told you how to interpret ##\langle A\rangle## for macroscopic q-observables ##A## in terms of a single measurement of a piece of matter in equilibrium, but this didn't reach your understanding. I also told you that Subsections 3.3-3.4 of Part II spell out conditions under which ##\langle A\rangle## can be viewed as a sample average, but you apparently didn't even read it. You simply don't care about how I want things to be interpreted!
vanhees71 said:
Now, I don't understand why I cannot interpret your q-expectations, as usual, as probabilistic expectation values.
Because then you get your minimal interpretation and not the thermal interpretation. You cannot interpret one interpretation in terms of another nonequivalent one! That you try to do this rather than trying to understand the thermal interpretation in its own terms is the reason why in this thread we practically always talk past each other.
vanhees71 said:
Then you may argue to work in the position representation to begin with, and then the above considerations indeed lead to the operators of the "fundamental observables" position and momentum:
$$\hat{p} \psi(t,x) =-\mathrm{i} \partial_x \psi(t,x),$$
and the time-evolution equation (aka Schrödinger equation)
$$\mathrm{i} \partial_t \psi(t,x)=\hat{H} \psi(t,x).$$
Ok, but now, not having the Born interpretation (for the special case of pure states and precise measurements) at hand, I don't know how to get the connection with real-world experiments.
I get it in the same informal way as in the classical case, where there is no Born interpretation but we still know how to measure the approximate position and momentum of a particle. In both the classical case and the quantum case we measure the position and the momentum (knowing how this is done from experience with lab experiments) and get an approximation for its value. That's it! Your minimal interpretation is that you get in this way an approximation of an eigenvalue; my thermal interpretation is instead that you get an approximation of the q-expectation. Both are compatible with experiment, although quite different in their theoretical implications!

Most interpretations even claim that one gets an exact eigenvalue. But this contradicts experiment: The energy levels of atoms and molecules are only approximately known though they are given exactly by the eigenvalues of the Hamiltonian H, supposedly the only possible results of measurements of the - suitably normalized - energy. And H is the most important ''observable'' in statistical mechanics!

vanhees71 said:
However, I don't see how you make contact with these clearly existing macroscopic "traces" of the microworld, enabling us to get quantitative knowledge about these microscopic entities we call, e.g., electrons, ##\alpha## particles
The thermal interpretation says that particles are fiction (which may be appropriate under special circumstances). In reality you have beams (states of the electron field, an effective alpha particle field, etc., concentrated along a small neighborhood of a mathematical curve) with approximately known properties (charge densities, spin densities, energy densities, etc.). If you place a detector into the path of a beam you measure these densities - accurately if the densities are high, erratically and inaccurately when they are very low. This is very close to experimental practice; how could it be closer?
vanhees71 said:
Well, but you need this probabilistic interpretation before you can derive hydro from the formalism. If not, I've obviously not realized where and how this crucial step is done within your thermal interpretation.
No. You only need the 1PI formalism, which nowhere talks about probabilities. It uses q-expectations throughout, nothing else!
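In formulas (standard 1PI conventions; signs vary): the dynamical object is the q-expectation ##\varphi(x)=\langle\hat\phi(x)\rangle##, and the equation of motion is the stationarity condition of the effective action,
$$\frac{\delta \Gamma[\varphi]}{\delta\varphi(x)}=-J(x),$$
with ##J## the external source (zero in the absence of driving): a deterministic field equation in which no probabilities appear.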
vanhees71 said:
It was about the Green's function in QFT or field correlators like $$\mathrm{i} G^{>}(x,y)=\mathrm{Tr} \hat{\rho} \hat{\phi}(x) \hat{\phi}(y)$$. Of course, that's not an expectation value of anything observable.
Thus you use expectation terminology and notation (i.e., q-expectations) for something that is not an expectation value of anything, and you get useful results that you can later interpret in the right context in terms of experimental cross sections, etc. The thermal interpretation just does this consistently, observing that in almost everything done in quantum mechanics and quantum field theory, only q-expectations are computed and worked with, and the experimental interpretation comes only at the very end!

Sometimes, the experiment involves stochastic data (counts of events of certain kinds, many low accuracy measurements) and the theoretical result is interpreted as a probability or sample mean. In many other cases, the experiment involves just a few measurements - for example, of temperature, pressure, and mass, or of spectral lines and spectral widths -, and the theoretical result is interpreted without invoking any probability or statistics.

Therefore there is no need at all to put the statistical/probabilistic stuff into the foundations of quantum physics. As it always was before the advent of quantum mechanics, statistics and probability are experimental techniques for producing reproducible information from nonreproducible (and thus noisy) measurements; nothing more!
 
Last edited:
  • #66
A. Neumaier said:
Most interpretations even claim that one gets an exact eigenvalue. But this contradicts experiment: The energy levels of atoms and molecules are only approximately known though they are given exactly by the eigenvalues of the Hamiltonian H, supposedly the only possible results of measurements of the - suitably normalized - energy. And H is the most important ''observable'' in statistical mechanics!
Your lecture notes on statistical mechanics, revised yesterday (p.20 in the version of 5th March, 2019), are a little more cautious in formulating the traditional Born rule:
Hendrik van Hees said:
A possible result of a precise measurement of the observable O is necessarily an eigenvalue of the corresponding operator O
With this formulation, my argument only shows that there are no ''precise measurements'' of energy.

But then with your foundations, the whole of statistical mechanics hangs in the air because these foundations are too imprecise!

You seem to interpret the total energy in statistical thermodynamics as a mean of somehow measured energies of the zillions of atoms in the macroscopic body.
vanhees71 said:
This is the formal description of an "ensemble average" in the sense that one averages over the microscopic fluctuations by just "blurring" the observation to the accuracy/resolution of typical macroscopic time and space scales, and thus "averaging" over all fluctuations at the microscopic space-time scales.
But your postulates in the lecture notes apply (as stated) only to measurements, not to unmeasured averages over unobserved fluctuations. Thus it seems that you assume that a body in equilibrium silently and miraculously performs ##10^{23}## measurements and averages these. But how are these measured? How often? How long does it take? Where are the recorded measurement results? What is the underlying notion of measurement? And how do these surely very inaccurate and tiny measurements result in a highly accurate q-expectation value? Where is an associated error analysis guaranteeing the observed accuracy of the total energy measured by the thermal engineer?

You cannot seriously assume these zillions of measurements. But then you cannot conclude anything from your postulates, which are explicitly about measured stuff.

Or are they about unmeasured stuff? But then it is not a bridge to the observed world, and the word 'measurement' is just pretense that it were so.

The thermal interpretation has no such problems! It only claims that the q-expectation is approximately measured when it is known to be measured and a measurement result is obtained by the standard measurement protocols.
 
Last edited:
  • Like
Likes dextercioby
  • #67
A. Neumaier said:
No. For a satisfactory interpretation, one needs all energies, not only those below some threshold. The contributions of the arbitrarily high energies (with their associated arbitrarily high frequencies) are precisely what makes thermal physics dissipative and hence realistic, and what gives rise to the stochastic aspects of quantum physics!
Could you explain this a bit more? Surely a finite subregion of spacetime contains a maximum energy level, and the compactness criterion is known to be valid for free fields (as is the Nuclearity condition); generally in AQFT it is considered that the Hilbert space of states in a finite subregion is finite dimensional, as this condition implies a sensible thermodynamics and an asymptotic particle interpretation.

I appreciate how dissipation allows a realist account of the stochastic nature of QM in your interpretation (based on the lucid account in section 5.2 of Paper III), so no argument there. I'm simply wondering about the need for infinite-dimensional Hilbert spaces in finite spacetime volumes.
 
  • #68
A. Neumaier said:
Consider first ...which extends the Lie algebra to that of all Hermitian matrices, this leads to the classical description of the qubit discussed in Subsection 3.5 of Part III. (Note: misprints there: all ##SS## should be bold ##\mathbf{S}##; there must be a macro problem in the arXiv version!)
Thank you for this very clear!

So, a separate question: for, let's say, a two-system state ##\rho_{AB}## with reduced density matrices ##\rho_A## and ##\rho_B##, where we have two observables ##\mathcal{O}_A## and ##\mathcal{O}_B##, we can obviously have:
$$\rho_{AB}\left(\mathcal{O}_A\mathcal{O}_B\right) \neq \rho_A\left(\mathcal{O}_A\right)\rho_B\left(\mathcal{O}_B\right)$$
(Obvious abuse of notation here where on the left hand side what is labelled ##\mathcal{O}_A## is really ##\mathcal{O}_A \otimes \mathbb{I}_{B}##)

In most "probabilistic interpretations" this is simply correlation. However if ##\langle \mathcal{O}_A\mathcal{O}_B\rangle_{\rho_{AB}}## is an ontic property of the total system what does it mean for it not to simply be the product of the single system ontic properties ##\langle \mathcal{O}_A \rangle_{\rho_A}## and ##\langle \mathcal{O}_B \rangle_{\rho_B}##?
 
  • #69
A. Neumaier said:
I agree that the retarded Green's functions and their linear combinations are causal. They form a representation of the physical Hilbert space of the electron.

However, in this representation (for fixed time ##t##), ##|\psi(x,t)|^2## does not have the interpretation of a position probability density! The reason is that multiplication by ##x## is not an operator on a dense subspace of this Hilbert space. It introduces negative-energy frequencies!
Well, this value ##|\psi(x,t)|^2## cannot be a probability density for Klein-Gordon for a different reason - it is not the temporal component of the current. However, ##\bar{\psi}\gamma^0\psi## can be a probability density for the Dirac equation. Your argument against that is about negative energy; therefore, it is based on the fact that there is no consistent one-particle interpretation of the Dirac equation, either free or not (in one of your previous posts you seemed to suggest that using holes is OK for free Dirac, but as soon as you mention holes you don't have a one-particle theory). Therefore, the free Dirac equation also has a serious problem. As I said, you cannot fault Born's rule for having a problem with a problematic equation.
 
  • #70
DarMM said:
So, a separate question: for, let's say, a two-system state ##\rho_{AB}## with reduced density matrices ##\rho_A## and ##\rho_B##, where we have two observables ##\mathcal{O}_A## and ##\mathcal{O}_B##, we can obviously have:
$$\rho_{AB}\left(\mathcal{O}_A\mathcal{O}_B\right) \neq \rho_A\left(\mathcal{O}_A\right)\rho_B\left(\mathcal{O}_B\right)$$
(Obvious abuse of notation here where on the left hand side what is labelled ##\mathcal{O}_A## is really ##\mathcal{O}_A \otimes \mathbb{I}_{B}##)

In most "probabilistic interpretations" this is simply correlation. However if ##\langle \mathcal{O}_A\mathcal{O}_B\rangle_{\rho_{AB}}## is an ontic property of the total system what does it mean for it not to simply be the product of the single system ontic properties ##\langle \mathcal{O}_A \rangle_{\rho_A}## and ##\langle \mathcal{O}_B \rangle_{\rho_B}##?
It means that there are additional correlation degrees of freedom:

Take your observables to be fields; you get pair correlations of the fluctuations. Locally, via a Wigner transformation, this gives kinetic contributions, but if A and B refer to causally disjoint regions, say, you get nonlocal correlations, the beables needed to violate the assumptions of Bell's theorem.
 
Last edited:
  • Like
Likes DarMM
  • #71
DarMM said:
Could you explain this a bit more? Surely a finite subregion of spacetime contains a maximum energy level, and the compactness criterion is known to be valid for free fields (as is the Nuclearity condition); generally in AQFT it is considered that the Hilbert space of states in a finite subregion is finite dimensional, as this condition implies a sensible thermodynamics and an asymptotic particle interpretation.

I appreciate how dissipation allows a realist account of the stochastic nature of QM in your interpretation (based on the lucid account in section 5.2 of Paper III), so no argument there. I'm simply wondering about the need for infinite-dimensional Hilbert spaces in finite spacetime volumes.
Unbounded space and unbounded energy are needed to make dissipation possible!

Classically it ensures, for example, that Poincaré's recurrence theorem cannot be applied. I don't know what the right quantum analogue should be.

I don't know yet the precise mechanism that could rigorously lead to dissipation. The common wisdom is to employ the thermodynamic limit and an associated phase transition, but this limit is an idealization that is unlikely to be the full truth.

Thus there are many interesting open questions with significant mathematical challenges. In my opinion, these are much more important than proving or analyzing no-go theorems that assume that the Born rule is an exact law of Nature.
 
Last edited:
  • Like
Likes dextercioby and DarMM
  • #72
What exactly does "exact" mean, when applied to a probabilistic rule?
 
  • #73
AlexCaledin said:
What exactly does "exact" mean, when applied to a probabilistic rule?
Exact refers to the facts that
  1. the possible measurement values are the exact eigenvalues (according to most interpretations),
  2. theoretical conclusions are drawn on the level of probability theory (which is exact, except for its application to reality), and
  3. the probabilities follow exactly the law of large numbers (when compared with experiment).
 
  • Like
Likes AlexCaledin
  • #74
Thank you... So exact rules can exist in mathematical models.
 
  • #75
A. Neumaier said:
Your lecture notes on statistical mechanics, revised yesterday (p.20 in the version of 5th March, 2019), are a little more cautious in formulating the traditional Born rule:

With this formulation, my argument only shows that there are no ''precise measurements'' of energy.

But then with your foundations, the whole of statistical mechanics hangs in the air because these foundations are too imprecise!

You seem to interpret the total energy in statistical thermodynamics as a mean of somehow measured energies of the zillions of atoms in the macroscopic body.

But your postulates in the lecture notes apply (as stated) only to measurements, not to unmeasured averages over unobserved fluctuations. Thus it seems that you assume that a body in equilibrium silently and miraculously performs ##10^{23}## measurements and averages these. But how are these measured? How often? How long does it take? Where are the recorded measurement results? What is the underlying notion of measurement? And how do these surely very inaccurate and tiny measurements result in a highly accurate q-expectation value? Where is an associated error analysis guaranteeing the observed accuracy of the total energy measured by the thermal engineer?

You cannot seriously assume these zillions of measurements. But then you cannot conclude anything from your postulates, which are explicitly about measured stuff.

Or are they about unmeasured stuff? But then it is not a bridge to the observed world, and the word 'measurement' is just pretense that it were so.

The thermal interpretation has no such problems! It only claims that the q-expectation is approximately measured when it is known to be measured and a measurement result is obtained by the standard measurement protocols.
The meaning of your interpretation gets more and more enigmatic to me.

In the standard interpretation the possible values of observables are given by the spectral values of self-adjoint operators. To find these values you'd have to measure energy precisely. This is a fiction of course. It's even a fiction in classical physics, because real-world measurements are always uncertain, and that's why we need statistics from day one in the introductory physics lab to evaluate our experiments. Quantum theory has nothing to do with these uncertainties of real-world measurements.

At the same time, you say the very same things about measurements within your thermal interpretation that I express within the standard interpretation. As long as the meaning of q-averages is not clarified, I cannot even understand the difference between the statements. That's the problem.

In another posting you claim I'd not have read Section 3.3. I have read it, but obviously it did not convey to me what you really wanted to say, because already at the very beginning I cannot make any sense of the words without the standard probabilistic interpretation of the meaning of the trace formula. That's the meaning the Ehrenfest theorem has in QT. I've no clue what you mean by "Ehrenfest picture". I know the Schrödinger, the Heisenberg and the general Dirac picture, but those are something completely different. Misunderstanding a text is not always, and never solely, the fault of the reader...

As I'd already suggested in a private e-mail conversation, for me your thermal interpretation is not different from the standard interpretation as expressed by van Kampen in the following informal paper:

https://doi.org/10.1016/0378-4371(88)90105-7

There's also no problem with single measurements in the standard interpretation. The definite reading of a measurement apparatus's pointer is due to the coarse graining of the reading: the macroscopic pointer position is an average over many fluctuations over times that are macroscopically small but microscopically huge, the fluctuations being invisible to us within the resolution of the reading.

A classical analog is the definite reading of a galvanometer measuring a rectified AC current. The inertia of the pointer leads to an effective time-averaging over the fluctuating current, yielding the "effective current" (via appropriate gauging of the scale). For the unrectified AC current the same setup gives a 0 reading of the galvanometer through the same "averaging process".
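In formulas, the pointer effectively displays the time average
$$\bar I=\frac{1}{T}\int_0^T I(t)\,\mathrm{d}t$$
over its response time ##T##. For ##I(t)=I_0\sin\omega t## (with ##T## spanning many periods) this tends to ##0##, while for the rectified current ##I(t)=I_0|\sin\omega t|## it tends to ##2I_0/\pi##: a single definite reading obtained by averaging over fluctuations.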

Averaging in the standard interpretation of QT is not necessarily the repetition of a measurement in the sense of a Gibbs ensemble!
 
Last edited:
  • #76
A. Neumaier said:
The thermal interpretation says that particles are fiction

So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
 
  • #77
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
This isn't really something confined to @A. Neumaier 's thermal interpretation. In interacting QFTs, particles only exist asymptotically in scattering processes. In the Standard Model, hydrogen is a state which (under scattering processes) can evolve to a state with large overlap with a proton-electron product state.

In QFT the only sense you can give to one particle "being made of" a collection of others is that at asymptotic times it has large overlap with the multiparticle state of such a collection. However for many particles it doesn't overlap asymptotically with a single unique multiparticle state, so you have freedom in what you choose to say something is made of.
 
  • Like
Likes vanhees71 and dextercioby
  • #78
DarMM said:
This isn't really something confined to @A. Neumaier 's thermal interpretation. In interacting QFTs, particles only exist asymptotically in scattering processes. In the Standard Model, hydrogen is a state which (under scattering processes) can evolve to a state with large overlap with a proton-electron product state.

In QFT the only sense you can give to one particle "being made of" a collection of others is that at asymptotic times it has large overlap with the multiparticle state of such a collection. However for many particles it doesn't overlap asymptotically with a single unique multiparticle state, so you have freedom in what you choose to say something is made of.

But in non-relativistic QM we do have the concept of a single electron. In the thermal interpretation (for nonrelativistic QM) the claim is that there are no particles; that is puzzling.
 
  • #79
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
A manifestation of the electron field with a computable charge distribution, covering more or less the classical size of the atom.
ftr said:
But in non-relativistic QM we do have the concept of a single electron. In the thermal interpretation (for nonrelativistic QM) the claim is that there are no particles; that is puzzling.
The concept of a single electron is a convenient approximation of the more fundamental concept of the electron field from QED.

The nonexistence of single electrons inside a nonrelativistic multi-electron system can also be seen from the fact that on the Hilbert space of a multi-electron system (the space of antisymmetrized wave functions) there are no position operators for single electrons, while there are distribution-valued operators for the charge density at any space-time point.
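For instance, for an antisymmetrized ##N##-electron wave function ##\psi## (spin labels suppressed), the q-expectation of the charge density is the well-defined quantity
$$\langle\hat\rho(x)\rangle=-eN\int |\psi(x,x_2,\dots,x_N)|^2\,\mathrm{d}^3x_2\cdots \mathrm{d}^3x_N,$$
which refers to no individual electron, while multiplication by the position ##x_k## of the ##k##-th electron does not leave the antisymmetrized space invariant and thus defines no single-electron observable.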

Only in certain approximations, one can talk in some sense about single electrons. For example, in the Hartree-Fock approximation of an atom, one can talk about the outermost electron, namely the one whose energy is largest. This is possible because in this approximation, the total energy of an ##N##-electron system can be naturally decomposed into a sum of ##N## energies for single electrons.
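In Hartree-Fock this is made quantitative by Koopmans' theorem: with orbital energies ##\epsilon_1\le\dots\le\epsilon_N##, removing the outermost electron costs approximately
$$E_{\mathrm{ion}}\approx -\epsilon_N$$
(neglecting orbital relaxation), which is what gives the phrase "the outermost electron" its approximate meaning.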

In general, secondary concepts in physics are emergent approximate concepts arising from an appropriate approximate version of a more fundamental concept. Just as an atom has no temperature, but a macroscopic body has one.
 
  • Like
Likes dextercioby
  • #80
A. Neumaier said:
A manifestation of the electron field with a computable charge distribution, covering more or less the classical size of the atom.

In effect you are saying that the electron has a size. What is inside it? What is the charge distribution?
 
  • #81
ftr said:
In effect you are saying that the electron has a size. What is inside it? What is the charge distribution?
No. The electron field has a charge density concentrated in a small region of atom size if bound, of beam shape if fast-moving.
 
  • Like
Likes vanhees71 and dextercioby
  • #82
A. Neumaier said:
No. The electron field has a charge density concentrated in a small region of atom size if bound, of beam shape if fast-moving.

I am sorry, I did not get what you meant. I ask again: what is "charge density"? What gives rise to it? Moreover, the electron "cloud" surrounds the proton, so the electron "field" does not seem to be contiguous. Is it like a glass of water with the proton as an ice cube?
 
  • #83
How does the Ehrenfest-Tolman effect affect this?
 
Last edited:
  • #84
A. Neumaier said:
It means that there are additional correlation degrees of freedom: Take your observables to be fields; you get pair correlations of the fluctuations. Locally, via a Wigner transformation, this gives kinetic contributions, but if A and B refer to causally disjoint regions, say, you get nonlocal correlations, the beables needed to violate the assumptions of Bell's theorem.
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:
  1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. This makes these objects highly "rich" in terms of properties, for ##\langle\phi(t_1)\phi(t_2)\rangle## is not merely a statistic for the field value but actually a property itself, and so on for higher correlators.
  2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this): a 2-photon system is not simply "two photons", since it has non-local correlator properties neither of them possesses alone.
  3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is normally considered two spacelike separated photons is in fact a highly extended object.
  4. Stochastic features of QM are generated by the system interacting with the environment. Under certain assumptions (Markov, infinite limit) we can show the environment causes a transition from a system pure state to a probability distribution of system pure states, what is normally called "collapse". Standard Born-Markov stuff: the environment is essentially a reservoir in thermal equilibrium; under the Markov assumption it "forgets" information about the system, so information purely dissipates into the environment without transfer back to the system. The system is stochastically driven into a "collapsed" state. I'm not sure if this also requires the secular approximation (i.e. the system's isolated evolution ##H_S## acting on a much shorter time scale than the environmental influence ##H_{ES}##), but no matter (a standard sketch of this master equation is given at the end of this post).
Thus we may characterize quantum mechanics as the physics of property-rich non-reductive highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

As we remove these features, i.e. as systems become less environmentally sensitive, more reductive and less property-rich (so that certain properties become purely functions of others and properties of the whole are purely those of the parts) and more locally concentrated, we approach classical physics.
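As a sketch of the master-equation step mentioned in point 4 (standard Born-Markov material; conventions and operator names here are generic, not specific to the thermal interpretation papers): tracing out a Markovian environment leads to a Lindblad-type equation for the reduced state,
$$\dot\rho_S=-\frac{\mathrm{i}}{\hbar}[H_S,\rho_S]+\sum_k \gamma_k\left(L_k\rho_S L_k^\dagger-\tfrac{1}{2}\{L_k^\dagger L_k,\rho_S\}\right),$$
whose stochastic unravellings drive the system toward "collapsed" states while information dissipates irreversibly into the reservoir.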
 
Last edited:
  • #85
*now* said:
How does the Ehrenfest-Tolman effect affect this?
Please give a reference for discussion.
 
  • #86
ftr said:
I am sorry, I did not get what you meant. I ask again: what is "charge density"? What gives rise to it? Moreover, the electron "cloud" surrounds the proton, so the electron "field" does not seem to be contiguous. Is it like a glass of water with the proton as an ice cube?
What is informally viewed as an electron cloud or drawn as orbitals are aspects of the electron field extending over some region around the nuclei. Similarly, the nuclei, often modeled as points or in more detail as fluids, are aspects of the nucleon field, or on a more detailed level of the quark field.
 
  • Like
Likes dextercioby
  • #87
DarMM said:
Thus we may characterize quantum mechanics as the physics of property-rich, non-reductive, highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
 
Last edited:
  • #88
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
Well, even in classical relativistic physics "point particles are strangers", as Sommerfeld put it. The troubles with the point-particle concept became apparent from the very beginning of Lorentz's "Theory of Electrons". I'm not sure whether it's the first source, but already in 1916 the troubles with divergent self-energy became apparent in the attempt to find closed equations for the motion of charged point particles ("electrons") and the electromagnetic fields. The trouble has been attacked by some of the greatest physicists, like Dirac and Schwinger, with no real success. Today, as far as we know, the best one can do is to approximate the famous Abraham-Lorentz-Dirac equation even further, boiling it down to the Landau-Lifshitz equation, as can be found in their famous textbook series (vol. 2, among the best textbooks on classical relativistic field theory ever written).
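Schematically (with ##c=1##; signs and prefactors vary with conventions, so take this only as a sketch), the Abraham-Lorentz-Dirac equation for the four-velocity ##u^\mu## of a charge ##e## in an external field ##F^{\mu\nu}## reads
$$m\,\dot u^\mu = e\,F^{\mu\nu}u_\nu + \frac{2e^2}{3}\left(\ddot u^\mu + u^\mu\,\dot u^\nu\dot u_\nu\right),$$
and the Landau-Lifshitz prescription replaces ##\dot u^\mu## inside the radiation-reaction term by its lowest-order value ##(e/m)F^{\mu\nu}u_\nu##, which removes the third time derivative and with it the unphysical runaway solutions.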

Even in the classical regime the most natural way to describe the mechanics of charged particles is a continuum description like hydrodynamics or relativistic kinetic theory (aka the Boltzmann equation). One very practical application is the construction of modern particle accelerators like the FAIR accelerator here in Darmstadt, Germany, where the high-intensity particle bunches need a description taking into account not only the interaction between the particles ("space-charge effects") but also radiation losses; there a hydro simulation (i.e., a continuum description of the particles) leads to the conclusion that for the discrete-particle picture the Landau-Lifshitz approximation to the Abraham-Lorentz-Dirac equation suffices to describe the (accelerated) motion of charged particles, including the radiation-reaction forces.

The most fundamental theory we have today about "elementary particles" is the Standard Model of elementary-particle physics, which is based on relativistic, local (microcausal) quantum field theory (QFT). Here the trouble persists but is quite a lot milder. The early failed attempts to formulate a relativistic quantum mechanics clearly show that relativity needs a many-body description even if you start with only a few particles, as in the usual scattering experiments, where you consider reactions of two particles in the initial state. The reason is that at relativistic collision energies, i.e., where these energies come into the same order of magnitude as the masses (##\times c^2##, but I set ##c=\hbar=1##) of the lightest particles allowed to be created in the reaction (where "allowed" means not violating any of the empirically known conservation laws for energy, momentum, angular momentum, and several conserved charges), there is always some probability to create new particles and/or destroy the initial colliding particles.

In QFT the fundamental concepts are fields, as the name suggests. Field quantization was there from the very beginning of the development of modern quantum theory. Immediately after Heisenberg's ingenious insight during his hay-fever-enforced stay on Helgoland in the summer of 1925, his vague ideas were amazingly quickly worked out by Born and Jordan, and also by Heisenberg himself, into a formalism today known as "matrix mechanics", and already in one of these very early papers (the famous "Dreimännerarbeit" of Born, Heisenberg, and Jordan) everything was "quantized", i.e., not only the particles (electrons) but also the electromagnetic field. Ironically, at the time many physicists thought that also quantizing the em. field was "too much of a revolution", and it was considered unnecessary for a short while. The reason is simple: it is not so easy to see the necessity for field quantization at the lower energies available in atomic physics at that time. Although it was well known that for some phenomena a "particle picture for radiation", as proposed in Einstein's famous paper of 1905 on what we nowadays call "light quanta", can explain several phenomena (like the photoelectric effect and Compton scattering) more easily than the classical-field picture, for almost everything in atomic physics a treatment sufficed in which only the electrons were quantized, the interaction was described by electrostatics, and the radiation by classical electromagnetic fields. What was known at the time, however, was the necessity of "spontaneous emission": even if there is no radiation field present which could lead to induced emission, there must be some probability for an excited atomic state (i.e., an energy eigenstate of the electrons around a nucleus) to emit a photon. This was the only phenomenon at the time which could not be described by the semiclassical theory, where only the electrons are quantized but not the electromagnetic field. Everything else, including the photoelectric effect and Compton scattering as well as first applications to condensed-matter phenomena like the theory of dispersion of em. waves in matter, can be successfully described in the semiclassical approximation.

The idea of field quantization was rediscovered by Dirac in 1927, when he formulated the theory of emission and absorption of electromagnetic radiation in terms of annihilation and creation operators for photons, leading to the correct postdiction of spontaneous emission, which was needed to explain Planck's black-body radiation formula, the formula that started the entire quantum business in 1900. It was well known since Einstein's (also very famous) paper of 1917 on the quantum-kinetic derivation of the Planck spectrum within "old quantum mechanics" that spontaneous emission had to be postulated in addition to induced emission and absorption to get the correct Planck formula from kinetic considerations, but before Dirac there was no clear formalism for it.

Shortly thereafter, Heisenberg and Pauli (among others) formulated quantum electrodynamics, and the use of perturbation theory led to quite some success as long as one used only the lowest-order approximations (what we nowadays call the tree-level approximations, using the pictorial notation in terms of Feynman diagrams). But going to higher orders was plagued by the old demon of divergences known from the classical theory of radiation reaction, i.e., the interaction of charged particles with their own radiation fields, leading to the same self-energy divergences known already from classical theory, though less severe than in the classical case. The solution of the problem within perturbation theory was found in 1948, when Tomonaga, Schwinger, and Feynman developed their renormalization theory, largely triggered by the fact that the "radiative corrections", i.e., the higher-order corrections leading to divergences in naive perturbation theory, became measurable (particularly Lamb's discovery of a little shift in the fine structure of the hydrogen-atom spectrum, now named the "Lamb shift" after him). The final step within perturbative QFT, which proved crucial for the Standard Model, came in 1971, when 't Hooft and Veltman proved the perturbative renormalizability of Abelian as well as non-Abelian gauge theories to any order of perturbation theory.

The upshot of this long story is that the particle picture of subatomic phenomena is quite restricted. One cannot make true sense of the particle picture except for asymptotically free states: only when the quantum fields can be seen as essentially non-interacting does a particle interpretation in terms of Fock states (eigenstates of the particle-number operator) become sensible.

Particularly for photons a classical-particle picture, as envisaged by Einstein in his famous 1905 paper on "light quanta", carefully presented as only "a heuristic point of view", is highly misleading. There is not even a formal way to define a position operator in the narrow sense for massless quanta (as I prefer to say instead of "particles"). All we can calculate is the probability for a photon to hit a detector at the place where this detector is located.
 
  • Like
Likes dextercioby, *now*, ftr and 1 other person
  • #89
AlexCaledin said:
- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
I wouldn't say so. The thermal reservoir, the environment, is responsible for the stochastic nature of subsystems when you don't track the environment. However, it doesn't guide them like the Bohmian potential: it's not an external object of a different class/type from the particles, it's just another system. Also, it's not universal; the environment is just whatever external source of noise is relevant for the current system, e.g. air in the lab, or thermal fluctuations of the atomic structure of the measuring device.
 
  • #90
DarMM said:
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:
  1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. This makes these objects highly "rich" in terms of properties, for ##\langle\phi(t_1)\phi(t_2)\rangle## is not merely a statistic for the field value, but actually a property in itself, and so on for higher correlators.
  2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this), a 2-photon system is not simply "two photons" since it has non-local correlator properties neither of them possesses alone.
  3. From point 2 we may infer that quantum systems are in many cases highly extended objects. What is normally considered to be two spacelike separated photons is in fact a single highly extended object.
  4. Stochastic features of QM are generated by the system interacting with the environment. Under certain assumptions (Markov, infinite limit) we can show the environment causes a transition from a system pure state to a probability distribution of system pure states, what is normally called "collapse". Standard Born-Markov stuff: the environment is essentially a reservoir in thermal equilibrium, and under the Markov assumption it "forgets" information about the system, so information dissipates into the environment without transfer back to the system. The system is stochastically driven into a "collapsed" state. I'm not sure if this also requires the secular approximation (i.e. the system's isolated evolution ##H_S## is on a much shorter time scale than the environmental influence ##H_{ES}##), but no matter.
Thus we may characterize quantum mechanics as the physics of property-rich, non-reductive, highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

As we remove these features, i.e. make systems less environmentally sensitive, more reductive and less property-rich (so that certain properties become purely functions of others, and properties of the whole are purely those of the parts), and more locally concentrated, we approach Classical Physics.
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.

It's indeed clear that in the above-considered description of the two-photon Bell experiments the photons are not localizable in a classical sense; rather, the localization is through the localization of the detectors' "click events", which are well-defined macroscopic manifestations (plus the fundamental assumption of locality/microcausality, leading to the validity of the linked-cluster theorem for the QFT S-matrix).

Of course the q-expectation values somehow have to be heuristically introduced too, to make sense to a physicist, and I still don't see how this heuristics can be given without recourse to the standard probabilistic interpretation of the "state" (i.e., the statistical operator of the orthodox minimal interpretation), but as an axiomatized final formalism it makes perfect sense.
 
  • #91
DarMM said:
I wouldn't say so. The thermal reservoir, the environment, is responsible for the stochastic nature of subsystems when you don't track the environment. However, it doesn't guide them like the Bohmian potential: it's not an external object of a different class/type from the particles, it's just another system. Also, it's not universal; the environment is just whatever external source of noise is relevant for the current system, e.g. air in the lab, or thermal fluctuations of the atomic structure of the measuring device.
In addition, the most important point in contradistinction to Bohmian theory (which I think convincingly works only in the non-relativistic approximation) is that in the thermal interpretation (if I finally understand it right, as meant by @A. Neumaier, and as summarized in #84) there's no need for a Bohmian non-local description; one can use the standard description in terms of local relativistic QFTs without the need to develop a pilot-wave theory (which would be needed for fields rather than particles, I'd guess).
 
  • Like
Likes DarMM
  • #92
vanhees71 said:
Particularly for photons a classical-particle picture, as envisaged by Einstein in his famous 1905 paper on "light quanta", carefully presented as only "a heuristic point of view", is highly misleading. There is not even a formal way to define a position operator in the narrow sense for massless quanta (as I prefer to say instead of "particles"). All we can calculate is the probability for a photon to hit a detector at the place where this detector is located.
It's interesting that in Haag's book "Local Quantum Physics" and Steinmann's "Perturbative Quantum Electrodynamics and Axiomatic Field Theory" the notion of a detector operator or probe is introduced to give formal meaning to the particle concept, with an ##n##-particle state being a state that can activate at most ##n## such probes.
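Roughly (this is only a schematic rendering; the precise technical conditions are in the cited books), an Araki-Haag-type detector is a positive operator
$$C = B^*B, \qquad B\,\Omega = 0,$$
built from an almost-local operator ##B## annihilating the vacuum ##\Omega##, so that ##\langle\psi|C|\psi\rangle## registers "something is there" without reference to a position operator; a state then counts as an ##n##-particle state if suitably separated products of more than ##n## such detectors give vanishing expectation values on it.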
 
  • Like
Likes Peter Morgan, dextercioby and vanhees71
  • #93
AlexCaledin said:
- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
vanhees71 said:
In addition, the most important point in contradistinction to Bohmian theory (which I think convincingly works only in the non-relativistic approximation) is that in the thermal interpretation as summarized in #84 there's no need for a Bohmian non-local description; one can use the standard description in terms of local relativistic QFTs without the need to develop a pilot-wave theory.
There are similarities and differences:

The thermal interpretation is deterministic, and the nonlocal multipoint q-expectations ignored in approximate calculations are hidden variables accounting for the stochastic effects observed in the coarse-grained descriptions of the preparation and detection processes.

But there are no additional particle coordinates as in Bohmian mechanics that would need to be guided; instead, the particle concept is declared to be an approximation only.
 
Last edited:
  • Like
Likes AlexCaledin
  • #94
vanhees71 said:
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.
Yes, assuming I'm right of course! :nb)

I would say the major difference is that the q-expectations ##\langle A\rangle## are seen as actual quantities, not as averages of a quantity over an ensemble of results. So, for instance, ##\langle A(t)B(s) \rangle## isn't some kind of correlation between ##A(t)## and ##B(s)## but a genuinely new property. Also, these properties are fundamentally deterministic; there is no fundamental randomness, just lack of control of the environment.
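A familiar textbook example makes the point (stated here only as an illustration): for the ground state of a harmonic oscillator of mass ##m## and frequency ##\omega##, the two-time q-correlation of the position is
$$\langle q(t)\,q(s)\rangle = \frac{\hbar}{2m\omega}\,e^{-i\omega(t-s)},$$
which is complex, so it cannot be the statistical average of a product of two jointly measured real values; in the Thermal Interpretation it is simply read as a property of the system in its own right.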
 
  • #95
I guess @A. Neumaier will tell us. Why the heck hasn't he written this down in a 20-30 page physics paper rather than with so much text obviously addressed to philosophers? (It's not meant in as bad a way as it may sound ;-))).
 
  • #96
vanhees71 said:
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.
It is intended to be precisely the latter, without the partially misleading probabilistic underpinning in the foundations that gave rise to nearly a century of uneasiness and dispute.
Part III said:
The thermal interpretation is inspired by what physicists actually do rather than what they say. It is therefore the interpretation that people actually work with in the applications (as contrasted to work on the foundations themselves), rather than only paying lip service to it.
DarMM said:
Yes, assuming I'm right of course!
vanhees71 said:
I guess @A. Neumaier will tell us. Why the heck hasn't he written this down in a 20-30 page physics paper
It is partially right, but a number of details need correction. I have little time today and tomorrow, and will reply on Sunday afternoon.
 
Last edited:
  • Like
Likes dextercioby, vanhees71 and DarMM
  • #97
No rush! At least I'm right to first order; I await the nonperturbative corrections!
 
  • Like
Likes vanhees71
  • #98
vanhees71 said:
The upshot of this long story is that the particle picture of subatomic phenomena is quite restricted.
Thank you for the long post. I am aware of what you wrote, but your summary is very good.
 
  • #99
vanhees71 said:
The meaning of your interpretation gets more and more enigmatic to me.

In the standard interpretation the possible values of observables are given by the spectral values of self-adjoint operators. To find these values you'd have to measure, e.g., the energy precisely. This is a fiction, of course. It's even a fiction in classical physics, because real-world measurements are always uncertain, and that's why we need statistics from day one in the introductory physics lab to evaluate our experiments. Quantum theory has nothing to do with these uncertainties of real-world measurements.

At the same time, you say within your thermal interpretation the very same about measurements that I express within the standard interpretation. As long as the meaning of q-averages is not clarified, I cannot even understand the difference between the statements. That's the problem.
The meaning is enigmatic only when viewed in terms of the traditional interpretations, which look at the same matter in a very different way.$$\def\<{\langle} \def\>{\rangle}$$
Given a physical quantity represented by a self-adjoint operator ##A## and a state ##\rho## (of rank 1 if pure),
  • all traditional interpretations give the same recipe for computing a number of possible idealized measurement values, the eigenvalues of ##A##, of which one is measured exactly (according to most formulations) or approximately (according to your cautious formulation), with probabilities computed from ##A## and ##\rho## by another recipe, Born's rule (in its probability form), while
  • the thermal interpretation gives a different recipe for computing a single possible idealized measurement value, the q-expectation ##\<A\>:=\mathrm{Tr}\,\rho A## of ##A##, which is approximately measured.
  • In both cases, the measurement involves an additional uncertainty related to the degree of reproducibility of the measurement, given by the standard deviation of the results of repeated measurements.
  • Tradition and the thermal interpretation agree in that this uncertainty is at least ##\sigma_A:=\sqrt{\<A^2\>-\<A\>^2}## (which leads, among others, to Heisenberg's uncertainty relation).
  • But they make very different assumptions concerning the nature of what is to be regarded as an idealized measurement result (see the small numerical illustration after this list).
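To make the contrast concrete, here is a minimal numerical sketch (an editorial illustration; the observable and state are arbitrary choices) of the two recipes for a spin-1/2 system:
Code:
import numpy as np

# Observable S_x for a spin-1/2 (units hbar = 1) and an arbitrary pure state
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)
rho = np.outer(psi, psi.conj())   # rank-1 density matrix

# Traditional recipe: the possible idealized values are the eigenvalues of A,
# realized with Born probabilities p_k = <e_k|rho|e_k>.
vals, vecs = np.linalg.eigh(sx)
probs = [float(np.real(v.conj() @ rho @ v)) for v in vecs.T]
print("eigenvalues:", vals, "Born probabilities:", np.round(probs, 4))

# Thermal-interpretation recipe: the single idealized value is <A> = Tr(rho A),
# with intrinsic uncertainty at least sigma_A = sqrt(<A^2> - <A>^2).
expA = float(np.real(np.trace(rho @ sx)))
expA2 = float(np.real(np.trace(rho @ sx @ sx)))
sigma = np.sqrt(expA2 - expA**2)
print("q-expectation:", round(expA, 4), "uncertainty sigma_A:", round(sigma, 4))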
That quantities with large uncertainty are erratic in measurement is nothing special to quantum physics but very familiar from the measurement of classical noisy systems. The thermal interpretation asserts that all uncertainty is of this kind, and much of my three papers is devoted to arguing why this is indeed consistent with the assumptions of the thermal interpretation.

Now it is experimentally undecidable what an "idealized measurement result" should be, since only actual results are measured, not idealized ones.

What to consider as idealized version is a matter of interpretation. What one chooses determines what one ends up with!

As a result, the traditional interpretations are probabilistic from the start, while the thermal interpretation is deterministic from the start.

The thermal interpretation has two advantages:
  • It assumes at the level of the postulates less technical mathematics (no spectral theorem, no notion of eigenvalue, no probability theory).
  • It allows one to make definite statements about each single quantum system, no matter how large or small it is.
 
  • #100
So what is a wavefunction?
 
