The thermal interpretation of quantum physics

  • #61


akhmeteli said:
@A. Neumaier: A quote from your work: " When a particle has been prepared in an ion trap (and hence is there with certainty), Born’s rule implies a tiny but positive probability that at an arbitrarily short time afterwards it is detected a light year away"

I guess this is only true if one assumes nonrelativistic equation of motion?
A. Neumaier said:
It is true in quantum mechanics, not in quantum field theory. Note that quantum mechanics has no consistent relativistic particle picture, except in the free case. Thus an atom in an ion trap cannot be consistently modeled by (fully) relativistic quantum mechanics.

But for a free particle, if one knew the position at one time to be located in a small compact region of space, it could the next moment be almost everywhere with a nonzero probability.
akhmeteli said:
So it looks like the statement I quoted is indeed true only for nonrelativistic quantum mechanics, if the atom in a trap cannot be modeled by relativistic quantum mechanics, without using quantum field theory.
A. Neumaier said:
The statement about the ion trap, yes; but an analogous statement about a free relativistic particle somehow prepared at time ##t'>t## in a small region of spacetime suffers from the same problem.
akhmeteli said:
Could you please give a reference?
A. Neumaier said:
I don't have a reference; this seems to have not been considered before. When you work out the solution in terms of the Fourier transform you get for ##\psi(x,t+x_0)## a convolution of ##\psi(x,t)## (assumed to have compact support) with a function that does not have causal support.
Your reasoning is not convincing at all (at least not until you provide more details). So far I cannot accept your statement for a free relativistic particle, and the reasoning is as follows. As far as I know, the retarded Green's function for the Klein-Gordon operator has support within the future light cone (including its boundary) (see, e.g., https://books.google.com/books?id=ttuO8-_D_oUC&pg=PA173). It satisfies the Klein-Gordon equation outside the source, for example, for ##t>0##. So the function has compact support at ##t=1##, evolves in accordance with the Klein-Gordon equation between ##t=1## and ##t=2##, and has compact support at ##t=2##.
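For concreteness, here is a minimal numerical sketch of the Fourier-transform argument quoted above (my illustration, not part of the original exchange; the mass and grid parameters are assumptions): a compactly supported packet is evolved with the positive-frequency relativistic dispersion, and probability weight appears outside the light cone.

```python
import numpy as np

# Evolve a compactly supported wave packet with the positive-frequency
# relativistic dispersion w(p) = sqrt(p^2 + m^2) (units hbar = c = 1,
# assumed test mass m = 1) and look for weight outside the light cone.
m, L, N = 1.0, 200.0, 4096
dx = L / N
x = np.linspace(-L/2, L/2, N, endpoint=False)
p = 2*np.pi*np.fft.fftfreq(N, d=dx)

# Initial wave function with compact support in |x| < 1.
psi0 = np.where(np.abs(x) < 1.0, np.cos(np.pi*x/2.0)**2, 0.0).astype(complex)
psi0 /= np.sqrt((np.abs(psi0)**2).sum()*dx)

t = 1.0
psi_t = np.fft.ifft(np.exp(-1j*np.sqrt(p**2 + m**2)*t) * np.fft.fft(psi0))

# Weight outside the light cone |x| <= 1 + t: small but clearly nonzero,
# reflecting the acausal support of the convolution kernel.
outside = np.abs(x) > 1.0 + t
print((np.abs(psi_t[outside])**2).sum()*dx)
```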
 
  • #62
akhmeteli said:
So far I cannot accept your statement for a free relativistic particle, and the reasoning is as follows. As far as I know, the retarded Green's function for the Klein-Gordon operator has support within the future light cone (including its boundary) (see, e.g., https://books.google.com/books?id=ttuO8-_D_oUC&pg=PA173&lpg=PA173&dq=klein+gordon+retarded+green+function+light+cone&source=bl&ots=24Z2Z4hYeD&sig=ACfU3U1ajzmVFBVlS53NpibBGXJVDovgHA&hl=en&sa=X&ved=2ahUKEwjN3_Szv-zgAhVPZawKHdEaBe04ChDoATAFegQICRAB#v=onepage&q=klein gordon retarded green function light cone&f=false). It satisfies the Klein-Gordon equation outside the source, for example, for ##t>0##. So the function has compact support at ##t=1##, evolves in accordance with the Klein-Gordon equation between ##t=1## and ##t=2##, and has compact support at ##t=2##.
I agree that the retarded Green's functions and their linear combinations are causal. They form a representation of the physical Hilbert space of the electron.

However, in this representation (for fixed time ##t##), ##|\psi(x,t)|^2## does not have a position probability interpretation! The reason is that multiplication by ##x## is not an operator on a dense subspace of this Hilbert space: it introduces negative-energy frequencies! Therefore having compact support in a region ##C## in this representation cannot be interpreted as being localized in ##C##.

To get a probability interpretation you need a valid 3D position operator with commuting components. This is the Newton-Wigner operator. See the discussion in the item ''Particle positions and the position operator'' from my Theoretical Physics FAQ, and the remarks following https://www.physicsforums.com/posts/6136475/
If you transform to a representation in which the Newton-Wigner operator is diagonal you get a transformed wave function with a probability interpretation. But in this representation, relativistic causality is lost - since the Newton-Wigner operator is observer dependent.
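For concreteness, the standard free-particle form of this transformation reads as follows (my paraphrase of the usual Newton-Wigner construction, with signs and factors depending on the normalization conventions used): in the momentum representation with covariant inner product ##\langle\phi|\psi\rangle=\int \frac{d^3p}{2\omega_{\mathbf{p}}}\,\overline{\phi(\mathbf{p})}\psi(\mathbf{p})##, the Newton-Wigner wave function and position operator are
$$\psi_{\rm NW}(\mathbf{p})=\frac{\psi(\mathbf{p})}{\sqrt{2\omega_{\mathbf{p}}}},\qquad \omega_{\mathbf{p}}=\sqrt{\mathbf{p}^2+m^2},\qquad \mathbf{x}_{\rm NW}=\mathrm{i}\,\frac{\partial}{\partial\mathbf{p}},$$
and ##|\psi_{\rm NW}(\mathbf{x},t)|^2## carries the Born probability interpretation. The factor ##\sqrt{2\omega_{\mathbf{p}}}## is nonlocal in position space, which is one way to see why causal support and a probability interpretation cannot coexist in a single representation.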
 
  • #63
DarMM said:
  1. Do you have a physical picture for ##\mathbb{L}^{*}##, the dual of the Lie algebra of q-expectations? I mean simply: what is it/how do you imagine it physically? Just to get a better sense of the Hamiltonian dynamics.
  2. What is the significance of ##\mathbb{L}^{*}## not being symplectic? Note that for both these questions I know the mathematical theory; it's easy to show ##\mathfrak{g}^{*}## is a Poisson manifold for a Lie algebra ##\mathfrak{g}##. I'm more looking for the physical significance in the thermal interpretation.
  3. Should I understand ##\mathbb{L}## formally, i.e. the algebra of expectation "symbols" as such, not the algebra of expectations of a specific state ##\rho##? In other words it isn't truly ##\mathbb{L}_{\rho}##
Consider first the Lie *-algebra ##\mathbb{L}## of smooth functions ##f(p,q)## on classical phase space, with the negative Poisson bracket as Lie product and complex conjugation as *. The Lie *-algebra can be partially ordered by defining ##f\ge 0## iff ##f## takes values in the nonnegative reals. A state is a (nice enough) monotone *-linear functional on ##\mathbb{L}##, hence an element of the dual ##\mathbb{L}^{*}##. A general element of ##\mathbb{L}^{*}## may therefore be considered as a ''complex state'', in the same sense as one can generalize measures to complex measures.

Essentially the same holds in the quantum case for the Lie *-algebra of q-expectation symbols (as you observed). In abstract terms it is by definition isomorphic to the Lie *-algebra of linear operators ##A## on a nuclear space (in QM, with the quantum Lie product and taking adjoints as *; in QFT, a more complicated Lie *-algebra - the traditional ##C^*##-algebraic setting of Haag is not quite appropriate, as it doesn't contain the most relevant physical observables, which are unbounded), with the partial order induced by defining ##A\ge 0## iff ##A## is Hermitian and positive semidefinite. States are again (nice enough) monotone linear functionals. They turn the q-expectation symbols into actual q-expectations (i.e., complex numbers). Thus states are again the most well-behaved elements of ##\mathbb{L}^{*}##.

This should answer 1 and 3. As to 2, a nonsymplectic Poisson manifold can (in finite dimensions) be foliated into symplectic leaves, often characterized by specific values of Casimir operators (i.e., elements in the Lie-Poisson algebra whose Lie product with everything vanishes). The actual Hamiltonian dynamics happens on one of these symplectic leaves, since all Casimirs are conserved. In infinite dimensions (needed already for a single thermal oscillator), this still holds, though in a less rigorous sense.

A simple example is ##\mathbb{R}^3## with the cross product as Lie product. It is isomorphic to ##so(3)## and describes in this representation a rigid rotator. ##\mathbb{L}## is spanned by the three components of ##J##, and the functions of ##J^2## are the Casimir operators. Assigning to ##J## a particular 3-dimensional vector gives the classical angular momentum in a particular state. The Lie *-algebra is the corresponding complexification, hence strictly speaking it is ##\mathbb{C}^3##.

The same Lie algebra is also isomorphic to ##su(2)##, the Lie algebra of traceless Hermitian ##2\times 2## matrices, and then describes (in complexified form) the thermal setting of a single qubit. In this case, we think of an element of ##\mathbb{L}^{*}## as mapping the three Pauli matrices ##\sigma_j## to three numbers ##S_j##, extended linearly to the whole Lie algebra. Augmented by ##S_0=1## to account for the identity matrix, which extends the Lie algebra to that of all Hermitian matrices, this leads to the classical description of the qubit discussed in Subsection 3.5 of Part III. (Note the misprints there: all ##SS## should be bold ##\mathbf{S}##; there must be a macro problem in the arXiv version!)
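As a concrete toy illustration of the rigid-rotator example (my own sketch, not from the papers; the linear Hamiltonian ##H=B\cdot J## and all numbers are assumptions, and signs depend on the bracket convention):

```python
import numpy as np

# Lie-Poisson dynamics dJ/dt = J x B on R^3 with the cross product as Lie
# product, for an assumed linear Hamiltonian H = B.J. The Casimir J.J has
# vanishing Lie product with everything, so the exact flow stays on a
# sphere -- one symplectic leaf of the (nonsymplectic) Poisson manifold.
B = np.array([0.0, 0.0, 1.0])      # assumed "field" defining H = B.J
J = np.array([1.0, 0.0, 0.5])      # a particular state's value of J
dt = 1e-4
for _ in range(62832):             # one precession period, t = 2*pi
    J = J + dt*np.cross(J, B)      # explicit Euler step of dJ/dt = J x B
print(J, J @ J)                    # J precessed about B; J.J stays ~1.25
```

Up to the ##O(dt)## error of the explicit Euler step, ##J\cdot J## is conserved, so the trajectory never leaves its symplectic leaf.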
 
  • #64
DarMM said:
I appreciate the construction point, but since your interpretation uses insights from AQFT (quite rightly; the reference to Yngvason is quite refreshing - I have often wondered how Many Worlds would deal with that result), would the "compactness criterion" of Haag, Swieca, Wichmann and Buchholz be of any relevance?
I don't know. Much of algebraic QFT is for my taste far too abstract, and I cannot easily read papers on the subject. I just borrowed the simplest aspects, in as far as I found them useful.
DarMM said:
Haag & Swieca proposed that the space of states on a local algebra ##\mathcal{A}\left(\mathcal{O}\right)## with energy below a threshold ##E## should be finite dimensional. [...]
So there is a chance that for QFT infinite-dimensional Hilbert spaces are just unphysical idealizations like pure states.
No. For a satisfactory interpretation, one needs all energies, not only those below some threshold. The contributions of the arbitrarily high energies (with their associated arbitrarily high frequencies) are precisely what makes thermal physics dissipative and hence realistic, and what gives rise to the stochastic aspects of quantum physics!
 
  • #65
vanhees71 said:
The physicist, however, needs a relation of the symbols and mathematical notions to observations in the lab. That's what's called interpretation. As with theory and experiment (theory is needed to construct measurement devices for experiments, which might lead to observations that contradict the very theory; then the theory has to be adapted, and new experiments can be invented to test its consequences and consistency, etc.), the interpretation too is needed already for model building.
Well, I told you how to interpret ##\langle A\rangle## for macroscopic q-observables ##A## in terms of a single measurement of a piece of matter in equilibrium, but this didn't reach your understanding. I also told you that Subsections 3.3-3.4 of Part II spell out conditions under which ##\langle A\rangle## can be viewed as a sample average, but you apparently didn't even read it. You simply don't care about how I want things to be interpreted!
vanhees71 said:
Now, I don't understand why I cannot interpret your q-expectations in the usual way, as probabilistic expectation values.
Because then you get your minimal interpretation and not the thermal interpretation. You cannot interpret one interpretation in terms of another nonequivalent one! That you try to do this rather than trying to understand the thermal interpretation in its own terms is the reason why in this thread we practically always talk past each other.
vanhees71 said:
Then you may argue to work in the position representation to begin with, and then the above considerations indeed lead to the operators of the "fundamental observables" position and momentum:
$$\hat{p} \psi(t,x) =-\mathrm{i} \partial_x \psi(t,x),$$
and the time-evolution equation (aka Schrödinger equation)
$$\mathrm{i} \partial_t \psi(t,x)=\hat{H} \psi(t,x).$$
Ok, but now, not having the Born interpretation (for the special case of pure states and precise measurements) at hand, I don't know how to get the connection with real-world experiments.
I get it in the same informal way as in the classical case, where there is no Born interpretation but we still know how to measure the approximate position and momentum of a particle. In both the classical case and the quantum case we measure the position and the momentum (knowing how this is done from experience with lab experiments) and get an approximation for their values. That's it! Your minimal interpretation is that you get in this way an approximation of an eigenvalue; my thermal interpretation is instead that you get an approximation of the q-expectation. Both are compatible with experiment, although quite different in their theoretical implications!

Most interpretations even claim that one gets an exact eigenvalue. But this contradicts experiment: the energy levels of atoms and molecules are only approximately known, though they are given exactly by the eigenvalues of the Hamiltonian ##H##, supposedly the only possible results of measurements of the - suitably normalized - energy. And ##H## is the most important ''observable'' in statistical mechanics!

vanhees71 said:
However, I don't see how you make contact with these clearly existing macroscopic "traces" of the microworld, enabling us to get quantitative knowledge about the microscopic entities we call, e.g., electrons or ##\alpha## particles
The thermal interpretation says that particles are a fiction (which may be appropriate under special circumstances). In reality you have beams (states of the electron field, an effective alpha-particle field, etc., concentrated along a small neighborhood of a mathematical curve) with approximately known properties (charge densities, spin densities, energy densities, etc.). If you place a detector into the path of a beam you measure these densities - accurately if the densities are high, erratically and inaccurately when they are very low. This is very close to experimental practice; how could it be closer?
vanhees71 said:
Well, but you need this probabilistic interpretation before you can derive hydro from the formalism. If not, I've obviously not realized where and how this crucial step is done within your thermal interpretation.
No. You only need the 1PI formalism, which nowhere talks about probabilities. It uses q-expectations throughout, nothing else!
vanhees71 said:
It was about the Green's function in QFT or field correlators like $$\mathrm{i} G^{>}(x,y)=\mathrm{Tr} \hat{\rho} \hat{\phi}(x) \hat{\phi}(y)$$. Of course, that's not an expectation value of anything observable.
Thus you use expectation terminology and notation (i.e., q-expectations) for something that is not an expectation value of anything, and you get useful results that you can later interpret in the right context in terms of experimental cross sections, etc. The thermal interpretation just does this consistently, observing that in almost everything done in quantum mechanics and quantum field theory, only q-expectations are computed and worked with, and the experimental interpretation comes only at the very end!

Sometimes the experiment involves stochastic data (counts of events of certain kinds, many low-accuracy measurements) and the theoretical result is interpreted as a probability or sample mean. In many other cases, the experiment involves just a few measurements - for example, of temperature, pressure, and mass, or of spectral lines and spectral widths - and the theoretical result is interpreted without invoking any probability or statistics.

Therefore there is no need at all to put the statistical/probabilistic stuff into the foundations of quantum physics. As it always was before the advent of quantum mechanics, statistics and probability are experimental techniques for producing reproducible information from nonreproducible (and thus noisy) measurements; nothing more!
 
  • #66
A. Neumaier said:
Most interpretations even claim that one gets an exact eigenvalue. But this contradicts experiment: The energy levels of atoms and molecules are only approximately known though they are given exactly by the eigenvalues of the Hamiltonian H, supposedly the only possible results of measurements of the - suitably normalized - energy. And H is the most important ''observable'' in statistical mechanics!
Your lecture notes on statistical mechanics, revised yesterday (p. 20 in the version of 5 March 2019), are a little more cautious in formulating the traditional Born rule:
Hendrik van Hees said:
A possible result of a precise measurement of the observable ##O## is necessarily an eigenvalue of the corresponding operator ##\hat{O}##
With this formulation, my argument only shows that there are no ''precise measurements'' of energy.

But then with your foundations, the whole of statistical mechanics hangs in the air because these foundations are too imprecise!

You seem to interpret the total energy in statistical thermodynamics as a mean of somehow measured energies of the zillions of atoms in the macroscopic body.
vanhees71 said:
This is the formal description of an "ensemble average" in the sense that one averages over the microscopic fluctuations by just "blurring" the observation to the accuracy/resolution of typical macroscopic time and space scales, and thus "averaging" over all fluctuations at the microscopic space-time scales.
But your postulates in the lecture notes apply (as stated) only to measurements, not to unmeasured averages over unobserved fluctuations. Thus it seems that you assume that a body in equilibrium silently and miraculously performs ##10^{23}## measurements and averages these. But how are these measured? How often? How long does it take? Where are the recorded measurement results? What is the underlying notion of measurement? And how do these surely very inaccurate and tiny measurements result in a highly accurate q-expectation value? Where is an associated error analysis guaranteeing the observed accuracy of the total energy measured by the thermal engineer?

You cannot seriously assume these zillions of measurements. But then you cannot conclude anything from your postulates, which are explicitly about measured stuff.

Or are they about unmeasured stuff? But then it is not a bridge to the observed world, and the word 'measurement' is just pretense that it were so.

The thermal interpretation has no such problems! It only claims that the q-expectation is approximately measured when it is known to be measured and a measurement result is obtained by the standard measurement protocols.
 
  • #67
A. Neumaier said:
No. For a satisfactory interpretation, one needs all energies, not only those below some threshold. The contributions of the arbitrarily high energies (with their associated arbitrarily high frequencies) are precisely what makes thermal physics dissipative and hence realistic, and what gives rise to the stochastic aspects of quantum physics!
Could you explain this a bit more? Surely a finite subregion of spacetime contains a maximum energy level, and the compactness criterion is known to be valid for free fields (as is the nuclearity condition); generally in AQFT it is considered that the Hilbert space of states in a finite subregion is finite-dimensional, as this condition implies a sensible thermodynamics and an asymptotic particle interpretation.

I appreciate how dissipation allows a realist account of the stochastic nature of QM in your interpretation (based on the lucid account in section 5.2 of Paper III), so no argument there. I'm simply wondering about the need for infinite-dimensional Hilbert spaces in finite spacetime volumes.
 
  • #68
A. Neumaier said:
Consider first ...which extends the Lie algebra to that of all Hermitian matrices, this leads to the classical description of the qubit discussed in Subsection 3.5 of Part III. (Note: misprints there: all ##SS## should be bold ##\mathbf{S}##; there must be a macro problem in the arXiv version!)
Thank you, this is very clear!

So, a separate question: for, let's say, a two-system state ##\rho_{AB}## with reduced density matrices ##\rho_A## and ##\rho_B##, where we have two observables ##\mathcal{O}_A## and ##\mathcal{O}_B##, we can obviously have:
$$\rho_{AB}\left(\mathcal{O}_A\mathcal{O}_B\right) \neq \rho_A\left(\mathcal{O}_A\right)\rho_B\left(\mathcal{O}_B\right)$$
(Obvious abuse of notation here where on the left hand side what is labelled ##\mathcal{O}_A## is really ##\mathcal{O}_A \otimes \mathbb{I}_{B}##)

In most "probabilistic interpretations" this is simply correlation. However if ##\langle \mathcal{O}_A\mathcal{O}_B\rangle_{\rho_{AB}}## is an ontic property of the total system what does it mean for it not to simply be the product of the single system ontic properties ##\langle \mathcal{O}_A \rangle_{\rho_A}## and ##\langle \mathcal{O}_B \rangle_{\rho_B}##?
 
  • #69
A. Neumaier said:
I agree that the retarded Green's functions and their linear combinations are causal. They form a representation of the physical Hilbert space of the electron.

However, in this representation (for fixed time ##t##), ##|\psi(x,t)|^2## does not have a position probability interpretation! The reason is that multiplication by ##x## is not an operator on a dense subspace of this Hilbert space: it introduces negative-energy frequencies!
Well, this value ##|\psi(x,t)|^2## cannot be a probability density for Klein-Gordon for a different reason: it is not the temporal component of the current. However, ##\bar{\psi}\gamma^0\psi## can be a probability density for the Dirac equation. Your argument against that is about negative energy; therefore, it is based on the fact that there is no consistent one-particle interpretation of the Dirac equation, either free or not (in one of your previous posts you seemed to suggest that using holes is OK for free Dirac, but as soon as you mention holes you don't have a one-particle theory). Therefore, the free Dirac equation also has a serious problem. As I said, you cannot fault Born's rule for having a problem with a problematic equation.
 
  • #70
DarMM said:
Let's say a two-system state ##\rho_{AB}## with reduced density matrices ##\rho_A## and ##\rho_B##, where we have two observables ##\mathcal{O}_A## and ##\mathcal{O}_B##; we can obviously have:
$$\rho_{AB}\left(\mathcal{O}_A\mathcal{O}_B\right) \neq \rho_A\left(\mathcal{O}_A\right)\rho_B\left(\mathcal{O}_B\right)$$
(Obvious abuse of notation here where on the left hand side what is labelled ##\mathcal{O}_A## is really ##\mathcal{O}_A \otimes \mathbb{I}_{B}##)

In most "probabilistic interpretations" this is simply correlation. However if ##\langle \mathcal{O}_A\mathcal{O}_B\rangle_{\rho_{AB}}## is an ontic property of the total system what does it mean for it not to simply be the product of the single system ontic properties ##\langle \mathcal{O}_A \rangle_{\rho_A}## and ##\langle \mathcal{O}_B \rangle_{\rho_B}##?
It means that there are additional correlation degrees of freedom:

If you take your observables to be fields, you get pair correlations of the fluctuations. Locally, via a Wigner transformation, this gives kinetic contributions; but if ##A## and ##B## refer to causally disjoint regions, say, you get nonlocal correlations - the beables needed to violate the assumptions of Bell's theorem.
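To see the gap between the joint q-expectation and the product of marginals concretely, here is a minimal numerical check (my addition; the singlet state is merely the standard example):

```python
import numpy as np

# For the singlet state, <Oz (x) Oz> differs from the product of marginals.
sz = np.diag([1.0, -1.0])                       # Pauli sigma_z
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)      # singlet |01> - |10>
rho_AB = np.outer(psi, psi.conj())

OzOz = np.kron(sz, sz)
print(np.trace(rho_AB @ OzOz).real)             # -1.0: perfect anticorrelation

# Reduced states via partial traces over B and A respectively.
rho4 = rho_AB.reshape(2, 2, 2, 2)               # indices [a, b, a', b']
rho_A = np.einsum('ikjk->ij', rho4)             # trace over B
rho_B = np.einsum('kikj->ij', rho4)             # trace over A
print(np.trace(rho_A @ sz).real * np.trace(rho_B @ sz).real)  # 0.0
```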
 
  • #71
DarMM said:
Could you explain this a bit more? Surely a finite subregion of spacetime contains a maximum energy level, and the compactness criterion is known to be valid for free fields (as is the nuclearity condition); generally in AQFT it is considered that the Hilbert space of states in a finite subregion is finite-dimensional, as this condition implies a sensible thermodynamics and an asymptotic particle interpretation.

I appreciate how dissipation allows a realist account of the stochastic nature of QM in your interpretation (based on the lucid account in section 5.2 of Paper III), so no argument there. I'm simply wondering about the need for infinite-dimensional Hilbert spaces in finite spacetime volumes.
Unbounded space and unbounded energy are needed to make dissipation possible!

Classically it ensures, for example, that Poincaré's recurrence theorem cannot be applied. I don't know what the right quantum analogue should be.

I don't know yet the precise mechanism that could rigorously lead to dissipation. The common wisdom is to employ the thermodynamic limit and an associated phase transition, but this limit is an idealization that is unlikely to be the full truth.

Thus there are many interesting open questions with significant mathematical challenges. In my opinion, these are much more important than proving or analyzing no-go theorems that assume that the Born rule is an exact law of Nature.
 
  • #72
What exactly does "exact" mean, when applied to a probabilistic rule?
 
  • #73
AlexCaledin said:
What exactly does "exact" mean, when applied to a probabilistic rule?
Exact refers to the facts
  1. that the possible measurement values are the exact eigenvalues (according to most interpretations),
  2. that theoretical conclusions are drawn on the level of probability theory (which is exact, except for its application to reality), and
  3. that the probabilities follow exactly the law of large numbers (when compared with experiment).
 
  • #74
Thank you... So exact rules can exist only in mathematical models.
 
  • #75
A. Neumaier said:
Your lecture notes on statistical mechanics, revised yesterday (p. 20 in the version of 5 March 2019), are a little more cautious in formulating the traditional Born rule:

With this formulation, my argument only shows that there are no ''precise measurements'' of energy.

But then with your foundations, the whole of statistical mechanics hangs in the air because these foundations are too imprecise!

You seem to interpret the total energy in statistical thermodynamics as a mean of somehow measured energies of the zillions of atoms in the macroscopic body.

But your postulates in the lecture notes apply (as stated) only to measurements, not to unmeasured averages over unobserved fluctuations. Thus it seems that you assume that a body in equilibrium silently and miraculously performs ##10^{23}## measurements and averages these. But how are these measured? How often? How long does it take? Where are the recorded measurement results? What is the underlying notion of measurement? And how do these surely very inaccurate and tiny measurements result in a highly accurate q-expectation value? Where is an associated error analysis guaranteeing the observed accuracy of the total energy measured by the thermal engineer?

You cannot seriously assume these zillions of measurements. But then you cannot conclude anything from your postulates, which are explicitly about measured stuff.

Or are they about unmeasured stuff? But then it is not a bridge to the observed world, and the word 'measurement' is just pretense that it were so.

The thermal interpretation has no such problems! It only claims that the q-expectation is approximately measured when it is known to be measured and a measurement result is obtained by the standard measurement protocols.
The meaning of your interpretation gets more and more enigmatic to me.

In the standard interpretation the possible values of observables are given by the spectral values of self-adjoint operators. To find these values you'd have to measure energy precisely. This is a fiction of course. It's even a fiction in classical physics, because real-world measurements are always uncertain, and that's why we need statistics from day one in the introductory physics lab to evaluate our experiments. Quantum theory has nothing to do with these uncertainties of real-world measurements.

At the same time, you say the very same things about measurements within your thermal interpretation that I express within the standard interpretation. As long as the meaning of q-averages is not clarified, I cannot even understand the difference between the statements. That's the problem.

In another posting you claim I'd not have read Section 3.3. I have read it, but obviously it did not convey to me what you really wanted to say, because already at the very beginning I cannot make any sense of the words without the standard probabilistic interpretation of the meaning of the trace formula. That's the meaning the Ehrenfest theorem has in QT. I have no clue what you mean by "Ehrenfest picture". I know the Schrödinger, the Heisenberg, and the general Dirac picture, but that's something completely different. Misunderstanding a text is not always, and never solely, the fault of the reader...

As I'd already suggested in a private e-mail conversation, for me your thermal interpretation is not different from the standard interpretation as expressed by van Kampen in the following informal paper:

https://doi.org/10.1016/0378-4371(88)90105-7

There's also no problem with single measurements in the standard representation. The definite reading of a measurement apparatus's pointer is due to the coarse graining of the reading: the macroscopic pointer position is an average over many fluctuations over macroscopically small but microscopically huge times, the fluctuations being invisible to us within the resolution of the reading.

A classical analogue is the definite reading of a galvanometer measuring a rectified AC current. The inertia of the pointer leads to an effective time-averaging over the fluctuating current, leading to the "effective current" (via appropriate gauging of the scale). For the unrectified AC current the same setup gives a zero reading of the galvanometer through the same "averaging process".
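In numbers (a quick sketch of this analogy, added for concreteness; the 50 Hz sine current is an assumed example):

```python
import numpy as np

# Inertial time-averaging of a rectified AC current gives a steady nonzero
# "effective current", while the raw AC current averages to zero.
t = np.linspace(0.0, 1.0, 100001)        # one second of an assumed 50 Hz signal
i_ac = np.sin(2*np.pi*50*t)              # fluctuating current
print(np.mean(np.abs(i_ac)))             # rectified: ~2/pi ~ 0.637
print(np.mean(i_ac))                     # unrectified: ~0
```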

Averaging in the standard representation of QT is not necessarily the repetition of a measurement in the sense of a Gibbs ensemble!
 
  • #76
A. Neumaier said:
The thermal interpretation says that particles are a fiction

So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
 
  • #77
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
This isn't really something confined to @A. Neumaier 's thermal interpretation. In interacting QFTs particles only exist asymptotically in scattering processes. In the Standard Model, hydrogen is a state which (under scattering processes) can evolve to a state with large overlap with a proton-electron product state.

In QFT the only sense you can give to one particle "being made of" a collection of others is that at asymptotic times it has large overlap with the multiparticle state of such a collection. However for many particles it doesn't overlap asymptotically with a single unique multiparticle state, so you have freedom in what you choose to say something is made of.
 
  • #78
DarMM said:
This isn't really something confined to @A. Neumaier 's thermal interpretation. In interacting QFTs particles only exist asymptotically in scattering processes. In the Standard Model, hydrogen is a state which (under scattering processes) can evolve to a state with large overlap with a proton-electron product state.

In QFT the only sense you can give to one particle "being made of" a collection of others is that at asymptotic times it has large overlap with the multiparticle state of such a collection. However for many particles it doesn't overlap asymptotically with a single unique multiparticle state, so you have freedom in what you choose to say something is made of.

But in nonrelativistic QM we do have the concept of a single electron. In the thermal interpretation (for NRQM) the claim is that there are no particles; that is puzzling.
 
  • #79
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
A manifestation of the electron field with a computable charge distribution, covering more or less the classical size of the atom.
ftr said:
But in nonrelativistic QM we do have the concept of a single electron. In the thermal interpretation (for NRQM) the claim is that there are no particles; that is puzzling.
The concept of a single electron is a convenient approximation of the more fundamental concept of the electron field from QED.

The nonexistence of single electrons inside a nonrelativistic multi-electron system can also be seen from the fact that on the Hilbert space of a multi-electron system (the space of antisymmetrized wave functions) there are no position operators for single electrons, while there are distribution-valued operators for the charge density at any space-time point.

Only in certain approximations, one can talk in some sense about single electrons. For example, in the Hartree-Fock approximation of an atom, one can talk about the outermost electron, namely the one whose energy is largest. This is possible because in this approximation, the total energy of an ##N##-electron system can be naturally decomposed into a sum of ##N## energies for single electrons.

In general, secondary concepts in physics are emergent, approximate concepts arising from an appropriate approximate version of a more fundamental concept. Just as an atom has no temperature, but a macroscopic body has one.
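As a concrete instance of such a computable charge distribution (my illustration; hydrogen's ground state is just the simplest example):

```python
import numpy as np

# For the hydrogen ground state the electron field's charge density is
# rho(r) = -e |psi_1s(r)|^2 with psi_1s = exp(-r/a0)/sqrt(pi a0^3).
# In atomic units (e = a0 = 1):
r, dr = np.linspace(1e-6, 20.0, 200000, retstep=True)
density = np.exp(-2.0*r)/np.pi            # |psi_1s|^2, spread over ~1 Bohr radius
# Integrating over all space recovers exactly one electron's worth of charge.
print((density*4.0*np.pi*r**2).sum()*dr)  # ~1.0
```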
 
  • #80
A. Neumaier said:
A manifestation of the electron field with a computable charge distribution, covering more or less the classical size of the atom.

In effect you are saying that the electron has a size. What is inside it? What is the charge distribution?
 
  • #81
ftr said:
In effect you are saying that the electron has a size. What is inside it? What is the charge distribution?
No. The electron field has a charge density concentrated in a small region: of atom size if bound, of beam shape if fast moving.
 
  • #82
A. Neumaier said:
No. The electron field has a charge density concentrated in a small region: of atom size if bound, of beam shape if fast moving.

I am sorry, I did not get what you meant. I ask again: what is "charge density"? What gives rise to it? Moreover, the electron "cloud" surrounds the proton, so the electron "field" does not seem to be contiguous. Is it like a glass of water and the proton an ice cube?
 
  • #83
How does the Ehrenfest-Tolman effect affect this?
 
  • #84
A. Neumaier said:
It means that there are additional correlation degrees of freedom: if you take your observables to be fields, you get pair correlations of the fluctuations. Locally, via a Wigner transformation, this gives kinetic contributions; but if ##A## and ##B## refer to causally disjoint regions, say, you get nonlocal correlations - the beables needed to violate the assumptions of Bell's theorem.
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:
  1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. This makes these objects highly "rich" in terms of properties, for ##\langle\phi(t_1)\phi(t_2)\rangle## is not merely a statistic for the field value but actually a property itself, and so on for higher correlators.
  2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this), a 2-photon system is not simply "two photons" since it has non-local correlator properties neither of them possesses alone.
  3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is considered two spacelike separated photons normally is in fact a highly extended object.
  4. Stochastic features of QM are generated by the system interacting with the environment. Under certain assumptions (Markov, infinite limit) we can show the environment causes a transition from a system pure state to a probability distribution of system pure states, what is normally called "collapse". Standard Born-Markov stuff: the environment is essentially a reservoir in thermal equilibrium; under the Markov assumption it "forgets" information about the system, so information purely dissipates into the environment without transfer back to the system. The system is stochastically driven into a "collapsed" state. I'm not sure if this also requires the secular approximation (i.e. the system's isolated evolution ##H_S## is on a much shorter time scale than the environmental influence ##H_{ES}##), but no matter (a minimal dephasing sketch follows at the end of this post).
Thus we may characterize quantum mechanics as the physics of property-rich non-reductive highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

As we remove these features, i.e. less environmentally sensitive, more reductive and less property rich (so that certain properties become purely functions of others and properties of the whole are purely those of the parts) and more locally concentrated, we approach Classical Physics.
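Here is the minimal dephasing sketch referenced in point 4 above (my addition; the pure-dephasing Lindblad equation with an assumed rate ##\gamma## is a standard stand-in for the full Born-Markov machinery, not taken from the papers):

```python
import numpy as np

# Pure-dephasing Lindblad equation drho/dt = gamma (sz rho sz - rho) for a
# qubit. The off-diagonal terms decay as exp(-2 gamma t), driving an initial
# superposition into the "collapsed" diagonal mixture.
gamma, dt = 1.0, 1e-3
sz = np.diag([1.0, -1.0])
psi = np.array([1.0, 1.0])/np.sqrt(2)          # initial pure superposition
rho = np.outer(psi, psi)
for _ in range(5000):                          # evolve to t = 5/gamma
    rho = rho + dt*gamma*(sz @ rho @ sz - rho)
print(np.round(rho, 4))                        # ~diag(0.5, 0.5)
```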
 
  • #85
*now* said:
How does the Ehrenfest-Tolman effect affect this?
Please give a reference for discussion.
 
  • #86
ftr said:
I am sorry, I did not get what you meant. I ask again: what is "charge density"? What gives rise to it? Moreover, the electron "cloud" surrounds the proton, so the electron "field" does not seem to be contiguous. Is it like a glass of water and the proton an ice cube?
What is informally viewed as an electron cloud or drawn as orbitals are aspects of the electron field extending over some region around the nuclei. Similarly, the nuclei, often modeled as points or, in more detail, as fluids, are aspects of the nucleon field - or, on a more detailed level, of the quark field.
 
  • #87
DarMM said:
Thus we may characterize quantum mechanics as the physics of property-rich non-reductive highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
 
  • #88
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
Well, even in classical relativistic physics "point particles are strangers", as Sommerfeld put it. The troubles with the point-particle concept became apparent already at the very beginning of Lorentz's "Theory of Electrons". I'm not sure whether it's the first source, but already in 1916 the troubles with divergent self-energy became apparent in the context of the attempt to find closed equations for the motion of charged point particles ("electrons") and the electromagnetic fields. The trouble has been attacked by some of the greatest physicists, like Dirac and Schwinger, with no real success. Today, as far as we know, the best one can do is to approximate the famous Abraham-Lorentz-Dirac equation even further, boiling it down to the Landau-Lifshitz equation, as can be found in the famous Landau-Lifshitz textbook series (vol. 2, among the best textbooks on classical relativistic field theory ever written).

Even in the classical regime, the most natural way to describe the mechanics of charged particles is a continuum description like hydrodynamics or relativistic kinetic theory (aka the Boltzmann equation). One very practical application is the construction of modern particle accelerators like the FAIR accelerator here in Darmstadt, Germany, where the high-intensity particle bunches need a description taking into account not only the interaction between the particles ("space-charge effects") but also radiation losses; there a hydro simulation (i.e., a continuum description of the particles) leads to the conclusion that the discrete-particle picture requires the Landau-Lifshitz approximation to the Abraham-Lorentz-Dirac equation to describe the (accelerated) motion of the charged particles, including the radiation-reaction forces.

The most fundamental theory we have today about "elementary particles" is the Standard Model of elementary-particle physics, which is based on relativistic, local (microcausal) quantum field theory (QFT). Here the trouble persists but is quite a lot milder. The early failed attempts to formulate a relativistic quantum mechanics clearly show that relativity needs a many-body description, even if you start with only a few particles, as in the usual scattering experiments, where you consider reactions of two particles in the initial state. The reason is that at relativistic collision energies - i.e., where these energies come into the same order of magnitude as the masses (##\times c^2##, but I set ##c=\hbar=1##) of the lightest particles allowed to be created in the reaction, where "allowed" means not violating any of the empirically known conservation laws like energy, momentum, angular momentum, and several conserved charges - there's always some probability to create new particles and/or destroy the initial colliding particles.

In QFT the fundamental concept is fields, as the name suggests. Field quantization was known from the very beginning of the development of modern quantum theory. Immediately after Heisenberg's ingenious insight during his hay-fever-enforced stay on Helgoland in the summer of 1925, his vague ideas were amazingly quickly worked out by Born and Jordan, and also by Heisenberg himself, into a formalism today known as "matrix mechanics", and already in one of these very early papers (the famous "Dreimännerarbeit" of Born, Jordan, and Heisenberg) everything was "quantized", i.e., not only the particles (electrons) but also the electromagnetic field. Ironically, at the time many physicists thought that also quantizing the em. field was "too much of a revolution", and it was considered unnecessary for a short while. The reason is simple: it is not so easy to see the necessity for field quantization at the lower energies available in atomic physics at that time. Although it was well known that for some phenomena a "particle picture for radiation", as proposed in Einstein's famous paper of 1905 on what we nowadays call "light quanta", can more easily explain several phenomena (like the photoelectric effect and Compton scattering) than the classical-field picture, to understand atomic physics a treatment sufficed for almost everything in which only the electrons were quantized, the interaction was described by electrostatics, and the radiation by classical electromagnetic fields. What, however, was known at the time was the necessity for "spontaneous emission": even if there's no radiation field present which could lead to induced emission, there must be some probability for an excited atomic state (i.e., an energy eigenstate of the electrons around a nucleus) to emit a photon. This is the only phenomenon of the time which cannot be described by the semiclassical theory, where only the electrons are quantized but not the electromagnetic field. Everything else, including the photoelectric effect and Compton scattering as well as first applications to condensed-matter phenomena like the theory of dispersion of em. waves in matter, can be successfully described in the semiclassical approximation.

The idea of field quantization was rediscovered by Dirac in 1927, when he formulated the theory of emission and absorption of electromagnetic radiation in terms of annihilation and creation operators for photons, leading to the correct postdiction of spontaneous emission, which was needed to explain Planck's black-body radiation formula - the formula which started the entire quantum business in 1900. It was well known from Einstein's (also very famous) paper of 1917 on the quantum-kinetic derivation of the Planck spectrum within "old quantum mechanics" that spontaneous emission had to be postulated in addition to induced emission and absorption to get the correct Planck formula from kinetic considerations, but before Dirac there was no clear formalism for it.

Shortly thereafter, among others, Heisenberg and Pauli formulated quantum electrodynamics, and the use of perturbation theory led to quite some success as long as one used only the lowest-order approximations (what we nowadays call the tree-level approximations, using the pictorial notation in terms of Feynman diagrams). But going to higher orders was plagued by the old demon of divergences known from the classical theory of radiation reactions, i.e., the interaction of charged particles with their own radiation fields, leading to the same self-energy divergences known already from classical theory. The divergences were, however, less severe than in classical theory, and the solution of the problem within perturbation theory was found in 1948, when Tomonaga, Schwinger, and Feynman developed their renormalization theory, largely triggered by the fact that the "radiative corrections", i.e., the higher-order corrections leading to divergences in naive perturbation theory, became measurable (particularly Lamb's discovery of a little shift in the fine structure of the hydrogen-atom spectrum, now named the "Lamb shift" after him). The final solution of the problem within perturbative QFT came in 1971, when 't Hooft and Veltman proved the perturbative renormalizability of Abelian as well as non-Abelian gauge theories to any order of perturbation theory - a result which proved crucial for the Standard Model.

The upshot of this long story is that the particle picture of subatomic phenomena is quite restricted. One cannot make true sense of the particle picture except for asymptotically free states: only when the quantum fields can be seen as essentially non-interacting does a particle interpretation of quantum fields in terms of Fock states (eigenstates of the particle-number operators) become sensible.

Particularly for photons, a classical-particle picture, as envisaged by Einstein in his famous 1905 paper on "light quanta" (carefully titled as presenting "a heuristic point of view"), is highly misleading. There's not even a formal way to define a position operator in the narrow sense for massless quanta (as I prefer to say instead of "particles"). All we can calculate is the probability for a photon to hit a detector at the place where this detector is located.
 
  • #89
AlexCaledin said:
- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
I wouldn't say so. The thermal reservoir, the environment, is responsible for the stochastic nature of subsystems when you don't track the environment. However, it doesn't guide them like the Bohmian potential; it's not an external object of a different class/type from the particles, it's just another system. Also it's not universal, i.e. the environment is just whatever external source of noise is relevant for the current system, e.g. air in the lab, or thermal fluctuations of the atomic structure of the measuring device.
 
  • #90
DarMM said:
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:
  1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. This makes these objects highly "rich" in terms of properties, for ##\langle\phi(t_1)\phi(t_2)\rangle## is not merely a statistic for the field value but actually a property itself, and so on for higher correlators.
  2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this), a 2-photon system is not simply "two photons" since it has non-local correlator properties neither of them possesses alone.
  3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is considered two spacelike separated photons normally is in fact a highly extended object.
  4. Stochastic features of QM are generated by the system interacting with the environment. Under certain assumptions (Markov, infinite limit) we can show the environment causes a transition from a system pure state to a probability distribution of system pure states, what is normally called "collapse". Standard Born-Markov stuff: the environment is essentially a reservoir in thermal equilibrium; under the Markov assumption it "forgets" information about the system, so information purely dissipates into the environment without transfer back to the system. The system is stochastically driven into a "collapsed" state. I'm not sure if this also requires the secular approximation (i.e. the system's isolated evolution ##H_S## is on a much shorter time scale than the environmental influence ##H_{ES}##), but no matter.
Thus we may characterize quantum mechanics as the physics of property-rich non-reductive highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

As we remove these features, i.e. less environmentally sensitive, more reductive and less property rich (so that certain properties become purely functions of others and properties of the whole are purely those of the parts) and more locally concentrated, we approach Classical Physics.
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.

It's indeed clear that in the above description of the two-photon Bell experiments the photons are not localizable in a classical sense; rather, the localization is through the localization of the detectors' "click events", which are clear and well-defined macroscopic manifestations (plus the fundamental assumptions of locality and microcausality, leading to the validity of the linked-cluster theorem for the QFT S-matrix).

Of course the q-expectation values have somehow to be heuristically introduced too, to make sense to a physicist, and I still don't see how this heuristics can be given without recourse to the standard probabilistic interpretation of the "state" (i.e., the statistical operator of the orthodox minimal interpretation); but as an axiomatized final formalism it makes perfect sense.
 
