The thermal interpretation of quantum physics

In summary: I like your summary, but I disagree with the philosophical position you take.
In summary: I think Dr Neumaier has a good point - QFT may indeed be a better place for interpretations. I do not know enough of his thermal interpretation to comment on its specifics.
  • #281
The Thermal Interpretation doesn't identify ##J^{\mu}(x)## with the observed quantity; it says that ##J^{\mu}(x)## is an actual property of the system.

The whole notion of statistics in the TI arises because, as mentioned in my summary, the observed quantities don't accurately reflect this underlying ontic quantity.
 
  • #282
vanhees71 said:
how does your thermal interpretation explain this situation
OK. Let us reconsider your example for the measurement of a current with a galvanometer. From a quantum field theoretical point of view, the electric current consists of the motion of the electron field in a wire at room temperature. The thermal interpretation says that at any description level you have an electron field, and the theoretically exact current density is described by the distribution-valued beable (q-expectation)
$$J^{\mu}(x) = \mathrm{Tr} (\hat{\rho}\, \hat{j}^{\mu}(x))$$
determined by the current operator
$$\hat{j}^{\mu}(x)=-e :\bar{\psi}(x) \gamma^{\mu} \psi(x):,$$
where the colons denote normal ordering and ##\hat{\rho}=e^{-S/k_B}## is the density operator describing the exact state of the galvanometer plus wire. (I usually don't write hats over the operators, but kept them when copying your formulas.)

At this level of description there is no approximation at all; the latter is introduced only when one replaces the exact ##S## by a numerically tractable approximation. At or close to thermal equilibrium, it is well-established empirical knowledge that we have ##S \approx (H+PV-\mu N)/T## (equality defines exact equilibrium). We can substitute this (or a more accurate nonequilibrium) approximation into the defining formula for ##J(x)## to compute a numerical approximation.
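Spelled out (a standard textbook identity, added here for the reader and not part of the original post): substituting the equilibrium approximation into ##\hat{\rho}=e^{-S/k_B}## recovers the familiar grand canonical density operator,
$$\hat{\rho} \approx e^{-(H+PV-\mu N)/k_B T} = \frac{e^{-(H-\mu N)/k_B T}}{Z}, \qquad Z = \mathrm{Tr}\, e^{-(H-\mu N)/k_B T},$$
since in equilibrium ##PV = k_B T \ln Z## (the grand potential is ##-PV##).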

Measurable by the galvanometer is a smeared version
$$J^{\mu}_m(x) =\int dz\ h(z) J^{\mu}(x+z)$$
of the exact current density, where ##h(z)## is a function that is negligible for ##z## larger than the size of the current-sensitive part of the galvanometer. The precise ##h## can be found by calibration (as discussed earlier). You didn't explain what your ##\vec f## refers to, but taking your formula as being correct with an appropriate ##\vec f##, the galvanometer provides (ignoring the reading uncertainty) the current
$$I=\int \mathrm{d} \vec{f} \cdot \vec{J_m}$$
right away as a pointer reading.

All this is without the standard statistical interpretation of the formalism. Nowhere did I make use of any statistical argument; I treated the trace simply as a calculational device, just as you do when defining field correlations.

The only averaging is the smearing needed for mathematical reasons to turn the distribution-valued current into an observable vector, and for physical reasons since the galvanometer is insensitive to very high spatial or temporal frequencies. Note that this smearing has nothing to do with coarse-graining or averaging over microscopic degrees of freedom: it is also needed in (already coarse-grained) classical field theories. For example, in hydromechanics, the Navier-Stokes equations generally have only weak (distributional) solutions that make numerical sense only after smearing. Thus, in the thermal interpretation, the quantum situation is no different from the classical one (except that there are many more beables than in the classical case).
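As a minimal numerical illustration of this smearing (added here; the toy field, the test function, and the grid are invented for the example and are not from the post): a rapidly fluctuating one-dimensional "current" is convolved with a smooth test function ##h##, leaving only the slowly varying part that a finite-resolution instrument responds to.

[CODE=python]
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

# toy "exact" current density: smooth part plus high-frequency fluctuations
J = np.sin(2 * np.pi * x) + 0.5 * np.sin(2 * np.pi * 400 * x)

# normalized Gaussian test function h(z); its width plays the role of the
# size of the current-sensitive part of the instrument
width = 0.02
z = np.arange(-5 * width, 5 * width + dx, dx)
h = np.exp(-0.5 * (z / width) ** 2)
h /= h.sum() * dx

# smeared field J_m(x) = \int dz h(z) J(x+z), as a discrete convolution
J_m = np.convolve(J, h[::-1], mode='same') * dx

# away from the grid edges, J_m is essentially the slow component alone
interior = slice(250, -250)
print(np.max(np.abs((J_m - np.sin(2 * np.pi * x))[interior])))  # ~0.01
[/CODE]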

vanhees71 said:
Hm, well, that's just the usual sloppy language of some textbooks on QM to identify the expectation values with the actually observed quantities. I think there's much more behind Arnold's interpretation, which however I still miss to understand :-(.
I can see nothing sloppy in my arguments.
 
Last edited:
  • Like
Likes Mentz114, DarMM and dextercioby
  • #283
My problem is that no physical motivation is given for all the formal mathematical manipulations you do. It is of course all right formally, but to understand why you are doing all these manipulations one needs an interpretation, which you still don't provide: you simply say that the usual statistical interpretation is not behind it, but you don't say what is behind these manipulations instead. I don't think we will come to a conclusion about these different pictures of what an interpretation should in principle provide by repeating the same arguments over and over again.
 
  • #284
stevendaryl said:
I consider orthodox quantum mechanics to be incomplete (or maybe inconsistent) [...]
Alice and Bob prepare an electron so that it is a superposition of spin-up and spin-down. Alice measures the spin. According to orthodox quantum mechanics, she either gets spin-up with such-and-such probability, or she gets spin-down with such-and-such probability.
But Bob, considering the composite system of Alice + electron [...]
Orthodox quantum mechanics applies without problems when only a single quantum system is considered, as the Copenhagen interpretation requires.

Inconsistencies appear (only) when one considers simultaneously a quantum system and a subsystem of it (as in your example). The standard foundations (which work with pure states only and treat mixed states as states of incomplete knowledge, with classical uncertainty about the true pure state) simply do not cater appropriately for this situation. The reason is discussed in detail in Subsection 3.5 of Part I of my series of papers. In short, the standard foundations assert

(S1) The pure state of a system (at a given time) encodes everything that can be said (or ''can be known'') about the system at this time, including the possible predictions at later times, and nothing else.

On the other hand, common sense tells us

(S2) Every property of a subsystem is also a property of the whole system.

Now (S1) and (S2) imply

(S3) The pure state of a system determines the pure state of all its subsystems.

But (S3) is denied by the standard foundations, since knowing the pure state of an entangled system does not tell us anything at all about the pure states of its constituents. Nothing in the literature tells us how to determine the pure state of a subsystem from the pure state of the whole system (except in the idealized, unentangled situation). Worse: not a single property of a subsystem alone can be deduced from the pure state of the full system if the latter is completely entangled (which is the rule for multiparticle systems)!

The conclusion is that in the standard foundations, we have in place of (S2)

(S??) No property of a subsystem is also a property of the whole system.

This absurd conclusion causes the well-known foundational problems.

Not only that - taken at face value, (S??) would mean that the quantum state of a physics lab has nothing to do with the quantum states of the equipment in it, and of the particles probed there!

This is very strange for a science such as physics, which studies large systems primarily by decomposing them into their simple constituents and infers properties of the former from collective properties of the latter.

The thermal interpretation avoids all these problems by taking the state to be an arbitrary monotone linear functional defining q-expectations (thus allowing for mixed states), and the (functions of) q-expectations of operators as the properties of a system. The properties of a subsystem are then simply the (functions of) q-expectations of operators that act on the subsystem alone. Thus the properties (S1)-(S3) - with the word ''pure'' dropped - are trivially valid!
 
Last edited:
  • Like
Likes Mentz114 and dextercioby
  • #285
A. Neumaier said:
Orthodox quantum mechanics applies without problems when only a single system is considered, as the Copenhagen interpretation requires.

I would say, on the contrary, that the Copenhagen interpretation always involves at least two systems: the system being measured and the measuring device.

The thermal interpretation avoids all these problems by taking the state to be an arbitrary monotone linear functional (allowing for mixed states), and the (functions of) q-expectations of operators as the properties of a system. The properties of a subsystem are then simply the (functions of) q-expectations of operators that act on the subsystem alone. Thus the properties (S1)-(S3) - with the word ''pure'' dropped - are trivially valid!

Yes, I appreciate that. But I don't completely understand it. If you have, for example, an entangled pair of spin-1/2 particles, ##\sqrt{\frac{1}{2}} (|ud\rangle - |du\rangle)##, and someone measures the spin of the first particle, then the result would seem to be something that is not computable from expectations and correlations.
 
  • #286
vanhees71 said:
My problem is that no physical motivation is given for all the formal mathematical manipulations you do. It is of course all right formally, but to understand why you are doing all these manipulations one needs an interpretation, which you still don't provide: you simply say that the usual statistical interpretation is not behind it, but you don't say what is behind these manipulations instead.
You have the same problem when introducing in your research work Green's functions as purely calculational tools, deferring the interpretation in terms of experiments to results derived from the formal manipulations.

vanhees71 said:
I don't think we will come to a conclusion about these different pictures of what an interpretation should in principle provide by repeating the same arguments over and over again.
I demand from an interpretation only that it gives a clear interface between the theoretical predictions and the experimental record, and in this respect the thermal interpretation serves very well.

People were always doing those formal manipulations where history has shown that it works, while the intuition associated with the formal manipulations changed. Maxwell derived his equations from intuition about a mechanical ether; we think of this today as crutches that are no longer needed, and in fact are detrimental for a clear understanding.

Thus whether or not someone accompanies the formal manipulations with intuitive blabla - about microscopic degrees of freedom, or virtual particles popping in and out of existence, or Bohmian ghost variables, or many worlds, or whatever - is a matter of personal taste. I prefer to cut all this out using Ockham's razor, and leave it to the subjective sphere of individual physicists.
 
Last edited:
  • Like
Likes Mentz114 and dextercioby
  • #287
Orthodox quantum mechanics (which for me is the formalism + the minimal statistical interpretation, i.e., those flavors of Copenhagen that do not invoke some esoteric "quantum-classical cut" nor an even more esoteric "collapse of the state") works perfectly well, because orthodox quantum mechanics does not claim that there are only pure states.

Of course, you have maximal possible knowledge about a system only if you have prepared it in a pure state. If you now prepare a composite system, described in a product space ##\mathcal{H}_1 \otimes \mathcal{H}_2## with the Hilbert spaces referring to the parts, in a pure state ##\hat{\rho}=|\Psi \rangle \langle \Psi |##, you have maximal possible knowledge about this composite system.

If you choose to ignore one of the parts, you by definition throw away information. Then, in general, the subsystem you choose to consider is not prepared in a pure state but in the state given by the partial trace; i.e., if you only take notice of part 1 of the system, you describe it, according to the basic rules of probability theory, by the mixed state given by the partial trace,
$$\hat{\rho}_1=\mathrm{Tr}_2 \hat{\rho}.$$
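For concreteness, here is a minimal numerical sketch of this partial trace (an illustration added here, not part of the original post), using the singlet state ##\sqrt{\frac{1}{2}} (|ud\rangle - |du\rangle)## mentioned earlier: the composite state is pure, but the reduced state of either part is maximally mixed.

[CODE=python]
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# composite pure state |Psi> = (|ud> - |du>)/sqrt(2)
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho = np.outer(psi, psi)                      # rho = |Psi><Psi|

# partial trace over part 2: rho_1[i,j] = sum_k rho[(i,k),(j,k)]
rho_1 = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

print(rho_1)                    # [[0.5, 0.], [0., 0.5]]: maximally mixed
print(np.trace(rho_1 @ rho_1))  # purity 0.5 < 1, so rho_1 is not pure
[/CODE]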
 
  • #288
@A. Neumaier what do you consider the open problems of the Thermal Interpretation?

Establishing that devices in certain canonical scenarios have the metastability properties and disconnected slow-mode manifolds required? And also that these in fact lead to discrete outcomes whose long-run frequencies have an average equal to the true value ##\langle A\rangle##, i.e., that in certain states the device will, for example, fall into one slow mode more frequently?
 
  • #289
A. Neumaier said:
You have the same problem when introducing in your research work Green's functions as purely calculational tools, deferring the interpretation in terms of experiments to results derived from the formal manipulations.

I demand from an interpretation only that it gives a clear interface between the theoretical predictions and the experimental record, and in this respect the thermal interpretation serves very well.

People were always doing those formal manipulations where history has shown that it works, while the intuition associated with the formal manipulations changed. Maxwell derived his equations from intuition about a mechanical ether; we think of this today as crutches that are no longer needed, and in fact are detrimental for a clear understanding.

Thus whether or not someone accompanies the formal manipulations with intuitive blabla - about microscopic degrees of freedom, or virtual particles popping in and out of existence, or Bohmian ghost variables, or whatever - is a matter of personal taste. I prefer to cut all this out using Ockham's razor, and leave it to the subjective sphere of individual physicists.
I don't have a problem defining Green's functions. Why should I have a problem? They are calculational tools, as are, e.g., the potentials in electromagnetism, which are also not observable. There are plenty of calculational objects in theoretical physics which do not directly refer to observables. Why should I have problems introducing them to help calculate predictions for observables?

It's also clear that you can get correct theories from wrong heuristics, as your example of the historical derivation of Maxwell's equations from mechanistic aether models shows. Of course, here we also have a perfect example of how physicists are forced to change their heuristics and their interpretation with progress in science: Maxwell himself finally abandoned his own mechanistic aether interpretation. As is well known, the aether was completely abandoned only much later, after Einstein's discovery of special relativity, which also led to a change of interpretation in terms of relativistic space-time descriptions.

The only thing I still find lacking with your "thermal interpretation" is that it doesn't provide an interpretation at all but just tries to sell the formalism as the interpretation. It precisely does not provide what you yourself defined as the purpose of any interpretation, namely the link between the formalism and the observable facts in the lab!
 
  • #290
stevendaryl said:
the Copenhagen interpretation always involves at least two systems: the system being measured and the measuring device.
But the measurement device is treated classically, not in terms of a quantum state.
stevendaryl said:
Yes, I appreciate that. But I don't completely understand it. If you have, for example, an entangled pair of spin-1/2 particles, ##\sqrt{\frac{1}{2}} (|ud\rangle - |du\rangle)##, and someone measures the spin of the first particle, then the result would seem to be something that is not computable from expectations and correlations.
The measurement result is primarily a property of the detector, and is computable - up to measurement errors - from the state of the detector (e.g., from the position ##\langle q\rangle## of the center of mass ##q## of the pointer).

As an approximate measurement of the first particle, it is a poor approximation (binary, with large uncertainty) of the z-component of the spin of the first particle, which - in the thermal interpretation - is the real number ##\langle J_3\rangle=\tfrac{1}{2}\langle\sigma_3\rangle##.

To reduce the uncertainty one can measure the spin of a whole ensemble of ##N## particles and compute the ensemble average, thereby reducing the uncertainty by a factor of ##\sqrt{N}##. For large ##N## this provides a statistical interpretation of the q-expectation.
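To put a number on this scaling (a standard error-of-the-mean estimate, added here and not part of the original post): if a single reading has uncertainty ##\sigma=\sqrt{\langle J_3^2\rangle-\langle J_3\rangle^2}\le\tfrac{1}{2}##, then the mean of ##N## independent readings has uncertainty
$$\sigma_N=\frac{\sigma}{\sqrt N}\le\frac{1}{2\sqrt N},$$
so, e.g., ##N=10^4## readings pin down the q-expectation ##\langle J_3\rangle## to about ##0.005##.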
 
Last edited:
  • Like
Likes dextercioby
  • #291
A. Neumaier said:
But the measurement device is treated classically, not in terms of a quantum state.

Okay. But that has to be an approximation, because the measurement device is made out of electrons and protons and photons, just like the system being measured.

The measurement result is primarily a property of the detector, and is computable - up to measurement errors - from the state of the detector (e.g., from the position ##\langle q\rangle## of the center of mass ##q## of the pointer).

I don't see how that can be true. In an EPR twin-pair experiment, if Alice measures spin-up for her particle along the z-axis, then Bob will measure spin-down along that axis. The details of Bob's measuring device seem completely irrelevant (other than the fact that it actually measures the z-component of the spin).
 
  • #292
It's of course not true. A measurement device, by definition and construction (that's half of the very art of the experimentalist, the other half being the art of evaluating the measurement results properly, including an estimate of the statistical (sic!) and systematic errors), provides, after interacting with the system to be measured, some information about the measured observable of the system.

The simplest example is the Stern-Gerlach experiment. The measurement device is the magnet, leading to an (approximate!) entanglement of the position of the particle (pointer variable) with the spin component in the direction of the (homogeneous part of the) magnetic field. This means that to measure that spin component you just have to collect the particles (in the original Frankfurt experiment, Ag atoms) on a screen and look at the position. As you see, here the pointer variable is in the very particle itself, namely its position. There's no need for any classical approximation whatsoever (except for the final step of "collecting" the particle on the screen).
 
  • #293
DarMM said:
what do you consider the open problems of the Thermal Interpretation?

Establishing that devices in certain canonical scenarios have the metastability properties and disconnected slow-mode manifolds required? And also that these in fact lead to discrete outcomes whose long-run frequencies have an average equal to the true value ##\langle A\rangle##, i.e., that in certain states the device will, for example, fall into one slow mode more frequently?
Yes, these kinds of things, which are argued in my papers only qualitatively. I listed some open problems at the end of the Conclusions of Part III. Also, the AB&N work is too complex to be studied in depth by many, and finding simpler scenarios that can be analyzed in a few pages (so that they are suitable for courses in quantum mechanics) would be worthwhile.
 
  • #294
vanhees71 said:
It's of course not true. A measurement device, by definition and construction (that's half of the very art of the experimentalist, the other half being the art of evaluating the measurement results properly, including an estimate of the statistical (sic!) and systematic errors), provides, after interacting with the system to be measured, some information about the measured observable of the system.

The simplest example is the Stern-Gerlach experiment. The measurement device is the magnet, leading to an (approximate!) entanglement of the position of the particle (pointer variable) with the spin component in the direction of the (homogeneous part of the) magnetic field. This means that to measure that spin component you just have to collect the particles (in the original Frankfurt experiment, Ag atoms) on a screen and look at the position. As you see, here the pointer variable is in the very particle itself, namely its position. There's no need for any classical approximation whatsoever (except for the final step of "collecting" the particle on the screen).

I would say that there SHOULD be no need for any classical approximation, if quantum mechanics were consistent and complete.
 
  • #295
It's of course a challenge, but I don't think there are any problems of principle in understanding macroscopic measurement devices (in this example, the screen used to collect the Ag atoms) quantum theoretically, since measurement devices are just macroscopic physical systems like anything around us, and these are quite well understood in terms of quantum many-body theory. I'd say condensed-matter physics is one of the most successful branches of physics, and much of its research deals precisely with the quantum-theoretical understanding of the classical properties of macroscopic systems; measurement devices are just such macroscopic systems, manipulated by experimentalists and engineers to enable their use as measurement devices.

That's why I was so inclined towards the "thermal interpretation" at first: I thought it just takes this usual point of view of the macroscopic behavior being described by quantum many-body theory in terms of appropriate coarse-graining procedures, which is a generically statistical-physics argument. Then I learned in this discussion that I completely misunderstood the three papers, because this is precisely how I should NOT understand the interpretation; but now I'm lost, since I've no clue what the interpretation is now meant to be.
 
  • #296
stevendaryl said:
Okay. But that has to be an approximation, because the measurement device is made out of electrons and protons and photons, just like the system being measured.
Yes, but Copenhagen says nothing about the nature of this approximation. It takes definite classical measurement values as an irreducible fact. The measurement problem begins precisely when one wants to derive this fact rather than postulate it.

stevendaryl said:
I don't see how that can be true. In an EPR twin-pair experiment, if Alice measures spin-up for her particle along the z-axis, then Bob will measure spin-down along that axis. The details of Bob's measuring device seem completely irrelevant (other than the fact that it actually measures the z-component of the spin).
Well, you surely agree that the pointer position can be calculated (in principle) from the state of the measurement device at the time the reading is done. This has nothing at all to do with what is measured; it is only the fact that the pointer is somewhere where it can be read.

What you and I don't see is how precisely Bob's detector knows which pointer reading to display to conform to the correlated quantum statistics. Somehow, Nature does it, and since we haven't found any deviation from quantum physics, it is presumably a consequence of the Schrödinger dynamics of the state of the universe. In Subsections 4.4 and 4.5 of Part II, I discussed the sense in which this can be partially understood: the extendedness of quantum systems (entangled long-distance states are very fragile extended systems) together with conditional knowledge make things reasonably intelligible to me. In any case, it is in full agreement with the requirements of relativity theory.
 
Last edited:
  • #297
stevendaryl said:
I would say that there SHOULD be no need for any classical approximation, if quantum mechanics were consistent and complete.

The basic postulate of the minimal interpretation of quantum mechanics is: If you measure an observable, then you will get an eigenvalue of the corresponding operator. But if measurement is definable in terms of more basic concepts, then that rule should be expressible in terms of those more basic concepts.

So for example, an observable is measured when it interacts with a macroscopic system (the measuring device) so that the value of that observable can be "read off" from the state of the macroscopic system. In other words, the microscopic variable is "amplified" to make it have a macroscopic effect.

So the rule about measurement producing an eigenvalue boils down to a claim that a macroscopic measurement device is always in one of a number of macroscopically distinguishable states.

So the minimal interpretation of QM, it seems to me, is equivalent to the following:

Any system, no matter how complex, evolves according to unitary evolution with the exception that a macroscopic system is always in one of a number of macroscopically distinguishable states. If unitary evolution would put the system into a superposition of macroscopically distinguishable states, then the system will pick one, with probability given by the square of the amplitude corresponding to that possibility.

This way of stating the Born rule is, as far as I know, exactly equivalent in its empirical content to the minimal interpretation. But it differs in that it does not claim that measurements produce eigenvalues. That is a derivable consequence (from the definition of "measurement" plus the rules for unitary evolution). So I actually think that this formulation is better. But it explicitly makes a distinction between macroscopic and microscopic observables. This distinction is already implicit in the minimal interpretation, but is hidden by the lack of a clear definition of "measurement".
 
  • #298
vanhees71 said:
I don't have a problem to define Green's functions. Why should I have a problem? They are calculational tools, as are, e.g., the potentials in electromagnetism which are not observable as well. There are plenty of calculational objects in theoretical physics which do not directly refer to observables. Why should I have problems introducing them to help to calculate predictions for observables?
But why do you demand more for my calculational tools, the q-expectations? Justify them exactly in the same way as you justify your definition of Green's function, and you'll understand me.

vanhees71 said:
The only thing I still find lacking with your "thermal interpretation" is that it doesn't provide an interpretation at all but just tries to sell the formalism as the interpretation. It precisely does not provide what you yourself defined as the purpose of any interpretation, namely the link between the formalism and the observable facts in the lab!
I only claim that some of the q-expectations give predictions for actual observations, and how these are to be interpreted is fully specified in my posts #263, #266, and #278 (and elsewhere).
 
Last edited:
  • #299
A. Neumaier said:
Well, you surely agree that the pointer position can be calculated (in principle) from the state of the measurement device at the time the reading is done. This has nothing at all to do with what is measured; it is only the fact that the pointer is somewhere where it can be read.

Let's look at the specific case: You have an electron sent through a Stern-Gerlach device. Depending on the electron's spin, it either is deflected left, and makes a black dot on a photographic plate on the left, or is deflected right, and makes a black dot on a photographic plate on the right.

If the electron is deflected left, then it WILL make a dot on the left plate. So the details of the photographic plate just don't seem relevant, other than the fact that its atoms are in an unstable equilibrium so that a tiny electron can trigger a macroscopic state change.
 
  • #300
stevendaryl said:
Let's look at the specific case: You have an electron sent through a Stern-Gerlach device. Depending on the electron's spin, it either is deflected left, and makes a black dot on a photographic plate on the left, or is deflected right, and makes a black dot on a photographic plate on the right.

If the electron is deflected left, then it WILL make a dot on the left plate. So the details of the photographic plate just don't seem relevant, other than the fact that its atoms are in an unstable equilibrium so that a tiny electron can trigger a macroscopic state change.
I completely agree. But this doesn't invalidate what I said:

That there is a dot on the left, say, is a property of the plate that is computable from the state of the plate at the time of looking at it to record the event. For the state of the plate with a dot on the right is quite different from the state of the plate with a dot on the left or that without a dot. Hence the state differentiates between these possibilities, and hence allows one to determine which possibility happened.

The only slightly mysterious thing is why Alice can predict Bob's measurement. Here I don't have a full explanation, but only arguments that show that nothing goes wrong with relativistic causality.

Note also that the prediction comes out correct only when the entangled state is undisturbed and the detector is not switched off at the time the photon hits - things that Alice cannot check until having compared notes with Bob. Thus her prediction is not a certainty but a conditional prediction only.
 
  • Like
Likes Jimster41
  • #301
vanhees71 said:
here the pointer variable is in the very particle itself, namely its position.
No. It is the center of mass of the positions of a huge number of silver atoms.
vanhees71 said:
It's of course a challenge, but I don't think that there are any principle problems in understanding macroscopic measurement devices (as in this example the screen used to collect the Ag atoms) quantum theoretical since measurement devices are just macroscopic physical systems as anything around us, and these are quite well understood in terms of quantum many-body theory.
There is a problem of principle, namely precisely the measurement problem.

You sweep this problem under the carpet by treating the detectors in a roundabout way, without restricting your interpretation to what is recorded in your postulates.
 
Last edited:
  • #302
A. Neumaier said:
The only slightly mysterious thing is why Alice can predict Bob's measurement. Here I don't have a full explanation, but only arguments that show that nothing goes wrong with relativistic causality.

I would say that that is the central mystery of the EPR experiment.
 
  • #303
stevendaryl said:
I would say that that is the central mystery of the EPR experiment.
Yes, and the thermal interpretation does not solve it, though it makes it (at least for me) less of a mystery.

But the thermal interpretation removes the contradictions that appear when the standard interpretations are applied to the problem of measurement itself. This has become possible by decoupling measurement issues from ontological issues.
 
  • Like
Likes Jimster41 and stevendaryl
  • #304
A. Neumaier said:
But why do you demand more for my calculational tools, the q-expectations? Justify them exactly in the same way as you justify your definition of Green's function, and you'll understand me.

I only claim that some of the q-expectations give predictions for actual observations, and how these are to be interpreted is fully specified in my posts #263, #266, and #278 (and elsewhere).
I'm obviously still not able to make my point. Perhaps I misunderstand the entire motivation of what you are doing with this project, but you call it "Thermal Interpretation". This I understand as another attempt to provide a different interpretation of the formalism. I've no problem with your math at all (although I don't claim that I understand all the subtle details, since I'm a physicist, not a mathematician). I still do not see what the interpretation is, i.e., what the relation of your q-expectation values to experimental observables in the lab is, and that's what "interpretation" in this context means. Despite your claim, you don't specify precisely this relation. You simply forbid using the usual statistical interpretation but don't say what to use instead.

Maybe it's also difficult for me to understand your arguments in this direction, because every physicist from day 1 on is trained to use statistical methods to evaluate data. Even more, there is no other way to objectively treat empirical data than statistically to begin with. Of course, there's more to error analysis and error estimates, namely what's called systematic errors, but that's not the issue here, because that's anyway a very specific task for any individual experiment.

Interpretation, as the physics part of a mathematical theory or model, doesn't deal with this, but with the relation between the mathematical formalism and the observable values. As I said repeatedly, this doesn't forbid you to use "auxiliary quantities" like a set of n-point functions of various kinds in QFT or the gauge-dependent potentials of electromagnetic or non-Abelian gauge fields etc. These have nothing to do with interpretation anyway (it's also historically interesting that the Maxwellians struggled with interpretations using the potentials, but that was before the full meaning of gauge invariance was understood through the works of Noether, Weyl, Kaluza et al.). All you need to provide, if you want to give an interpretation, is the relation of the formalism to what are meant to be the observable quantities.

If I understand it right, you claim it's the q-expectation values, but this is a common misconception of some not too well thought-out introductory textbooks, leading to the widespread wrong interpretation of the Heisenberg uncertainty relation(s) as restrictions on the accuracy of measurements and the inevitable disturbance of the system by measurements. Heisenberg himself fell into this misconception, but was corrected by Bohr immediately. It's also obvious, because the uncertainty relation refers (in the standard statistical interpretation) to statistical properties of observables of a system, without even considering a specific measurement process at all.
 
  • #305
A. Neumaier said:
Yes, and the thermal interpretation does not solve it, though it makes it (at least for me) less of a mystery.

But the thermal interpretation removes the contradictions that appear when the standard interpretations are applied to the problem of measurement itself. This has become possible by decoupling measurement issues from ontological issues.
The minimal interpretation very simply solves it: it's about correlations, and these are imprinted by the preparation procedure (e.g., two polarization-entangled photons from the decay of a neutral pion, just to avoid the problem of collective behavior of a crystal in parametric downconversion, although this doesn't make any difference).

Of course, the minimal interpretation also resolves these problems by decoupling them from any "ontological issues", interpreting the quantum state as epistemic to begin with. That's not very surprising, since everything physics cares about is epistemic. Natural science is about what can be objectively observed and not about some metaphysical ontology!
 
  • #306
vanhees71 said:
I still do not see what the interpretation is, i.e., what the relation of your q-expectation values to experimental observables in the lab is, and that's what "interpretation" in this context means. Despite your claim, you don't specify precisely this relation
The q-expectation is what you're measuring. Let's make this much simpler. Take spin in the z-direction and let's say we have a state with (Units with ##\hbar = 1##):
$$\langle S_z \rangle = -\frac{1}{4}$$

The Thermal Interpretation is saying that this is not an expectation, but an actual value. It is a property of the state with value ##-\frac{1}{4}##, just like charge for example.

However, when we measure it, due to the bistability of the measuring device we only get the outcomes ##-\frac{1}{2}## and ##\frac{1}{2}##, not the true value of ##-\frac{1}{4}##.

However, the true value will show up in the fact that ##-\frac{1}{2}## will occur more often, and eventually, by gathering enough measurement samples, you can reconstruct the ##-\frac{1}{4}## value, as it will show up as the sample mean.
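To make this concrete, here is a minimal Monte Carlo sketch (added for illustration, not part of the original post). With ##\langle S_z \rangle = -\frac{1}{4}## and outcomes ##\pm\frac{1}{2}##, the Born weights are ##p(+\frac{1}{2})=\frac{1}{4}## and ##p(-\frac{1}{2})=\frac{3}{4}##, and the sample mean of the simulated readings converges to the q-expectation:

[CODE=python]
import numpy as np

# Born weights implied by <S_z> = -1/4 for outcomes +-1/2:
# p(+1/2)*(1/2) + p(-1/2)*(-1/2) = -1/4  =>  p(+1/2) = 1/4, p(-1/2) = 3/4
rng = np.random.default_rng(0)
outcomes = rng.choice([0.5, -0.5], size=100_000, p=[0.25, 0.75])

print(outcomes.mean())  # ~ -0.25: the sample mean reconstructs the q-expectation
[/CODE]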
 
  • #307
A. Neumaier said:
I completely agree. But this doesn't invalidate what I said:

That there is a dot on the left, say, is a property of the plate that is computable from the state of the plate at the time of looking at it to record the event. For the state of the plate with a dot on the right is quite different from the state of the plate with a dot on the left or that without a dot. Hence the state differentiates between these possibilities, and hence allows one to determine which possibility happened.

The only slightly mysterious thing is why Alice can predict Bob's measurement. Here I don't have a full explanation, but only arguments that show that nothing goes wrong with relativistic causality.

Note also that the prediction comes out correct only when the entangled state is undisturbed and the detector is not switched off at the time the photon hits - things that Alice cannot check until having compared notes with Bob. Thus her prediction is not a certainty but a conditional prediction only.
But it should be stressed that the plate just fixes the observable "position" of the measured electron. If you consider the SGE, as usual, as a measurement of the corresponding spin component, the point is the (nearly) 100% entanglement between position (pointer variable) and spin component (measured observable). There's nothing mysterious in this; it can be predicted using quantum dynamics alone. There's no mystery about the SG experiment and the involved measurement devices (a magnet with the appropriately tuned magnetic fields and a screen to collect the particles).
 
  • #308
DarMM said:
The q-expectation is what you're measuring. Let's make this much simpler. Take spin in the z-direction and let's say we have a state with (Units with ##\hbar = 1##):
$$\langle S_z \rangle = -\frac{1}{4}$$
The Thermal Interpretation is saying that this is not an expectation, but an actual value. It is a property of the state with value ##-\frac{1}{4}##, just like charge for example.

However, when we measure it, due to the bistability of the measuring device we only get the outcomes ##-\frac{1}{2}## and ##\frac{1}{2}##, not the true value of ##-\frac{1}{4}##.

However, the true value will show up in the fact that ##-\frac{1}{2}## will occur more often, and eventually, by gathering enough measurement samples, you can reconstruct the ##-\frac{1}{4}## value, as it will show up as the sample mean.
Well, this is a contradictio in adjecto. It's a very clear example of the misconception of taking the expectation values as the "true observables". What's measured are the values ##\pm 1/2##, not the expectation value, which may well be ##-1/4##.
 
  • #309
vanhees71 said:
Well, this is a contradictio in adjecto. The values of observables are what a (sufficiently precise) measurement device measures. It's a very clear example of the misconception of taking the expectation values as the "true observables". What's measured are the values ##\pm 1/2##, not the expectation value, which may well be ##-1/4##.
 
Last edited:
  • #310
Something strange happened with the formatting there!

vanhees71 said:
Well, this is a contradictio in adjecto. It's a very clear example of the misconception of taking the expectation values as the "true observables". What's measured are the values ##\pm 1/2##, not the expectation value, which may well be ##-1/4##.
The Thermal Interpretation, though, explains why ##\pm 1/2## are measured despite ##-1/4## being the true value.
 
  • Like
Likes A. Neumaier
  • #311
vanhees71 said:
every physicist from day 1 on is trained to use statistical methods to evaluate data. Even more, there is no other way to objectively treat empirical data than statistically to begin with.
This is the usual classical statistics, to improve the accuracy of raw measurements. This use of statistics is necessary and useful. Indeed, I even claim that all statistics ever needed in physics (quantum or not) is precisely of this kind.

But the use of statistics to justify theoretical calculations is not necessary, and the thermal interpretation removes the need for it in quantum mechanics, in the same way as modern physics removed the need for a mechanical interpretation of the Maxwell equations deemed necessary in Maxwell's time.
vanhees71 said:
I still do not see what the interpretation is, i.e., what the relation of your q-expectation values to experimental observables in the lab is, and that's what "interpretation" in this context means.
Let me repeat from post #298 that some of the q-expectations give predictions for actual observations, and how these are to be interpreted is fully specified in my posts #263, #266, and #278 (and elsewhere). This is the relation you always asked for and never accepted.
vanhees71 said:
As I said repeatedly, this doesn't forbid you to use "auxiliary quantities" like a set of n-point functions of various kinds in QFT or the gauge-dependent potentials of electromagnetic or non-Abelian gauge fields etc.
Then please take note that all q-expectations are auxiliary quantities of this kind, except for the ones for which I gave an experimental interpretation in my posts #263, #266, and #278. Please reply line by line to these posts, explaining why you think the statements there are not an answer to your request - for this is what I cannot understand!
vanhees71 said:
If I understand it right, you claim it's the q-expectation values
It is some q-expectations, primarily those discussed in the above three posts.
vanhees71 said:
but this is a common misconception of some not too well thought-out introductory textbooks, leading to the widespread wrong interpretation of the Heisenberg uncertainty relation(s) as restrictions on the accuracy of measurements and the inevitable disturbance of the system by measurements.
I discuss in Subsection 2.5 of Part II Heisenberg's uncertainty relation and what it means in the thermal interpretation. It is completely independent of measurement (not a restriction on the accuracy of measurements) and limits the accuracy to which the numerical value of a quantity is meaningful.

The position of a car - unlike that of its center of mass - is not determined to mm accuracy, and if you specify its position to that accuracy, the trailing digits are spurious. Similarly for position or spin in quantum mechanics.
 
Last edited:
  • #312
DarMM said:
Something strange happened with the formatting there!
This happens whenever you quote or reply to part of an answer, and this part contains formulas; these then appear as a mess, as in post #308 by @vanhees71. It has been a bug for a long time... (@Greg Bernhardt - can something be done about it?)

To get the formulas quoted correctly, always reply to the whole answer, and cut out the unwanted part (and/or insert additional QUOTE pairs).
 
Last edited:
  • Like
Likes DarMM
  • #313
vanhees71 said:
Well, this is a contradictio in adjecto. It's a very clear example of the misconception of taking the expectation values as the "true observables". What's measured are the values ##\pm 1/2##, not the expectation value, which may well be ##-1/4##.
Note that the uncertainty of the spin in such a state is quite large, which explains why the measured values ##\pm 1/2## can be so far off from the (according to the thermal interpretation) true value ##-1/4## of the spin.
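To quantify this (standard spin-##\tfrac{1}{2}## algebra with ##\hbar = 1##, added here and not part of the original post): since ##S_z^2 = \tfrac{1}{4}##, the state with ##\langle S_z\rangle = -\tfrac{1}{4}## has uncertainty
$$\sigma(S_z) = \sqrt{\langle S_z^2\rangle - \langle S_z\rangle^2} = \sqrt{\tfrac{1}{4} - \tfrac{1}{16}} = \frac{\sqrt{3}}{4} \approx 0.43,$$
which is of the same order as the deviations ##|\pm\tfrac{1}{2} - (-\tfrac{1}{4})|## of the individual readings from the q-expectation; indeed it equals their root-mean-square deviation.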
 
  • #314
vanhees71 said:
There's no mystery about the SG experiment and the involved measurement devices
Yes; see post #135 for how this is viewed by the thermal interpretation.

But there is some mystery about how Nature manages to get the right correlations in experimental tests of long-distance entanglement (the case under discussion here). Quantum mechanics predicts the right correlations, but so far does not explain how it is possible that these are actually obtained!
 
  • #315
vanhees71 said:
Natural science is about what can be objectively observed and not about some metaphysical ontology!
But isn't interpretation about some clarity that only comes with some level of ontology that our brains can wrap around?
 
