# I Measurement results and two conflicting interpretations

#### vanhees71

Gold Member
If one assumes the former, i.e., that measurement results are not eigenvalues but only approximations to eigenvalues (the new convention presented explicitly by @vanhees71 in his lecture notes, which I have never seen elsewhere), one has other problems:
In which of my lecture notes, and where, have you read this interpretation? Of course, I never wanted to claim this nonsense. Maybe I was sloppy in some formulation, so that you could come to this conclusion.

It's of course true that any measurement device has some imprecision, but you can in principle measure as precisely as you want, and theoretical physics deals with idealized measurements. The possible outcomes of measurements are the eigenvalues of the operator of the measured observable.

#### A. Neumaier

If your claim that one always measures a q-expectation value were true, the SGE would always result in a single strip of Ag atoms on the screen, as predicted by classical physics
No. I only claim approximate measurements, in this case with a big (and predictably big!) error, and I did not claim that the errors are normally or uniformly distributed as you seem to assume. In the present case, an analysis of the slow manifold of the measurement process would show that the errors have a highly peaked bimodal distribution centered at the two spots, just as what one gets when measuring a classical diffusion process in a double-well potential.

Note that random measurements at the two spots that one actually sees in an experiment are crude approximations of any point in between, in particular of the q-expectation. This can be seen by looking at the screen from very far away where one cannot resolve the two spots.
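The double-well analogy can be made concrete with a small simulation. This is only an illustrative sketch, not the slow-manifold analysis mentioned above: the potential $V(x)=(x^2-1)^2$, the noise strength, and the Euler-Maruyama discretization are all choices of mine. Individual runs of the noisy dynamics end up near one of the two wells, while the ensemble mean stays near the midpoint, mimicking the bimodal error distribution around the q-expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_double_well(n_paths=2000, n_steps=4000, dt=1e-3, noise=0.6):
    """Overdamped Langevin dynamics in V(x) = (x^2 - 1)^2:
    dx = -V'(x) dt + noise * sqrt(dt) dW."""
    x = np.zeros(n_paths)  # all paths start at the unstable midpoint
    for _ in range(n_steps):
        drift = -4.0 * x * (x**2 - 1.0)   # -V'(x)
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

x = simulate_double_well()
# The ensemble mean sits near 0 (the analogue of the q-expectation),
# but individual outcomes pile up near the wells at x = +1 and x = -1:
print(np.mean(x))          # close to 0
print(np.mean(np.abs(x)))  # close to 1
```

Hopping between the wells is exponentially suppressed at this noise level, so each run commits to one well, just as each Ag atom ends up in one of the two spots.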

By the way, the quote you answered to was a comment to Heisenberg's statement, not to mine! You are welcome to read the whole context in which he made his statement! See also the Heisenberg quote in post #20.

Last edited:

#### vanhees71

Sorry, post #2 is fine (though I exclude your comments on the thermal interpretation, on which I reserve judgement, since I haven't studied this new attempt at interpretation). It is your subsequent post with your usual remarks about interpretation that I don't quite agree with.
Ok, fine with me, I know that you don't accept the minimal statistical interpretation, but I think it is important to clearly see that the claim that what's measured on a quantum system is always the q-expectation value is utterly wrong. Already the Stern-Gerlach experiment contradicts this (see my posting #25), and this is an empirical fact independent of any interpretation.

NB: At the time, the SGE was even interpreted in terms of "old quantum theory", which by chance gave the correct result due to two compensating errors, namely assuming the wrong gyrofactor 1 of the electron and, in modern terms, assuming spin 1 rather than spin 1/2, the latter's existence being unknown at the time. What I never understood is the argument why one wouldn't see 3 rather than 2 strips on the screen, because already in old QT, for spin 1, one would expect three values for $\sigma_z$, namely $\pm \hbar$ and $0$.

#### vanhees71

No. I only claim approximate measurements, in this case with a big (and predictably big!) error, and I did not claim that the errors are normally or uniformly distributed as you seem to assume. In the present case, an analysis of the slow manifold of the measurement process would show that the errors have a highly peaked bimodal distribution centered at the two spots, just as what one gets when measuring a classical diffusion process in a double-well potential.

Note that random measurements at the two spots that one actually sees in an experiment are crude approximations of any point in between, in particular of the q-expectation. This can be seen by looking at the screen from very far away where one cannot resolve the two spots.

By the way, the quote you answered to was a comment to Heisenberg's statement, not to mine! You are welcome to read the whole context in which he made his statement! See also the Heisenberg quote in post #20.
It's of course always possible to make the resolution of the apparatus so bad that you don't resolve the two lines. Then you simply have a big blur, but then I doubt very much that you always get the q-expectation value rather than an inconclusive determination of the value to be measured.

#### vanhees71

Well, I usually don't agree with Heisenberg's interpretations, and in the quote in #20 he deals with the continuous variables of position and momentum (of course, the correct discussion is about position and (canonical) momentum, not velocity, but that may be because Heisenberg was writing a popular (pseudo)science book rather than a physics book in this case). Heisenberg even got his own uncertainty relation wrong at first and was corrected by Bohr, and I think Heisenberg's misunderstanding of the uncertainty relation is closely related to the misunderstanding of basic principles of quantum mechanics that leads to your claim that what's measured is always the q-expectation value rather than the value the observable takes when an individual system is measured.

To answer Heisenberg's question in the quote: Of course, you can always prepare a particle in a state with "pretty unprecise" position and "pretty unprecise" momentum. I don't understand what Heisenberg is after to begin with, nor what this has to do with your thermal interpretation.

#### A. Neumaier

In which of my lecture notes, and where, have you read this interpretation? Of course, I never wanted to claim this nonsense. Maybe I was sloppy in some formulation, so that you could come to this conclusion.
In https://th.physik.uni-frankfurt.de/~hees/publ/quant.pdf from 2008 you write:
Hendrik van Hees said:
Wird nun an dem System die Observable $A$ gemessen, so ist das Resultat der Messung stets ein (verallgemeinerter) Eigenwert des ihr zugeordneten selbstadjungierten Operators $A$.
[Moderator note (for completeness, although identical with what is said anyway): If the observable $A$ is measured on the system, then the result of the measurement is always a (generalized) eigenvalue of the according self-adjoint operator $A$.]

which agrees with tradition and the Wikipedia formulation of Born's rule, and says that measurements always produce eigenvalues, hence never approximate eigenvalues. Later, on p.20 of https://th.physik.uni-frankfurt.de/~hees/publ/stat.pdf from 2019, you seem to have striven for more precision in the language and only require:
Hendrik van Hees said:
A possible result of a precise measurement of the observable $O$ is necessarily an eigenvalue of the corresponding operator $\mathbf O$.
You now 1. distinguish between the observable and the associated operator and 2. have the qualification 'precise', both not present in the German version.

Thus it was natural for me to assume that you deliberately and carefully formulated it in this way in order to account for the limited resolution of a measurement device, and distinguish between 'precise', idealized measurements that yield exact results and 'unprecise', actual measurements that yield approximate results, as you did in post #2 of the present thread:
Born's statistical interpretation does not claim that there is no measurement error but, as usual when formulating a theory, discusses first the case that the measurement errors are so small that they can be neglected. If you deal with unprecise measurements the analysis becomes much more complicated
It's of course true that any measurement device has some imprecision, but you can in principle measure as precisely as you want, and theoretical physics deals with idealized measurements. The possible outcomes of measurements are the eigenvalues of the operator of the measured observable.
Here you seem to refer again to idealized measurements when you make the final statement, as no measurement error is mentioned.

Thus you have hypothetical measurement results ('precise', idealized) representing the true, predictable possible values and actual measurement results ('imprecise') representing the unpredictable actual values of the measurements. Their relation is an unspecified approximation about which you only say
That cannot be theorized about but has to be treated individually for any experiment and is thus not subject of theoretical physics but part of a correctly conducted evaluation of experimental results for the given (preparation and) measurement procedure.
It is this postulated dichotomy that I analyzed in my post #21.

Last edited by a moderator:

#### A. Neumaier

It's of course always possible to make the resolution of the apparatus so bad that you don't resolve the two lines. Then you simply have a big blur, but then I doubt very much that you always get the q-expectation value rather than an inconclusive determination of the value to be measured.
That's irrelevant.

The thermal interpretation never claims the caricature you take it to claim, namely that one always gets the q-expectation. It only claims that the measurement result one gets approximates the predicted q-expectation $\langle A\rangle$ with an error of the order of the predicted uncertainty $\sigma_A$. When the latter is large, as in the case of a spin measurement, this is true even when the q-expectation vanishes and the measured values are $\pm 1/2$!
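The numbers in this claim are easy to verify for spin 1/2. The following sketch (the state and the units, $\hbar = 1$, are my own illustrative choices) computes the q-expectation $\langle S_z\rangle$ and the uncertainty $\sigma_{S_z}$ for a spin prepared along $+x$:

```python
import numpy as np

Sz = np.diag([0.5, -0.5])                # spin-z operator, units of hbar
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # spin prepared along +x

exp_Sz = psi @ Sz @ psi                        # q-expectation <S_z> = 0
var_Sz = psi @ (Sz @ Sz) @ psi - exp_Sz**2     # <S_z^2> - <S_z>^2
sigma = np.sqrt(var_Sz)

print(exp_Sz)  # 0.0
print(sigma)   # 0.5
```

The measured values $\pm 1/2$ thus deviate from the q-expectation $0$ by exactly one predicted uncertainty $\sigma_{S_z} = 1/2$, consistent with the claim above.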

#### vanhees71

In https://th.physik.uni-frankfurt.de/~hees/publ/quant.pdf from 2008 you write:

which agrees with tradition and the Wikipedia formulation of Born's rule, and says that measurements always produce eigenvalues, hence never approximate eigenvalues. Later, in 2019, you seem to have striven for more precision in the language and only require:

Thus it was natural to assume that you deliberately and carefully formulated it in this way in order to account for the limited resolution of a measurement device, and distinguish between 'precise', idealized measurements that yield exact results and 'unprecise', actual measurements that yield approximate results, as you did in post #2 of the present thread:

Here you seem to refer again to idealized measurements when you make the final statement, as no measurement error is mentioned.

Thus you have hypothetical measurement results ('precise', idealized) representing the true, predictable possible values of the measurements and actual measurement results ('imprecise') representing the unpredictable actual values of the measurements. Their relation is an unspecified approximation about which you only say

It is this postulated dichotomy that I analyzed in my post #21.
Ehm, where is the difference between the German and the English quote (I don't know where the English one comes from)? Of course the English one is a bit sloppy, because unbounded operators usually have not only eigenvalues but also "generalized" eigenvalues (i.e., if one refers to a value in the continuous part of the spectrum of the observable).

As expected, in my writing I have not changed my opinion over the years; I never claimed that the established and empirically very satisfactorily working QT were wrong on this point. I don't discuss instrumental uncertainties in my manuscripts, because they are about theoretical physics. Analyzing instrumental uncertainties cannot be done in a general sense but naturally depends on the individual measurement device to be analyzed. This in general has very little to do with quantum mechanics at all.

Of course the very definition of an observable implies that you have to be able to (at least in principle) measure it precisely, i.e., to make the instrumental uncertainties negligibly small. In this connection it is very important to distinguish the "preparation process" (defining the state of the quantum system) and the "measurement process" (defining observables). The uncertainty/precision of a measurement device is independent of the uncertainty/precision of the preparation.

E.g., if you prepare the momentum of an electron quite precisely, then due to the Heisenberg uncertainty relation its position is quite imprecisely determined. Despite this large uncertainty of the electron's position, you can measure its position as accurately as you want (e.g., by using a CCD camera of high resolution). For each electron you'll measure its position very accurately. The uncertainty of the position measurement is determined by the high resolution of the CCD camera, not by the position resolution of the electron's preparation (which is assumed to be much more uncertain than the resolution of the CCD camera). For the individual electron you'd in general not measure the q-expectation value given by the prepared state (not even "approximately", whatever you mean by this vague statement).
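The scenario described here can be sketched numerically. All numbers below are invented for illustration: the preparation spread is taken much larger than the instrument resolution, the true positions are sampled from the Born-rule distribution of a Gaussian wavepacket, and the CCD is modeled as adding small Gaussian read-out noise. Each single reading is then accurate to the instrument's resolution, yet individual readings scatter over the full preparation width rather than clustering at the q-expectation:

```python
import numpy as np

rng = np.random.default_rng(1)

sigma_prep = 1.0   # position spread of the prepared state (wide wavepacket)
sigma_ccd = 0.01   # resolution of the CCD camera (much finer)
n = 10000

true_positions = rng.normal(0.0, sigma_prep, size=n)  # Born-rule sample
readings = true_positions + rng.normal(0.0, sigma_ccd, size=n)

# Each single reading is accurate to sigma_ccd ...
print(np.max(np.abs(readings - true_positions)))  # a few * sigma_ccd
# ... yet individual readings scatter far from the q-expectation 0:
print(np.std(readings))                           # about sigma_prep
```

The two printed numbers separate the two notions of uncertainty being debated: instrumental error (per-electron accuracy) versus the spread due to the prepared state.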

#### vanhees71

That's irrelevant.

The thermal interpretation never claims the caricature you take it to claim, namely that one always gets the q-expectation. It only claims that the measurement result one gets approximates the predicted q-expectation $\langle A\rangle$ with an error of the order of the predicted uncertainty $\sigma_A$. When the latter is large, as in the case of a spin measurement, this is true even when the q-expectation vanishes and the measured values are $\pm 1/2$!
But this is what you said! The "beable" is the q-expectation value and not the usual definition of an observable. If this is now all of a sudden not true anymore, we are back to the very beginning, since now your thermal interpretation is again undefined! :-(

#### A. Neumaier

Ehm, where is the difference between the German and the English quote (I don't know where the English one comes from)?
I edited my post, which now gives the source and explicitly spells out the differences. Please read it again.

#### vanhees71

That's irrelevant.

The thermal interpretation never claims the caricature you take it to claim, namely that one always gets the q-expectation. It only claims that the measurement result one gets approximates the predicted q-expectation $\langle A\rangle$ with an error of the order of the predicted uncertainty $\sigma_A$. When the latter is large, as in the case of a spin measurement, this is true even when the q-expectation vanishes and the measured values are $\pm 1/2$!
That's simply not true! The precision of the measurement device is determined by the measurement device, not by the standard deviation due to the preparation (i.e., the prepared state) of the measured system.

If you want to establish that the standard deviation of $A$ is $\sigma_A$ you have to measure with much higher precision than $\sigma_A$, and you have to use a sufficiently large ensemble to gain "enough statistics" to establish that the standard deviation due to the state preparation is $\sigma_A$.

Again for the SGE: If the resolution of your spin-$z$-component measurement resolves the measured values $\pm 1/2$ and the state is prepared such that $\langle s_z \rangle=0$, you never find the result $\langle s_z \rangle=0$ with some uncertainty; you find with certainty either $+1/2$ or $-1/2$. To establish that the standard deviation is $\sigma_{s_z}$ you simply have to do the measurement often enough on equally prepared spins to gain enough statistics to verify this prediction (at the given confidence level, which usually is $5 \sigma_{\text{measurement}}$, where $\sigma_{\text{measurement}}$ is the standard deviation of the measurement, not that of the quantum state!).
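The ensemble-statistics procedure described here can be sketched in a few lines. The sample size and seed are arbitrary choices of mine; the sketch simulates ideal $\pm 1/2$ outcomes for a state with $\langle s_z\rangle = 0$ and estimates the mean and standard deviation from the ensemble:

```python
import numpy as np

rng = np.random.default_rng(2)

# State with <s_z> = 0: outcomes +1/2 and -1/2 with equal probability.
n = 100000
outcomes = rng.choice([0.5, -0.5], size=n)

mean_est = outcomes.mean()            # estimate of <s_z>
sigma_est = outcomes.std(ddof=1)      # estimate of the standard deviation

print(mean_est)   # near 0, within about 1/(2*sqrt(n))
print(sigma_est)  # near 0.5, the predicted standard deviation
```

No single outcome equals $0$; the value $\langle s_z\rangle = 0$ and the spread $1/2$ only emerge as ensemble statistics, which is the point being made here.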

#### atyy

Ok, fine with me, I know that you don't accept the minimal statistical interpretation, but I think it is important to clearly see that the claim that what's measured on a quantum system is always the q-expectation value is utterly wrong.
I do accept the minimal statistical interpretation :) the same as Dirac, L&L, Messiah, Cohen-Tannoudji etc, but maybe we should discuss this elsewhere.

Yes, the thermal interpretation completely baffles me (including the part about the measured result being a q-expectation), but maybe I'm missing something because I haven't spent a lot of time studying it.

#### A. Neumaier

But this is what you said! The "beable" is the q-expectation value and not the usual definition of an observable. If this is now all of a sudden not true anymore, we are back to the very beginning, since now your thermal interpretation is again undefined! :-(
No; this was never true. You were misreading the concept of a beable. Maybe this is the source of our continuing misunderstandings.

According to Bell, a beable of a system is a property of the system that exists and is predictable independently of whether one measures anything. A measurement is the reading of a measurement value from a measurement device interacting with the system that is guaranteed to approximate the value of a beable of that system within the claimed accuracy of the measurement device.

In classical mechanics, the exact positions and momenta of all particles in the Laplacian universe are the beables, and a measurement device for a particular prepared particle is a macroscopic body coupled to this particle, with a pointer such that the pointer reading approximates some position coordinate in a suitable coordinate system. Clearly, any given measurement never yields the exact position but only an approximation of it.

In a Stern-Gerlach experiment with a single particle, the beables are the three real-valued components of the q-expectation $\bar S$ of the spin vector $S$, and the location of the spot produced on the screen is the pointer. Because of the semiclassical analysis of the experimental arrangement, the initial beam carrying the particle splits in the magnetic field into two beams, hence only two spots carry enough intensity to produce a response. Thus the pointer can measure only a single bit of $\bar S$. This is very little information, whence the error is predictably large.

The thermal interpretation predicts everything: the spin vector, the two beams, the two spots, and the (very low) accuracy with which these spots measure the beable $S_3$.

#### vanhees71

I do accept the minimal statistical interpretation :) the same as Dirac, L&L, Messiah, Cohen-Tannoudji etc, but maybe we should discuss this elsewhere.

Yes, the thermal interpretation completely baffles me (including the part about the measured result being a q-expectation), but maybe I'm missing something because I haven't spent a lot of time studying it.
Well, the problem is that the "thermal interpretation" is either wrong for very obvious reasons or not clearly defined yet, since now again we have learned that the measured result is not the q-expectation value, although we have been discussing precisely this earlier statement for weeks in several different forks of the initial thread. I'm also baffled, but for obviously different reasons.

#### A. Neumaier

you never find the result $\langle s_z \rangle=0$ with some uncertainty but you find with certainty either $+1/2$ or $-1/2$
If I find the result $+1/2$ or $-1/2$ with certainty, I can be sure that the measurement error according to the thermal interpretation is exactly $1/2$, since this is the absolute value of the difference between the measured value and the true value (defined in the thermal interpretation to be the q-expectation $0$). As a consequence, I can be sure that the standard deviation of the measurement results is also exactly $1/2$.

Last edited:

#### vanhees71

No; this was never true. You were misreading the concept of a beable. Maybe this is the source of our continuing misunderstandings.

According to Bell, a beable of a system is a property of the system that exists and is predictable independently of whether one measures anything. A measurement is the reading of a measurement value from a measurement device interacting with the system that is guaranteed to approximate the value of a beable of that system within the claimed accuracy of the measurement device.

In classical mechanics, the exact positions and momenta of all particles in the Laplacian universe are the beables, and a measurement device for a particular prepared particle is a macroscopic body coupled to this particle, with a pointer such that the pointer reading approximates some position coordinate in a suitable coordinate system. Clearly, any given measurement never yields the exact position but only an approximation of it.

In a Stern-Gerlach experiment with a single particle, the beables are the three real-valued components of the q-expectation $\bar S$ of the spin vector $S$, and the location of the spot produced on the screen is the pointer. Because of the semiclassical analysis of the experimental arrangement, the initial beam carrying the particle splits in the magnetic field into two beams, hence only two spots carry enough intensity to produce a response. Thus the pointer can measure only a single bit of $\bar S$. This is very little information, whence the error is predictably large.

The thermal interpretation predicts everything: the spin vector, the two beams, the two spots, and the (very low) accuracy with which these spots measure the beable $S_3$.
It's always dangerous to use philosophically unsharp definitions. If this is what Bell refers to as a "beable", it's not a subject of physics, because physics is about what's observed objectively in nature. Kant's "Ding an sich" is a fiction and not a subject of physics!

In your 3rd paragraph you already redefine the word "beable" to have the usual meaning of "observable". Why not then use the clearly defined word "observable"?

All this contributes to the confusion rather than to the clarification of what your "thermal interpretation" is really meant to mean! If you are not able to express it in standard physics terms and always refer to unsharp notions of philosophy, it's of course hard to ever come to a conclusion about it.

#### A. Neumaier

In your 3rd paragraph you already redefine the word "beable" to have the usual meaning of "observable".
I don't understand. Where precisely do I do it?
Why not then use the clearly defined word "observable"?
I cannot, because the word observable is loaded in quantum mechanics with the traditional interpretation.
If you are not able to express it in standard physics terms and always refer to unsharp notions of philosophy
The word 'observable' is also an unsharp and philosophical notion, with very different meanings in classical and quantum mechanics.

#### A. Neumaier

Kant's "Ding an sich" is a fiction and not subject of physics!
It is only as fictional as your idealized measurements from
It's of course true that any measurement device has some imprecision, but you can in principle measure as precisely as you want, and theoretical physics deals with idealized measurements. The possible outcomes of measurements are the eigenvalues of the operator of the measured observable.

#### bobob

I've read through your paper and it's not clear to me exactly what predictions you can make about experimental results. In particular, the thermal interpretation appears to hinge on the distinction of what you define as point causality, joint causality and extended causality.

Start with an example. Consider two antennas for which the emission of radiation is spacelike, and a point in the future at which I am going to measure the radiation arriving at my antenna. Do the amplitudes sum coherently or incoherently? Quantum mechanically, since phases are not measurable quantities, the difference is whether or not you can determine which wave corresponds to which source. The EPR experiment is simply the same experiment with past and future exchanged. The correlations at spacelike points only occur if the two photons have a common causal origin and behave like a single entity.

The Born rule is buried in the notion of extended causality. Since phases are not measurable quantities, you cannot specify a configuration of entities on a spacelike surface which meets the conditions required to be extendedly causal, since such a configuration would require specifying the phases of those entities. An interference pattern that is actually deterministic would then require each point in the interference pattern (including the nulls) to be the coherent sum of different extendedly causal configurations differing by phases. Instead of the Born rule being used to give a probability directly from the amplitudes associated with a past configuration in which all physical quantities are measurable, the Born rule in this case gives a probability of which past configuration is being measured, where each past configuration differs by a quantity which is not measurable.

Since it's the unmeasurability of phases that gives rise to QFT, I'm not sure where that leaves the thermal interpretation in that regard. Phases are well defined classically, so I cannot see any way around having to deal with a quantum-mechanically unmeasurable quantity to obtain classical determinism, unless you can define an experiment that makes that quantity measurable, in which case you have actually surpassed quantum mechanics by making it truly classical.

The only out here for extended causality is the extended causality that results from a point causality in the past of the extendedly causal entities, which leaves you back at standard quantum mechanics.

#### A. Neumaier

the thermal interpretation appears to hinge on the distinction of what you define as point causality, joint causality and extended causality.
Only with regard to explaining long-distance correlation experiments. In terms of predictions, it is identical to standard quantum mechanics, as it only imposes a different interpretation on it rather than modifying it (as Bohmian mechanics or GRW do).

#### PeterDonis

Mentor
Consider two antennas for which the emission of radiation is spacelike
How can emission of radiation be spacelike?

#### bobob

How can emission of radiation be spacelike?
Phased array radar. Emission of photons from sources which are spacelike separated. I could have worded that better as "emission from sources which are spacelike separated".

Last edited:

#### bobob

Only with regard to explaining long-distance correlation experiments. In terms of predictions, it is identical to standard quantum mechanics, as it only imposes a different interpretation on it rather than modifying it (as Bohmian mechanics or GRW do).
Right, but that is the point here. Those notions of causality are ultimately just pointlike causality. There is no difference, since extended causality requires a common origin in a past point causality. The difference would only be meaningful if you could specify an extended causality completely independently of one with a common origin.

#### PeterDonis

Emission of photons from sources which are spacelike seperated
Ah, ok, that makes sense.

#### A. Neumaier

Those notions of causality are ultimately just pointlike causality. There is no difference since the extended causality requires a common origin in a past point causality. The difference would only be meaningful if you could specify an extended causality completely and independently from one with a common origin.
For independent point sources, point causality and extended causality are the same. For extended nonclassical sources they are not, because of the entanglement of their constituents.
