# Measurement results and two conflicting interpretations

TL;DR Summary
An illustration is given of the differences in the interpretation of measurement results in the thermal interpretation and in Born's statistical interpretation.
The following two examples illustrate the differences between the thermal interpretation
and Born's statistical interpretation for the interpretation of measurement results.

1. Consider some piece of digital equipment with a 3-digit display measuring some physical quantity ##X## using ##N## independent measurements. Suppose the measurement results were 6.57 in 20% of the cases and 6.58 in 80% of the cases. Every engineer or physicist would compute the mean ##\bar X= 6.578## and the standard deviation ##\sigma_X=0.004## and conclude that the true value of the quantity ##X## deviates from ##6.578## by an error of the order of ##0.004N^{-1/2}##.
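For readers who want to check the arithmetic, here is a minimal sketch in Python (the sample size N = 1000 is an arbitrary choice of mine; any N with a 20/80 split gives the same mean and spread):

```python
from statistics import fmean, pstdev

# Hypothetical raw data reproducing the example: 20% of the readings show
# 6.57 and 80% show 6.58 (N = 1000 is an arbitrary illustrative choice).
N = 1000
readings = [6.57] * (N // 5) + [6.58] * (4 * N // 5)

mean = fmean(readings)            # sample mean: 6.578
sigma = pstdev(readings)          # spread of the individual readings: 0.004
error_of_mean = sigma / N**0.5    # the 0.004*N^(-1/2) error estimate

print(mean, sigma, error_of_mean)
```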

2. Consider the measurement of a Hermitian quantity ##X\in\mathbb{C}^{2\times 2}## of a 2-state quantum system in the pure up state, using ##N## independent measurements, and suppose that we obtain exactly the same results. The thermal interpretation proceeds as before and draws the same conclusion. But Born's statistical interpretation proceeds differently and claims that there is no measurement error. Instead, each measurement result reveals one of the eigenvalues ##x_1=6.57## or ##x_2=6.58## in an unpredictable fashion, with probabilities ##p=0.2## and ##1-p=0.8##, up to statistical errors of order ##O(N^{-1/2})##. For ##X=\pmatrix{6.578 & 0.004 \cr 0.004 & 6.572}##, both interpretations of the results for the 2-state quantum system are consistent with theory. However, Born's statistical interpretation deviates radically from engineering practice, without any apparent necessity.
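The numbers claimed for this matrix can be verified directly (a sketch assuming numpy; the variable names are mine): the eigenvalues are 6.57 and 6.58, the Born probabilities in the up state are 0.2 and 0.8, and the resulting mean and deviation coincide with the engineer's 6.578 and 0.004:

```python
import numpy as np

X = np.array([[6.578, 0.004],
              [0.004, 6.572]])   # the observable from the example
up = np.array([1.0, 0.0])        # the pure up state

# Born's rule: possible results are the eigenvalues of X, each obtained
# with probability |<eigenvector|state>|^2.
eigvals, eigvecs = np.linalg.eigh(X)    # ascending order: [6.57, 6.58]
probs = np.abs(eigvecs.T @ up) ** 2     # [0.2, 0.8]

mean = probs @ eigvals                          # q-expectation: 6.578
sigma = np.sqrt(probs @ eigvals**2 - mean**2)   # standard deviation: 0.004

print(eigvals, probs, mean, sigma)
```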

Clearly, the thermal interpretation is much more natural than Born's statistical interpretation, since it needs no other conventions than those that are valid for all cases where multiple measurements of the same quantity produce deviating results.

1. is of course correct and has nothing to do with QT to begin with. It describes the function of the measurement device, which is obviously constructed to measure expectation values of ##X## on an ensemble of ##N## (hopefully uncorrelated but identically) prepared systems. That the measurement then yields an expectation value, together with the standard deviation as an estimate of the statistical accuracy, is a tautology, because that is how the measurement device has been constructed.

2. A quantity ##X## is not Hermitian by itself; it is something defined by a measurement. In the quantum formalism it is described by a self-adjoint operator ##\hat{X}## (which on finite-dimensional unitary spaces is the same as Hermitian), which for a two-level system can be represented, given a basis of the 2D unitary vector space, by a matrix in ##\mathbb{C}^{2 \times 2}##.

Born's statistical interpretation does not claim that there is no measurement error; as usual when formulating a theory, it first discusses the case where the measurement errors are so small that they can be neglected. If you deal with imprecise measurements, the analysis becomes much more complicated, since you then deal not only with the quantum statistics a la Born but also with the statistics (and systematics!) of the inaccurate measurement device. That cannot be theorized about in general but has to be treated individually for each experiment; it is thus not the subject of theoretical physics but part of a correctly conducted evaluation of experimental results for the given preparation and measurement procedure.

As already detailed in a posting in another thread (forking threads is of course also a good way to confuse a discussion), the thermal interpretation is NOT equivalent to the standard interpretation. It is only equivalent if you use a measurement device as you describe here. If you do not resolve individual accurate measurements but by construction simply measure coarse-grained expectation values, then of course you get these expectation values (with the usual statistics, provided the errors are Gaussian distributed, as you seem tacitly to assume here). But whenever physicists do resolve individual measurement results, the standard interpretation has not been falsified: one finds the eigenvalues of the Hermitian operator rather than q-expectation values, and the statistics is consistent with the probabilities predicted a la Born.

Just saying that sufficiently coarse-grained ensemble-averaged measurements deliver q-expectation values is not a new interpretation but a tautology! It doesn't solve any (real or philosophical) interpretational problem of QT!

(forking threads is of course also a good way to confuse a discussion)
I am forking specific problems off from the discussion of generalities. The other thread is so long that the specific situations discussed there get drowned, and searching for them becomes quite time-consuming.

It doesn't solve any (real or philosophical) interpretational problem of QT!
Of course not for you, since you always argue that there are no such problems. Nonexistent problems cannot be solved. I am surprised that you take part at all in discussions about the foundations, since you always insist that there is nothing to discuss there.

The thermal interpretation is for those who, like me, find the standard interpretation unsatisfactory and look for solutions of problems you don't even perceive.

But the thermal interpretation as presented in #1 of this thread is a tautology, because you assume to have a measurement device constructed to measure expectation values, which is of course sometimes possible but is not what the issue of "interpretational problems" is about. Then it's just empty.

On the other hand, you claim that what is always measured are q-expectation values, but that obviously contradicts observational practice in the labs. Experimentalists are very well able to measure observables of individual quantum systems accurately, and what comes out so far is what standard QT predicts, not q-expectation values.

I'm taking part in discussions about so-called "interpretational problems of QT" because I find them interesting. It's a social phenomenon that's nearly incomprehensible to me. I can understand why Einstein fought against the Copenhagen collapse doctrine of his time, because it directly and fundamentally contradicts the relativistic causality principle. Since this issue, however, has been resolved for quite a while by, e.g., using collapse-free interpretations like the minimal interpretation or many worlds, it's enigmatic to me why there's still an issue.

Even Weinberg in his newest textbook tries to address an apparent issue with measurement problems by showing that it's impossible to derive Born's rule from the other postulates of QT, but what the point of this task is, is not clear to me either. Born's rule is simply a postulate of the theory, discovered early in the development of modern QT because it very naturally resolves the wave-particle paradox of the "old QT". That it is the ##|\psi|^2## rule that correctly provides the probabilities has also been established quite recently by precise experiments ruling out higher-order contributions to "interference terms", like, e.g., in

https://arxiv.org/abs/1612.08563

For me the final death blow to the "measurement problem" is Bell's work on local deterministic HV models and the huge body of research by both theorists and experimentalists following it. The many very accurate Bell tests so far clearly show that the inseparability predicted by QT, due to the possibility of entanglement between far-distant parts of composite quantum systems, is real. The stronger-than-classical correlations between far-distant (indetermined) observables are for me an empirically established fact.

In short: what QT in the minimal statistical interpretation predicts describes correctly, with high precision, all empirical facts about Nature established so far. Thus, from a physics point of view, there is no problem with the theory or with the theory's interpretation (interpretation in the scientific sense of providing empirically testable predictions).

Maybe there's a problem from some philosophical points of view, but on the other hand, why should Nature care about our philosophical prejudices about how she should behave ;-)).

Last but not least, the irony is that there indeed still is a real problem in modern physics, namely the inconsistency of General Relativity and Quantum Theory: there is no working quantum theory of the gravitational interaction (which may well mean that what is needed is a quantum theory of spacetime, given that it is pretty hard to find relativistic theories of gravitation not related to spacetime geometry). One might argue that GR (or some modification of it, but still a classical theory) could be all there is concerning gravitation, but then there is the problem of the unavoidable singularities, such as those of black holes or the big-bang singularity of cosmology. Although my astro colleagues have just made a "picture of a black hole", that doesn't mean that we understand everything related to black holes. In particular, it seems that classical relativistic theories of gravitation, including GR, unavoidably have this issue with singularities, and there these theories indeed break down. (GR must be very close to any correct theory, at least as far as classical theories can be, since the evidence for it keeps getting more accurate; notably, the celebrated EHT result is in accordance with the expectations of GR, and this is indeed in the strong-gravitation limit of a "mass monster" of about 6.5 billion solar masses.)

Another, maybe related, fundamental issue is the question of the nature of dark energy. Recent redshift-distance measurements based on quasars as standard candles, which agree with other standard candles like type Ia supernovae but provide more precise data at larger redshifts, indicate that the simple assumption of a "cosmological constant" is probably wrong; this would probably also help to resolve the discrepancies between the values of the Hubble constant measured in different ways:

https://www.aps.org/publications/apsnews/201805/hubble.cfm
https://doi.org/10.1038/s41550-018-0657-z

I think these are the true big scientific problems in contemporary physics, not some philosophical quibbles with the interpretations of the very well established modern QT. Maybe these problems indeed lead to a modification and extension not only of the still classical spacetime description a la GR, or variations thereof, but also of QT. Who knows? But I don't think that we will find a solution of these scientific problems by beating the dead horse of the so-called "measurement problem of" or "interpretational issues with" QT.

But I don't think that we will find a solution of these scientific problems by beating the dead horse of the so-called "measurement problem of" or "interpretational issues with" QT.
But the horse never died, and, as I already hinted at in my PF interview, the measurement problem may well be closely related to a solution of the quantum gravity problem. Indeed, the problems you mention all require that one consider the whole universe (or at least huge parts of it, which allow astronomers to make breakthrough pictures from very far away) as a single quantum system - something the statistical interpretation cannot do. Unlike you, both Weinberg and Peres are well aware of this. For Peres see this post that you never commented on.

you assume to have a measurement device constructed to measure expectation values
No, I didn't make this assumption; you read it into my words. For any measurement device giving different results upon repetition, one always draws this conclusion. This is learned in the first experimental physics exercises, where it is applied to repeated measurements of masses or currents, and one asks for the best value - the value closest to the true one. A spin measurement repeated on an ensemble of identically prepared spins gives different 2-valued results upon repetition, hence is of this kind. The individual results are not reproducible, only the statistics is, just as with mass or current measurements. And the 2-valuedness may with good reason be regarded as a property of the detector, not of the continuous spin! There is no stringent reason to assume that this law of statistics breaks down in microphysics.

A photodetector is also such a digital device, since it can respond only in integer values, no matter how weak the field is. But the intensity is a continuous parameter. Hence at low intensity, a photodetector necessarily stutters. See post #614 in the main thread.

On the other hand, you claim that what is always measured are q-expectation values, but that obviously contradicts observational practice in the labs. Experimentalists are very well able to measure observables of individual quantum systems accurately, and what comes out so far is what standard QT predicts, not q-expectation values.
No matter how accurately you measure the position of a dot created by a single event in a Stern-Gerlach experiment, it always gives only a very inaccurate measurement of the incident field intensity, which is given by a q-expectation.

charters
However, Born's statistical interpretation deviates radically from engineering practice, without any apparent necessity.

The first reason is the need to explain interference. If the uncertainty were just epistemic uncertainty due to measurement-device error or resolution limits, we would expect the 20-80 split between 6.57 and 6.58 to be insensitive to whether the device is placed in the near field or the far field of the (beam) source. But in QM the eigenstates can interfere, so the probability of measuring 6.57 or 6.58 can vary with this path length (and it varies with a high regularity that is very well predicted by summing over histories).
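This phase dependence can be made concrete with a toy calculation (my own illustration, reusing the matrix from post #1): a relative phase ##\varphi## accumulated between the two eigenstates leaves the 20/80 statistics of ##X## itself unchanged, but makes the probability of a subsequent measurement in the up/down basis oscillate with ##\varphi##, which a pure device-error model cannot reproduce:

```python
import numpy as np

X = np.array([[6.578, 0.004],
              [0.004, 6.572]])
_, V = np.linalg.eigh(X)
v1, v2 = V[:, 0], V[:, 1]   # the two eigenstates of X
c1, c2 = V[0, 0], V[0, 1]   # components of 'up' in this eigenbasis

def p_up(phi):
    """Probability of 'up' after the v1 branch picks up a phase phi."""
    psi = c1 * np.exp(1j * phi) * v1 + c2 * v2
    return abs(psi[0]) ** 2

# The X statistics (|c1|^2, |c2|^2) = (0.2, 0.8) are phase-independent,
# but the up probability shows interference fringes:
print(p_up(0.0))      # 1.0  (the branches recombine to the original state)
print(p_up(np.pi))    # 0.36 (destructive interference)
```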

I don't question that you already know this, but I think the reason you are untroubled by it, relative to others, is the superdeterminist underpinnings of the Thermal Interpretation, which have not been fully articulated. Without being more explicit on this, I think you and others will just continue to talk past each other.

The second reason is the need to explain correlations between detectors.

No matter how accurately you measure a dot created by a single event in a Stern-Gerlach experiment, it always gives only a very inaccurate measurement of the incident field intensity, which is given by a q-expectation.

It is difficult to understand how the SG detectors can be so inaccurate as to wrongly report +1 or -1 when the true value is 0, yet at the same time so reliable that A) one or the other detector always misreports on every single experimental run and B) they never both misreport at the same time.

The discussion in this previous thread seems relevant:

https://www.physicsforums.com/threa...-interpretation-explain-stern-gerlach.969296/
[Edit: Several posts and portions of posts from this thread referring specifically to the SG experiment have been moved to the thread linked above. Please direct further replies regarding the SG experiment to that thread.]

The OP is incorrect. Standard QM does claim that even if there is no measurement error, results will be probabilistic, but it does not exclude other sources of uncertainty, including uncertainty in which observable was measured.

The first reason is the need to explain interference.

See this post.

The OP is incorrect. Standard QM does claim that even if there is no measurement error, results will be probabilistic, but it does not exclude other sources of uncertainty, including uncertainty in which observable was measured.
Which statement of the OP do you refer to? And how is it related to your second sentence?

My statements in the OP refer to Born's rule in the form stated in Wikipedia (and in many textbooks).

It doesn't matter which observable was measured; the concrete matrix was given just as an example showing that the numbers in the example are consistent with some observable.

Born's statistical interpretation does not claim that there is no measurement error but, as usual when formulating a theory, discusses first the case that the measurement errors are so small that they can be neglected. If you deal with unprecise measurements the analysis becomes much more complicated since then you deal not only with the quantum statistics a la Born but also with the statistics (and systematics!) of the inaccurate measurement device.

I don't agree with all of @vanhees71's post #2, but I do agree with his point that standard quantum theory allows measurement error.

I don't agree with all of vanhees71's post #2, but I do agree with his point that standard quantum theory allows measurement error.
Thus you disagree with the statement in Wikipedia and many other textbooks, e.g., Landau and Lifschitz? They say unanimously:
Wikipedia said:
if an observable corresponding to a self-adjoint operator ##A## with discrete spectrum is measured [...] the measured result will be one of the eigenvalues ##\lambda## of ##A##.
No approximation is implied; nothing indicates that this would only apply to idealized measurements.

The OP is incorrect. Standard QM does claim that even if there is no measurement error, results will be probabilistic, but it does not exclude other sources of uncertainty, including uncertainty in which observable was measured.
Indeed, and it claims that the results of measurements are the eigenvalues of the self-adjoint operators representing the observables, not the q-expectation values. The success of QT in describing what is really measured shows that this is an empirically correct assumption, and it refutes the postulate of the "thermal interpretation" that the results of measurements are q-expectation values.

Thus you disagree with the statement in Wikipedia and many other textbooks, e.g., Landau and Lifschitz? They say unanimously:

No approximation is implied; nothing indicates that this would only apply to idealized measurements.
There is no contradiction. Of course, in theoretical physics one first discusses idealized measurements, where the error due to the construction of the measurement apparatus is negligible. Indeed, better and better measurement devices are better and better approximations to such idealized measurement devices, and no matter what interpretation you follow (or believe in, since this seems to be a quasi-religious issue ;-)), it's an established fact since 1925 that the predictions of QT, namely that what you measure if you measure with sufficient precision, are the eigenvalues of the self-adjoint operators that represent the observables and not q-expectation values. Thus the crucial assumption in which the thermal interpretation deviates from the standard one is unambiguously empirically invalidated.

No approximation is implied; nothing indicates that this would only apply to idealized measurements
Is that not just typical textbook glossing over of experimental realities or are they actually making the literal claim that the eigenvalue directly results regardless of the precision of the device?

I would have assumed the former.

I don't agree with all of @vanhees71's post #2, but I do agree with his point that standard quantum theory allows measurement error.
I'm really surprised that you disagree with well-established empirical facts. Unfortunately you don't say in detail which of them you disagree with, and why.

Is that not just typical textbook glossing over of experimental realities or are they actually making the literal claim that the eigenvalue directly results regardless of the precision of the device?

I would have assumed the former.
Of course. In theory textbooks one starts by discussing the general laws of nature as described by theories, and one deals with idealized precise measurements.

The very important topic of analyzing measurement errors (statistical and (!) systematic) is the subject of experimental physics and is taught particularly in the introductory and advanced labs, where students really do experiments, and this for good reason. The errors of measurement devices depend on each individual measurement device, and you need both theoretical analyses and empirical tests with real-world devices to learn the art of making correct and complete measurements.

Of course, to understand why the macroscopic matter around us seems to behave according to the laws of classical rather than quantum physics, one has to understand coarse-graining and how measurement devices (including our senses!) perform the corresponding "averaging over many microscopic degrees of freedom", and why it is so difficult to observe genuine quantum behavior in macroscopic objects (decoherence).

it's an established fact since 1925 that the predictions of QT, namely that what you measure if you measure with sufficient precision, are the eigenvalues of the self-adjoint operators that represent the observables and not q-expectation values.
It's a convention since 1927, when (as reported in Part I of my series of papers) Jordan, Dirac and von Neumann cast Born's findings into the general statement today known as Born's rule. The established facts at the time were lots of spectral data and nothing else; there was not the least mention of measuring with sufficient precision. Indeed, some passages, like the following (translated from p.181f) from Heisenberg's 1927 paper on the uncertainty relation, sound like the thermal interpretation:
Werner Heisenberg said:
When we want to derive physical results from that mathematical framework, then we have to associate numbers with the quantum-theoretical magnitudes -- that is, with the matrices (or 'tensors' in multidimensional space). [...] One can therefore say that associated with every quantum-theoretical quantity or matrix is a number which gives its 'value' within a certain definite statistical error.
His 'value' is the q-expectation, not an eigenvalue! And his statistical errors are the deviations from these, not the lack of precision in the sense of your claimed ''established fact since 1925''!
The very important topic of analyzing measurement errors (statistical and (!) systematic) is the subject of experimental physics and is taught particularly in the introductory and advanced labs, where students really do experiments, and this for good reason.
And for a good reason one learns there that measurement errors are always to be treated as in the first example of post #1: one gets better accuracy through repetition of the measurement.

Is that not just typical textbook glossing over of experimental realities or are they actually making the literal claim that the eigenvalue directly results regardless of the precision of the device?
Well, it is nowhere discussed. The story is complicated.

The early Copenhagen view (Bohr and Heisenberg at the end of 1927, at the Como and Solvay conferences) was that conserved quantities were beables that had exact values independent of measurement, and that states were eigenstates, with random transitions between them. Then Jordan, Dirac and von Neumann generalized the setting to arbitrary maximally commuting systems of observables, still talking about 'being' rather than 'measuring'. Then it was recognized that this is inconsistent, since joint probabilities cannot be defined; in the early literature, wrong statements and derivations that do not take this into account exist until at least 1928. People were then forced to indicate the measurement context, and the modern formulation emerged.

Nobody ever seems to have felt the need to investigate measurement errors beyond Heisenberg's uncertainty relation, which Heisenberg viewed as a statistical uncertainty of deviation from the mean; see the quote in post #19. On p.77f of his 1972 book ''Der Teil und das Ganze'', Heisenberg still writes:
Werner Heisenberg said:
Can one represent in quantum mechanics a situation in which an electron is located approximately -- that is, with a certain imprecision -- at a given place, and at the same time possesses approximately -- that is, again with a certain imprecision -- a given velocity? And can one make these imprecisions so small that one does not run into difficulties with the experiment?
This shows that Heisenberg's informal intuition was, and remained throughout his life, that of the thermal interpretation, which provides an electron with a definite but uncertain world tube in phase space.

Is that not just typical textbook glossing over of experimental realities or are they actually making the literal claim that the eigenvalue directly results regardless of the precision of the device?

I would have assumed the former.
If one assumes the former, i.e., that measurement results are not eigenvalues but only approximations to eigenvalues (the new convention presented explicitly by @vanhees71 in his lecture notes, which I have never seen elsewhere), one has other problems:

In this case one cannot even justify the interpretation of the q-expectation as an expectation value of measurement results - since the latter deviate arbitrarily (depending on the measurement accuracy) from the eigenvalues that figure in the derivation of the expectation value.

Instead, with the new convention of vanhees71, one would have to argue that there are additional, purely theoretical true values (the eigenvalues) that figure in Born's rule and take the place of the true positions and momenta of a classical description. But these theoretical true values are related to measurement only approximately - as approximately as the true positions and momenta in classical physics. Thus it would not make sense to refer in Born's rule to the eigenvalues as measurement results, since they are only theoretical constructs approximately related to measurement, just like the true positions and momenta in classical physics, to whose measurement one never refers in the foundations.

The next problem is that with the new convention of vanhees71, the notion of measurement error becomes questionable. A spin measurement of an electron or silver atom is regarded, depending on who is talking about it, as an exact measurement of ##\pm 1##, ##\pm 1/2##, or (most properly) ##\pm \hbar/2##, even though until recently ##\hbar## was a constant with an inaccurately known value. What is measured is an inaccurate spot on a screen; the exactness results not from measurement but from theory. Similarly, in photon counting experiments, what is measured are inaccurate peaks of a photocurrent; the exactness results not from measurement but from theory, which (in idealized models predicting 100% efficiency) predicts these peaks to equal in number the integral number of photons arriving.

Worse, with the new convention of vanhees71, there is no longer a way to determine the accuracy of a measurement of a quantity represented by an operator with a continuous spectrum. Since there is no theory telling which of the true values counts (it being a random variable), the true value is completely unknown; hence the measurement error cannot even be quantified. The only quantifiable measurement error is the statistical error given by the deviation from the mean - the true value of the thermal interpretation, not that of the new convention of vanhees71.

Thus the merit of the new convention of vanhees71 is very questionable.

I'm really surprised that you disagree with well-established empirical facts. Unfortunately you don't in detail tell with what of these you disagree and why.

Sorry, post #2 is fine (though I exclude your comments on the thermal interpretation, on which I reserve judgement, since I haven't studied this new attempt at an interpretation). It is your subsequent post, with your usual remarks about interpretation, that I don't quite agree with.

1. Consider some piece of digital equipment with a 3-digit display measuring some physical quantity ##X## using ##N## independent measurements. Suppose the measurement results were 6.57 in 20% of the cases and 6.58 in 80% of the cases. Every engineer or physicist would compute the mean ##\bar X= 6.578## and the standard deviation ##\sigma_X=0.004## and conclude that the true value of the quantity ##X## deviates from ##6.578## by an error of the order of ##0.004N^{-1/2}##.
What if the digital equipment had a 6-digit display instead of 3, with everything else being the same? What would every engineer and physicist do in that case? And what would the thermal interpretation say?

What if the digital equipment had a 6-digit display instead of 3, with everything else being the same? What would every engineer and physicist do in that case? And what would the thermal interpretation say?
1. An engineer might first suspect that, internally, some rounding is performed, and would tentatively conclude that the last three digits are spurious. If, on other occasions, the last three digits took arbitrary values, the engineer would be surprised and puzzled by the given occasion, just as the pioneers of quantum mechanics were; he would consult the literature, and after reading some of it would conclude that he is measuring a quantum system with selection rules that forbid the intermediate values. The thermal interpretation would say that these selection rules are due to the system having stable slow modes at these two values, while being unstable in between.

2. The measurements would correspond exactly to Born's rule for a quantum system with a Hilbert space of at least 1001 dimensions, measuring an observable in a state for which only the first and the 1001st component in the corresponding eigenbasis (ordered by eigenvalue) are occupied.
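This construction is easy to write down explicitly; a sketch (the uniform grid of eigenvalues 6.57000, 6.57001, ..., 6.58000 is my own concretization):

```python
import numpy as np

# A diagonal observable with 1001 eigenvalues covering the 6-digit display
# range, and a state occupying only the first and the 1001st eigenvector
# with weights 0.2 and 0.8.
eigenvalues = np.linspace(6.57, 6.58, 1001)   # step 0.00001
psi = np.zeros(1001)
psi[0], psi[-1] = np.sqrt(0.2), np.sqrt(0.8)

probs = psi**2                 # Born probabilities: 0.2, 0, ..., 0, 0.8
mean = probs @ eigenvalues     # 6.578, just as in the 3-digit example

print(eigenvalues[0], eigenvalues[-1], mean)
```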

His 'value' is the q-expectation, not an eigenvalue! And his statistical errors are the deviations from these, not the lack of precision in the sense of your claimed ''established fact since 1925''!
Take again the Stern-Gerlach experiment as an example. If your claim that one always measures a q-expectation value were true, the SGE would always result in a single strip of Ag atoms on the screen, as predicted by classical physics; but already in 1922 Stern and Gerlach found two strips, as predicted by "old quantum theory" (which is wrong in two ways, these two errors compensating by coincidence). Modern quantum theory, however, predicts the correct result, namely in general two strips of Ag atoms on the screen (except when the Ag atoms are prepared in a ##\sigma_z## eigenstate). That clearly proves your claim wrong, independently of any interpretation of QT.

If one assumes the former, i.e., that measurement results are not eigenvalues but only approximations to eigenvalues (the new convention presented explicitly by @vanhees71 in his lecture notes, which I have never seen elsewhere), one has other problems:
In which of my lecture notes, and where, have you read this? Of course I never wanted to claim this nonsense. Maybe I was sloppy in some formulation, so that you could come to this conclusion.

It's of course true that any measurement device has some imprecision, but you can in principle measure as precisely as you want, and theoretical physics deals with idealized measurements. The possible outcomes of measurements are the eigenvalues of the operator of the measured observable.

If your claim that one always measures a q-expectation value were true, the SGE would always result in a single strip of Ag atoms on the screen, as predicted by classical physics
No. I only claim approximate measurements, in this case with a big (and predictably big!) error, and I did not claim that the errors are normally or uniformly distributed, as you seem to assume. In the present case, an analysis of the slow manifold of the measurement process would show that the errors have a highly peaked bimodal distribution centered at the two spots, just as one gets when measuring a classical diffusion process in a double-well potential.

Note that the random measurement results at the two spots that one actually sees in an experiment are crude approximations of any point in between, in particular of the q-expectation. This can be seen by looking at the screen from very far away, where one cannot resolve the two spots.
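A toy sketch of this picture (the two narrow Gaussian peaks standing in for the spots; their positions and width are illustrative choices, not derived from any slow-manifold analysis):

```python
import random

random.seed(0)

# Toy stand-in for the bimodal screen distribution: two narrow Gaussian
# peaks (width 0.05) at the spot positions +-1, hit with probability
# 1/2 each.
samples = [random.choice([-1.0, 1.0]) + random.gauss(0.0, 0.05)
           for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# The sample mean approximates the q-expectation 0, while every single
# result lies near +-1, i.e. deviates from 0 by about the uncertainty 1.
assert abs(mean) < 0.02
assert abs(var ** 0.5 - 1.0) < 0.02
```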

By the way, the quote you replied to was a comment on Heisenberg's statement, not on mine! You are welcome to read the whole context in which he made his statement! See also the Heisenberg quote in post #20.

Sorry, post #2 is fine (though I exclude your comments on the thermal interpretation, on which I reserve judgement, since I haven't studied this new attempt at interpretation). It is your subsequent post with your usual remarks about interpretation that I don't quite agree with.
Ok, fine with me, I know that you don't accept the minimal statistical interpretation, but I think it is important to clearly see that the claim that what's measured on a quantum system is always the q-expectation value is utterly wrong. Already the Stern-Gerlach experiment contradicts this (see my posting #25), and this is an empirical fact independent of any interpretation.

NB: At the time, the SGE was even interpreted in terms of "old quantum theory", which by chance gave the correct result due to two compensating errors, namely assuming the wrong gyrofactor 1 for the electron and, in modern terms, assuming spin 1 rather than spin 1/2, whose existence was unknown at the time. What I never understood is why one wouldn't then expect 3 rather than 2 strips on the screen, because already in old QT, for spin 1, one would expect three values for ##\sigma_z##, namely ##\pm \hbar## and ##0##.
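The compensation of the two errors can be made explicit with the standard formula ##\mu_z = g\,m\,\mu_B## (a sketch; in the old quantum theory the ##m=0## orientation was excluded by the space-quantization rules then in use, which also answers the three-strips question within that framework):
$$\text{old QT:}\quad g=1,\ m=\pm 1\ (m=0\ \text{excluded})\ \Rightarrow\ \mu_z=\pm\mu_B, \qquad \text{modern QT:}\quad g=2,\ m_s=\pm\tfrac{1}{2}\ \Rightarrow\ \mu_z=\pm\mu_B,$$
so both descriptions predict the same two strips.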

No. I only claim approximate measurements, in this case with a big (and predictably big!) error, and I did not claim that the errors are normally or uniformly distributed, as you seem to assume. In the present case, an analysis of the slow manifold of the measurement process would show that the errors have a highly peaked bimodal distribution centered at the two spots, just as one gets when measuring a classical diffusion process in a double-well potential.

Note that the random measurement results at the two spots that one actually sees in an experiment are crude approximations of any point in between, in particular of the q-expectation. This can be seen by looking at the screen from very far away, where one cannot resolve the two spots.

By the way, the quote you replied to was a comment on Heisenberg's statement, not on mine! You are welcome to read the whole context in which he made his statement! See also the Heisenberg quote in post #20.
It's of course always possible to make the resolution of the apparatus so bad that you don't resolve the two lines. Then you simply have a big blur, but then I doubt very much that you always get the q-expectation value rather than an inconclusive determination of the value to be measured.

Well, I usually don't agree with Heisenberg's interpretations, and in the quote in #20 he deals with the continuous variables of position and momentum (of course, the correct discussion is about position and canonical momentum, not velocity, but that may be because Heisenberg was writing a popular (pseudo)science book rather than a physics book). Heisenberg even got his own uncertainty relation wrong at first and was corrected by Bohr, and I think Heisenberg's misunderstanding of the uncertainty relation is closely related to the misunderstanding of basic principles of quantum mechanics that leads to your claim that what's measured is always the q-expectation value rather than the value the observable takes when an individual system is measured.

To answer Heisenberg's question in the quote: Of course, you can always prepare a particle in a state with "pretty unprecise" position and "pretty unprecise" momentum. I don't understand what Heisenberg is after to begin with, nor what this has to do with your thermal interpretation.

In which of my lecture notes, and where, did you read this? Of course, I never intended to claim this nonsense. Maybe I was sloppy in some formulation, so that you could come to this conclusion.
In https://th.physik.uni-frankfurt.de/~hees/publ/quant.pdf from 2008 you write:
Hendrik van Hees said:
Wird nun an dem System die Observable ##A## gemessen, so ist das Resultat der Messung stets ein (verallgemeinerter) Eigenwert des ihr zugeordneten selbstadjungierten Operators ##A##.
[Moderator note (for completeness, although identical with what is said anyway): If the observable ##A## is measured on the system, then the result of the measurement is always a (generalized) eigenvalue of the according self-adjoint operator ##A##.]

which agrees with tradition and the Wikipedia formulation of Born's rule, and says that measurements always produce eigenvalues - hence never approximate eigenvalues. Later, on p. 20 of https://th.physik.uni-frankfurt.de/~hees/publ/stat.pdf from 2019, you seem to have striven for more precision in the language and only require:
Hendrik van Hees said:
A possible result of a precise measurement of the observable ##O## is necessarily an eigenvalue of the corresponding operator ##\mathbf O##.
You now 1. distinguish between the observable and the associated operator and 2. add the qualification 'precise', neither of which is present in the German version.

Thus it was natural for me to assume that you deliberately and carefully formulated it in this way in order to account for the limited resolution of a measurement device, and distinguish between 'precise', idealized measurements that yield exact results and 'unprecise', actual measurements that yield approximate results, as you did in post #2 of the present thread:
Born's statistical interpretation does not claim that there is no measurement error but, as usual when formulating a theory, discusses first the case that the measurement errors are so small that they can be neglected. If you deal with unprecise measurements the analysis becomes much more complicated
It's of course true that any measurement device has some imprecision, but you can in principle measure as precisely as you want, and theoretical physics deals with idealized measurements. The possible outcomes of measurements are the eigenvalues of the operator of the measured observable.
Here you seem to refer again to idealized measurements when you make the final statement, as no measurement error is mentioned.

Thus you have hypothetical measurement results ('precise', idealized) representing the true, predictable possible values and actual measurement results ('imprecise') representing the unpredictable actual values of the measurements. Their relation is an unspecified approximation about which you only say
That cannot be theorized about but has to be treated individually for any experiment and is thus not subject of theoretical physics but part of a correctly conducted evaluation of experimental results for the given (preparation and) measurement procedure.
It is this postulated dichotomy that I analyzed in my post #21.

It's of course always possible to make the resolution of the apparatus so bad that you don't resolve the two lines. Then you simply have a big blur, but then I doubt very much that you always get the q-expectation value rather than an inconclusive determination of the value to be measured.
That's irrelevant.

The thermal interpretation never claims the caricature you take it to claim, namely that one always gets the q-expectation. It only claims that the measurement result one gets approximates the predicted q-expectation ##\langle A\rangle## with an error of the order of the predicted uncertainty ##\sigma_A##. When the latter is large, as in the case of a spin measurement, this is true even when the q-expectation vanishes and the measured values are ##\pm 1/2##!
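For the spin case this is easy to check explicitly; a small sketch (in units of ##\hbar##, using the x-up state, for which the q-expectation of ##S_z## vanishes):

```python
import math

# Spin-z operator (in units of hbar) and the x-up state, for which the
# q-expectation of S_z vanishes while single results are +-1/2.
Sz = [[0.5, 0.0], [0.0, -0.5]]
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]

def expect(op, v):
    # <v|op|v> for a real symmetric 2x2 operator and a real state vector
    w = [op[0][0] * v[0] + op[0][1] * v[1],
         op[1][0] * v[0] + op[1][1] * v[1]]
    return v[0] * w[0] + v[1] * w[1]

mean = expect(Sz, psi)                           # q-expectation <S_z> = 0
Sz2 = [[0.25, 0.0], [0.0, 0.25]]                 # the operator S_z^2
sigma = math.sqrt(expect(Sz2, psi) - mean ** 2)  # q-uncertainty = 1/2

# Each individual outcome +-1/2 deviates from <S_z> by sigma itself,
# consistent with "an error of the order of the predicted uncertainty":
for outcome in (+0.5, -0.5):
    assert abs(abs(outcome - mean) - sigma) < 1e-12
```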

In https://th.physik.uni-frankfurt.de/~hees/publ/quant.pdf from 2008 you write:

which agrees with tradition and the Wikipedia formulation of Born's rule, and says that measurements always produce eigenvalues - hence never approximate eigenvalues. Later, in 2019, you seem to have striven for more precision in the language and only require:

Thus it was natural to assume that you deliberately and carefully formulated it in this way in order to account for the limited resolution of a measurement device, and distinguish between 'precise', idealized measurements that yield exact results and 'unprecise', actual measurements that yield approximate results, as you did in post #2 of the present thread:

Here you seem to refer again to idealized measurements when you make the final statement, as no measurement error is mentioned.

Thus you have hypothetical measurement results ('precise', idealized) representing the true, predictable possible values of the measurements and actual measurement results ('imprecise') representing the unpredictable actual values of the measurements. Their relation is an unspecified approximation about which you only say

It is this postulated dichotomy that I analyzed in my post #21.
Ehm, where is the difference between the German and the English quote (I don't know where the English one comes from)? Of course the English one is a bit sloppy, because unbounded operators usually have not only eigenvalues but also "generalized" eigenvalues (i.e., if one refers to a value in the continuous part of the spectrum of the observable).

As expected, in my writing I've not changed my opinion over the years; I have never claimed that the established and empirically very satisfactorily working QT is wrong on this point. I don't discuss instrumental uncertainties in my manuscripts, because they are about theoretical physics. Analyzing instrumental uncertainties cannot be done in a general sense but naturally depends on the individual measurement device to be analyzed. This in general has very little to do with quantum mechanics at all.

Of course the very definition of an observable implies that you have to be able to (at least in principle) measure it precisely, i.e., to make the instrumental uncertainties negligibly small. In this connection it is very important to distinguish the "preparation process" (defining the state of the quantum system) and the "measurement process" (defining observables). The uncertainty/precision of a measurement device is independent of the uncertainty/precision of the preparation.

E.g., if you prepare the momentum of an electron quite precisely, then due to the Heisenberg uncertainty relation its position is quite imprecisely determined. Despite this large uncertainty of the electron's position you can measure its position as accurately as you want (e.g., by using a CCD camera of high resolution). For each electron you'll measure its position very accurately. The uncertainty of the position measurement is determined by the high resolution of the CCD cam, not by the position resolution of the electron's preparation (which is assumed to be much more uncertain than this resolution of the CCD cam). For the individual electron you'd in general not measure the q-expectation value given by the prepared state (not even "approximately", whatever you mean by this vague statement).
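A numerical toy version of this argument (the quantum wave packet replaced by a classical Gaussian position spread of 10 mm, the CCD by rounding to a 0.01 mm pixel; all numbers chosen only for illustration):

```python
import random

random.seed(1)

PREP_SIGMA = 10.0   # large position uncertainty from the preparation (mm)
PIXEL = 0.01        # fine CCD resolution (mm)

def measure(x):
    # The CCD reports the pixel center: instrumental error <= PIXEL / 2
    return round(x / PIXEL) * PIXEL

positions = [random.gauss(0.0, PREP_SIGMA) for _ in range(50_000)]
results = [measure(x) for x in positions]

# Each single measurement is accurate to the pixel size ...
assert all(abs(r - x) <= PIXEL / 2 + 1e-12
           for r, x in zip(results, positions))

# ... while the spread of the results reflects the preparation, not the
# CCD: the rms of the recorded positions is close to PREP_SIGMA.
spread = (sum(r * r for r in results) / len(results)) ** 0.5
assert 9.5 < spread < 10.5
```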