A. Neumaier said:
The meaning is enigmatic only when viewed in terms of the traditional interpretations, which look at the same matter in a very different way.$$\def\<{\langle} \def\>{\rangle}$$
Given a physical quantity represented by a self-adjoint operator ##A## and a state ##\rho## (of rank 1 if pure),
- all traditional interpretations give the same recipe for computing a number of possible idealized measurement values, the eigenvalues of ##A##, of which one is exactly (according to most formulations) or approximately (according to your cautious formulation) measured with probabilities computed from ##A## and ##\rho## by another recipe, Born's rule (probability form), while
- the thermal interpretation gives a different recipe for computing a single possible idealized measurement value, the q-expectation ##\<A\>:=Tr~\rho A## of ##A##, which is approximately measured.
- In both cases, the measurement involves an additional uncertainty related to the degree of reproducibility of the measurement, given by the standard deviation of the results of repeated measurements.
- Tradition and the thermal interpretation agree in that this uncertainty is at least ##\sigma_A:=\sqrt{\<A^2\>-\<A\>^2}## (which leads, among others, to Heisenberg's uncertainty relation).
- But they make very different assumptions concerning the nature of what is to be regarded as idealized measurement result.
Why do you say bullet 2 is different from bullet 1? I use the same trace formula, of course; what else? It's the basic definition of an expectation value in QT, and it's the most general, representation-free formulation of Born's rule. Regarding the other three points too, you say on the one hand that the thermal interpretation uses the same mathematical formalism, but on the other hand that it is all interpreted differently. You even use specific probabilistic/statistical notions like "uncertainty" and define it in the usual statistical terms as the standard deviation/2nd cumulant. Why is it then not right to apply the same heuristics to it in your thermal interpretation (TI) as in the minimal interpretation (MI)?
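As a minimal sketch of both recipes (a purely illustrative choice, assuming numpy, a single qubit with ##A=\sigma_x## and the pure state ##\rho=|0\rangle\langle 0|##): the traditional recipe yields the eigenvalues ##\pm 1## with Born probabilities ##1/2## each, while the q-expectation is the single value ##\<A\>=0## with uncertainty ##\sigma_A=1##.

```python
import numpy as np

# Observable A = sigma_x and pure state rho = |0><0| (spin up along z).
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
rho = np.array([[1.0, 0.0],
                [0.0, 0.0]])

# Traditional recipe: possible idealized results are the eigenvalues of A,
# obtained with Born probabilities p_k = <k| rho |k>.
eigvals, eigvecs = np.linalg.eigh(A)
probs = np.real([eigvecs[:, k] @ rho @ eigvecs[:, k] for k in range(2)])
print("eigenvalues:", eigvals)        # [-1.  1.]
print("Born probabilities:", probs)   # [0.5 0.5]

# Thermal-interpretation recipe: the single idealized value is the
# q-expectation <A> = Tr(rho A), with uncertainty sigma_A.
expA  = np.trace(rho @ A)
expA2 = np.trace(rho @ A @ A)
sigma_A = np.sqrt(expA2 - expA**2)
print("<A> =", expA, " sigma_A =", sigma_A)  # 0.0, 1.0
```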
That quantities with large uncertainty are erratic in measurement is nothing special to quantum physics but very familiar from the measurement of classical noisy systems. The thermal interpretation asserts that all uncertainty is of this kind, and much of my three papers is devoted to arguing why this is indeed consistent with the assumptions of the thermal interpretation.
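A minimal sketch of this classical analogy (values purely illustrative): repeated noisy readings of a fixed classical quantity scatter with a sample standard deviation that plays the same role as ##\sigma_A## above.

```python
import numpy as np

# Repeated noisy readings of a classical quantity; the sample standard
# deviation quantifies the "erratic" spread of the results.
rng = np.random.default_rng(0)
true_value, noise_sigma = 3.7, 0.5   # illustrative values
readings = true_value + noise_sigma * rng.normal(size=10_000)

print("sample mean:", readings.mean())  # ~3.7
print("sample std :", readings.std())   # ~0.5
```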
But this overlooks that QT assumes that not all observables can have determinate values at once. At best, i.e., if technically feasible for simple systems, you can only prepare a state such that a complete compatible set of observables takes determinate values. All observables incompatible with this set (almost always) have indeterminate values, and this is not due to non-ideal measurement devices but is an inherent feature of the system.
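A minimal sketch of this point (spin 1/2 in units of ##\hbar/2##, purely illustrative): preparing an eigenstate of ##\sigma_z## makes its value determinate, but the incompatible ##\sigma_x## then has uncertainty 1, with no measurement device involved at all.

```python
import numpy as np

sz = np.diag([1.0, -1.0])                   # sigma_z
sx = np.array([[0.0, 1.0], [1.0, 0.0]])     # sigma_x
rho = np.diag([1.0, 0.0])                   # eigenstate of sigma_z (+1)

def uncertainty(rho, A):
    """sqrt(<A^2> - <A>^2) in the state rho."""
    m, m2 = np.trace(rho @ A), np.trace(rho @ A @ A)
    return np.sqrt(m2 - m**2)

print("compatible?", np.allclose(sx @ sz, sz @ sx))  # False
print("sigma(sigma_z):", uncertainty(rho, sz))       # 0.0 -> determinate
print("sigma(sigma_x):", uncertainty(rho, sx))       # 1.0 -> indeterminate
```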
Now it is experimentally undecidable what an ''idealized measurement result'' should be, since only actual results are measured, never idealized ones.
What to consider as the idealized version is a matter of interpretation. What one chooses determines what one ends up with!
As a result, the traditional interpretations are probabilistic from the start, while the thermal interpretation is deterministic from the start.
The thermal interpretation has two advantages:
- It assumes less technical mathematics at the level of the postulates (no spectral theorem, no notion of eigenvalue, no probability theory).
- It allows one to make definite statements about each single quantum system, no matter how large or small it is.
Well, using the standard interpretation it's pretty simple to state what an idealized measurement is: Given the possible values of the observable (e.g., some angular momentum squared ##\vec{J}^2## and one component, usually ##J_z##), you perform an idealized measurement if the resolution of the measurement device is good enough to resolve the (necessarily discrete!) spectral values of the associated self-adjoint operators of this measured quantity. Of course, in the continuous spectrum you don't have ideal measurements in the real world, but any quantum state also predicts an inherent uncertainty, given by the formula above. To verify this prediction you need an apparatus which resolves the measured quantity much better than this quantum-mechanical uncertainty.
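As a sketch of the discrete spectrum such an ideal device would have to resolve (spin ##j=1##, ##\hbar=1##, standard matrix representation; an illustration only):

```python
import numpy as np

# Spin j = 1 angular momentum matrices in units hbar = 1.
s = 1 / np.sqrt(2)
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

print("J^2 spectrum:", np.linalg.eigvalsh(J2).round(12))  # [2. 2. 2.] = j(j+1)
print("J_z spectrum:", np.linalg.eigvalsh(Jz).round(12))  # [-1. 0. 1.] = m
```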
You can of course argue against this very theoretical definition of "ideal measurements", because sometimes there are even more severe constraints, but these are also fundamental and not simply due to our inability to construct "ideal apparatuses". E.g., in relativistic QT there's an in-principle uncertainty in the localization of (massive) particles due to the uncertainty relation and the finiteness of the limiting speed of light (Bohr and Rosenfeld, Landau). But here the physics also makes clear why it doesn't make sense to resolve the position better than this fundamental limit: beyond it, rather than localizing (i.e., preparing) the particle better, you produce more particles, and the same holds for measurement, i.e., the attempt to measure the position much more precisely involves interactions with other particles that again lead to the creation of more particles rather than to a better localization measurement.
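For scale, a back-of-the-envelope evaluation of this localization limit, ##\Delta x\gtrsim\hbar/(mc)##, for an electron (CODATA values; an illustration only):

```python
# Order-of-magnitude localization limit hbar/(m c) for an electron.
hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # kg
c    = 2.99792458e8       # m/s

print(f"hbar/(m_e c) = {hbar / (m_e * c):.3e} m")  # ~3.86e-13 m
```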
Of course, it is very difficult to consider in general terms all these subtle special cases.
But let's see whether I now understand the logic behind your TI better. From #84 I understand that you start by defining the formal core just through the usual Hilbert-space formulation, with a statistical operator representing the state and self-adjoint (or maybe even more general) operators representing the observables, but without the underlying physical probabilistic meaning of the MI. The measurable values of the observables are not the spectral values of the self-adjoint operators representing the observables but the q-expectations abstractly defined by the generalized Born rule (the trace formula quoted above). At first glance this makes a lot of sense. You are free to define a theory mathematically without a physical intuition behind it. The heuristics must of course come in when teaching the subject but is not inherent to the "finalized" theory.
However, I think this interpretation of what's observable is flawed, because it intermingles the uncertainties inherent in the preparation of the system in a given quantum state with the precision of the measurement device. This is a common misconception leading to much confusion. The Heisenberg uncertainty relation (HUR) is not a description of the unavoidable perturbation of a quantum system by the measurement (which can in principle be made negligible only for sufficiently large/macroscopic systems), but a description of the impossibility of preparing a state in which incompatible observables have a smaller common uncertainty than the HUR allows. E.g., having prepared a particle with quite accurately determined momentum, its position has a quite large uncertainty, but nothing prevents you from measuring the position of the particle much more accurately (neglecting the above-mentioned exception for relativistic measurements, i.e., assuming non-relativistic conditions). Of course, there is indeed an influence of the measurement procedure on the measured system, but that's not described by the HUR. There's plenty of recent work on this issue (if I remember right, one of the key authors on these aspects is Busch).
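A minimal sketch of this reading of the HUR (##\hbar=1##; grid and width parameters chosen purely for illustration): the preparation uncertainties of a Gaussian wave packet satisfy ##\sigma_x\sigma_p=1/2## regardless of the width ##a##, while the grid spacing, playing the role of the device resolution, is far finer than ##\sigma_x## without affecting them.

```python
import numpy as np

# Gaussian wave packet on a grid (hbar = 1); the grid spacing dx stands in
# for the device resolution and is much finer than sigma_x.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
a = 1.5                                            # assumed width parameter
psi = (2 * np.pi * a**2) ** -0.25 * np.exp(-x**2 / (4 * a**2))

# Position uncertainty of the *prepared state*.
prob = np.abs(psi) ** 2
ex  = np.sum(x * prob) * dx
ex2 = np.sum(x**2 * prob) * dx
sigma_x = np.sqrt(ex2 - ex**2)                     # -> a

# <p> = 0 for real psi; <p^2> = integral |psi'(x)|^2 dx.
dpsi = np.gradient(psi, dx)
sigma_p = np.sqrt(np.sum(np.abs(dpsi) ** 2) * dx)  # -> 1/(2a)

print(sigma_x, sigma_p, sigma_x * sigma_p)         # ~1.5  ~0.333  ~0.5
```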