# Quantum Physics via Quantum Tomography: A New Approach to Quantum Mechanics


This Insight article presents the main features of a conceptual foundation of quantum physics with the same characteristic features as classical physics – except that the density operator takes the place of the classical phase space coordinates position and momentum. Since everything follows from the well-established techniques of quantum tomography (the art and science of determining the state of a quantum system from measurements), the new approach may have the potential to lead in time to a consensus on the foundations of quantum mechanics. Full details can be found in my paper

- A. Neumaier, Quantum mechanics via quantum tomography, Manuscript (2022). arXiv:2110.05294v3

This paper gives for the first time a formally precise definition of quantum measurement that

- is applicable without idealization to complex, realistic experiments;
- allows one to derive the standard quantum mechanical machinery from a single, well-motivated postulate;
- leads to objective (i.e., observer-independent, operational, and reproducible) quantum state assignments to all sufficiently stationary quantum systems.

The new approach shows that the amount of objectivity in quantum physics is no less than that in classical physics.

The following is an extensive overview of the most important developments in this new approach.

$$
\def\<{\langle} % expectation
\def\>{\rangle} % expectation
\def\tr{{\mathop{\rm tr}\,}}
\def\E{{\bf E}}
$$


## Quantum states

The (Hermitian and positive semidefinite) density operator ##\rho## is taken to be the formal counterpart of the state of an arbitrary quantum source. This notion generalizes the polarization properties of light: In the case of the polarization of a source of light, the density operator represents a qubit and is given by a ##2\times 2## matrix whose trace is the intensity of the light beam. If expressed as a linear combination of Pauli matrices, the coefficients define the so-called Stokes vector. Its properties (encoded in the mathematical properties of the density operator) were first described by George Stokes (best known from the Navier-Stokes equations for fluid mechanics) who gave in 1852 (well before the birth of Maxwell’s electrodynamics and long before quantum theory) a complete description of the polarization phenomenon, reviewed in my Insight article ‘A Classical View of the Qubit‘. For a stationary source, the density operator is independent of time.
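As a numerical illustration (my own sketch, not from the paper), the correspondence between a polarization qubit and its Stokes vector can be checked directly: the coefficients ##S_j## in the Pauli expansion of ##\rho## are recovered as traces ##S_j=\tr\rho\,\sigma_j##, with ##S_0## the intensity:

```python
import numpy as np

# Pauli matrices; sigma[0] is the identity
sigma = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def density_from_stokes(S):
    """Build the 2x2 density operator rho = (1/2) sum_j S_j sigma_j."""
    return 0.5 * sum(Sj * sj for Sj, sj in zip(S, sigma))

def stokes_from_density(rho):
    """Recover S_j = tr(rho sigma_j); S_0 is the beam intensity."""
    return np.array([np.trace(rho @ sj).real for sj in sigma])

# Partially polarized beam of intensity 1 with Stokes vector (1, 0.3, 0.2, 0.4)
S = np.array([1.0, 0.3, 0.2, 0.4])
rho = density_from_stokes(S)
print(stokes_from_density(rho))   # recovers (1, 0.3, 0.2, 0.4)
```

Positive semidefiniteness of ##\rho## corresponds to the Stokes vector satisfying ##S_1^2+S_2^2+S_3^2\le S_0^2##, i.e., to at most full polarization.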

## The detector response principle

A **quantum measurement device** is characterized by a collection of finitely many **detection elements** labeled by labels ##k## that respond statistically to the quantum source according to the following **detector response principle (DRP)**:

- A detection element ##k## responds to an incident stationary source with density operator ##\rho## with a nonnegative mean rate ##p_k## depending linearly on ##\rho##. The mean rates sum to the intensity of the source. Each ##p_k## is positive for at least one density operator ##\rho##.

If the density operator is normalized to intensity one (which we shall do in this exposition) the response rates form a discrete probability measure, a collection of nonnegative numbers ##p_k## (the response probabilities) that sum to 1.

The DRP, abstracted from the polarization properties of light, relates theory to measurement. By its formulation it allows one to discuss quantum measurements without the need for quantum mechanical models for the measurement process itself. The latter would involve the detailed dynamics of the microscopic degrees of freedom of the measurement device – clearly out of the scope of a conceptual foundation on which to erect the edifice of quantum physics.

The main consequence of the DRP is the **detector response theorem**. It asserts that for every measurement device, there are unique operators ##P_k## which determine the rates of response to every source with density operator ##\rho## according to the formula

$$

p_k=\langle P_k\rangle:=\tr\rho P_k.

$$

The ##P_k## form a discrete quantum measure; i.e., they are Hermitian, positive semidefinite and sum to the identity operator ##1##. This is the natural quantum generalization of a discrete probability measure. (In more abstract terms, a discrete quantum measure is a simple instance of a so-called POVM, but the latter notion is not needed for understanding the main message of the paper.)
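For concreteness, here is a small numerical sketch of the detector response theorem (my own example; the three-element "trine" measure is a standard qubit illustration, not taken from the paper): the operators ##P_k## form a discrete quantum measure, and the induced rates ##p_k=\tr\rho P_k## are nonnegative and sum to 1 for any normalized ##\rho##:

```python
import numpy as np

# Trine quantum measure: P_k = (2/3)|psi_k><psi_k|, |psi_k> at angles 2*pi*k/3
def trine_measure():
    Ps = []
    for k in range(3):
        th = 2 * np.pi * k / 3
        psi = np.array([np.cos(th / 2), np.sin(th / 2)], dtype=complex)
        Ps.append((2 / 3) * np.outer(psi, psi.conj()))
    return Ps

def response_rates(rho, Ps):
    """Detector response theorem: p_k = tr(rho P_k)."""
    return np.array([np.trace(rho @ P).real for P in Ps])

Ps = trine_measure()
assert np.allclose(sum(Ps), np.eye(2))        # the P_k sum to the identity

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # a normalized state
p = response_rates(rho, Ps)
print(p, p.sum())                              # nonnegative rates summing to 1
```

Note that the trine has three outcomes on a two-dimensional Hilbert space, so it cannot be a projective measurement – a first hint that discrete quantum measures are strictly more general.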

## Statistical expectations and quantum expectations

Thus a quantum measurement device is characterized formally by means of a discrete quantum measure. To go from detection events to measured numbers one needs to provide a **scale** that assigns to each detection element ##k## a real or complex number (or vector) ##a_k##. We call the combination of a measurement device with a scale a **quantum detector**. The statistical responses of a quantum detector define the **statistical expectation**

$$

\E(f(a_k)):=\sum_{k\in K} p_kf(a_k)

$$

of any function ##f(a_k)## of the scale values. As always in statistics, this statistical expectation is operationally approximated by finite sample means of ##f(a)##, where ##a## ranges over a sequence of actually measured values. However, the exact statistical expectation is an abstraction of this; it works with a nonoperational probabilistic limit of infinitely many measured values so that the replacement of relative sample frequencies by probabilities is justified. If we introduce the **quantum expectation**

$$

\langle A\rangle:=\tr\rho A

$$

of an operator ##A## and say that the detector **measures** the **quantity**

$$

A:=\sum_{k\in K} a_kP_k,

$$

it is easy to deduce from the main result the following version of **Born’s rule (BR)**:

- The statistical expectation of the measurement results equals the quantum expectation of the measured quantity.
- The quantum expectations of the quantum measure constitute the probability measure characterizing the response.

This version of Born’s rule applies without idealization to results of arbitrary quantum measurements.

(In general, the density operator is not necessarily normalized to intensity ##1##; without this normalization, we call ##\langle A\rangle## the **quantum value** of ##A## since it does not satisfy all properties of an expectation.)
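This version of Born's rule can be checked numerically. In the following sketch (my own construction; the unsharp two-outcome measure is purely illustrative), the statistical expectation ##\sum_k p_k a_k## and the quantum expectation ##\tr\rho A## of the quantity ##A=\sum_k a_k P_k## coincide:

```python
import numpy as np

# A two-outcome quantum measure on a qubit (an unsharp z-measurement, 0 < eta < 1)
eta = 0.8
P0 = 0.5 * (np.eye(2) + eta * np.diag([1.0, -1.0]))
P1 = np.eye(2) - P0
Pk, ak = [P0, P1], [+1.0, -1.0]           # scale values a_k for the two elements

rho = np.array([[0.6, 0.25], [0.25, 0.4]], dtype=complex)   # normalized state

A = sum(a * P for a, P in zip(ak, Pk))    # measured quantity A = sum_k a_k P_k
stat = sum(a * np.trace(rho @ P).real for a, P in zip(ak, Pk))  # E(a) = sum_k p_k a_k
quant = np.trace(rho @ A).real            # quantum expectation <A> = tr(rho A)
print(stat, quant)                        # the two expectations agree
```

The agreement is an identity of linear algebra: ##\sum_k a_k\tr\rho P_k=\tr\rho\sum_k a_k P_k##, which is why this form of Born's rule needs no spectral theory.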

## Projective measurements

The conventional version of Born’s rule – the traditional starting point relating quantum theory to measurement in terms of eigenvalues, found in all textbooks on quantum mechanics – is obtained by specializing the general result to the case of exact projective measurements. The spectral notions do not appear as postulated input as in traditional expositions, but as consequences of the derivation in a special case – the case where ##A## is a self-adjoint operator, hence has a spectral resolution with real eigenvalues ##a_k##, and the ##P_k## are the projection operators onto the eigenspaces of ##A##. In this special case, we recover the traditional setting with all its ramifications together with its domain of validity. This sheds new light on the understanding of Born’s rule and eliminates the most problematic features of its uncritical use.
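The projective special case can be made concrete in a few lines (again my own sketch): diagonalizing a Hermitian ##A## yields eigenvalues ##a_k## and eigenprojectors ##P_k##, for which the general response formula reduces to the textbook Born probabilities:

```python
import numpy as np

A = np.array([[1.0, 0.5], [0.5, -1.0]])           # a Hermitian quantity
rho = np.array([[0.75, 0.1], [0.1, 0.25]])         # a normalized density operator

# Spectral resolution: eigenvalues a_k, projectors P_k onto the eigenspaces
a, V = np.linalg.eigh(A)
Ps = [np.outer(V[:, k], V[:, k].conj()) for k in range(len(a))]

assert np.allclose(sum(a[k] * Ps[k] for k in range(len(a))), A)  # A = sum_k a_k P_k
p = np.array([np.trace(rho @ P).real for P in Ps])  # textbook Born probabilities
print(p, p.sum())                                   # probabilities summing to 1
print(sum(p * a), np.trace(rho @ A).real)           # both give <A>
```

Here the scale values are forced to be the eigenvalues; in the general (non-projective) case of the preceding sections, the ##a_k## are freely chosen by the experimenter's calibration.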

Many examples of realistic measurements are shown to be measurements according to the DRP but have no interpretation in terms of eigenvalues. For example, joint measurements of position and momentum with limited accuracy, essential for recording particle tracks in modern particle colliders, cannot be described in terms of projective measurements; Born’s rule in its pre-1970 forms (i.e., before POVMs were introduced to quantum mechanics) does not even have an idealized terminology for them. Thus the scope of the DRP is far broader than that of the traditional approach based on highly idealized projective measurements. The new setting also accounts for the fact that in many realistic experiments, the final measurement results are computed from raw observations, rather than being directly observed.

## Operational definitions of quantum concepts

Based on the detector response theorem, one gets an operational meaning for quantum states, quantum detectors, quantum processes, and quantum instruments, using the corresponding versions of quantum tomography.

In quantum state tomography, one determines the state of a quantum system with a ##d##-dimensional Hilbert space by measuring sufficiently many quantum expectations and solving a subsequent least squares problem (or a more sophisticated optimization problem) for the ##d^2-1## unknowns of the state. Quantum tomography for quantum detectors, quantum processes, and quantum instruments proceeds in a similar way.
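For a qubit (##d=2##, hence ##d^2-1=3## real unknowns) the reconstruction can be sketched as follows. This is a minimal illustration of the least squares step only, assuming noise-free expectations; real tomography works with finite-sample estimates and enforces positive semidefiniteness:

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def expectations(rho):
    """Measured quantum expectations <sigma_j> = tr(rho sigma_j)."""
    return np.array([np.trace(rho @ s).real for s in paulis])

def reconstruct(meas):
    """Least squares fit of the 3 Bloch unknowns r_j, then rho = (I + r.sigma)/2."""
    # Design matrix: tr(sigma_i sigma_j)/2 = delta_ij, so the fit is trivial here
    M = np.array([[np.trace(si @ sj).real / 2 for sj in paulis] for si in paulis])
    r, *_ = np.linalg.lstsq(M, meas, rcond=None)
    return 0.5 * (np.eye(2) + sum(rj * s for rj, s in zip(r, paulis)))

rho_true = np.array([[0.8, 0.1 + 0.2j], [0.1 - 0.2j, 0.2]])
rho_fit = reconstruct(expectations(rho_true))
print(np.allclose(rho_fit, rho_true))   # True: exact recovery from exact expectations
```

With noisy data the same linear system is merely overdetermined (one row per measured quantum expectation), and the least squares solution replaces the exact one.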

These techniques serve as foundations for far-reaching derived principles; for quantum systems with a low-dimensional density matrix, they are also practically relevant for the characterization of sources, detectors, and filters. A **quantum process**, also called a linear quantum filter, is formally described by a completely positive map. The operator sum expansion of completely positive maps forms the basis for the derivation of the dynamical laws of quantum mechanics – the **quantum Liouville equation** for density operators, the conservative time-dependent **Schrödinger equation** for pure states in a nonmixing medium, and the dissipative **Lindblad equation** for states in mixing media – by a continuum limit of a sequence of quantum filters. This derivation also reveals the conditions under which these laws are valid. An analysis of the oscillations of quantum values of states satisfying the Schrödinger equation produces the **Rydberg-Ritz combination principle** underlying spectroscopy, which marked the onset of modern quantum mechanics. It is shown that in quantum physics, normalized density operators play the role of phase space variables, in complete analogy to the classical phase space variables position and momentum. Observations with highly localized detectors naturally lead to the notion of quantum fields whose quantum values encode the local properties of the universe.

Thus the DRP leads naturally to all basic concepts and properties of modern quantum mechanics. It is also shown that quantum physics has a natural phase space structure where normalized density operators play the role of quantum phase space variables. The resulting quantum phase space carries a natural Poisson structure. Like the dynamical equations of conservative classical mechanics, the quantum Liouville equation has the form of Hamiltonian dynamics in a Poisson manifold; only the manifold is different.

## Philosophical consequences

The new approach has significant philosophical consequences. When a source is stationary, response rates, probabilities, and hence quantum values can be measured in principle with arbitrary accuracy, in a reproducible way. Thus they are operationally quantifiable, independent of an observer. This makes them **objective properties**, in the same sense in which positions and momenta are objective properties in classical mechanics. Thus quantum values are seen to be objective, reproducible **elements of reality** in the sense of the famous paper

- A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47 (1935), 777-781.

The assignment of states to stationary sources is as objective as any assignment of classical properties to macroscopic objects. In particular, probabilities appear – as in classical mechanics – only in the context of statistical measurements. Moreover, all probabilities are objective **frequentist probabilities** in the sense employed everywhere in experimental physics – classical and quantum. Like all measurements, probability measurements are of limited accuracy only, approximately measurable as observed relative frequencies.

Among all quantum systems, **classical systems** are characterized as those whose observable features can be correctly described by local equilibrium thermodynamics, as predicted by nonequilibrium statistical mechanics. This leads to a new perspective on the quantum measurement problem and connects to the **thermal interpretation** of quantum physics, discussed in detail in my 2019 book ‘Coherent Quantum Physics‘ (de Gruyter, Berlin 2019).

## Conclusion

To summarize, the new approach gives an elementary and self-contained deductive approach to quantum mechanics. A suggestive notion for what constitutes a quantum detector and for the behavior of its responses leads to a definition of measurement from which the modern apparatus of quantum mechanics can be derived in full generality. The statistical interpretation of quantum mechanics is not assumed, but the version of it that emerges is discussed in detail. The standard dynamical and spectral rules of introductory quantum mechanics are derived with little effort. At the same time, we find the conditions under which these standard rules are valid. A thorough, precise discussion is given of various quantitative aspects of uncertainty in quantum measurements. Normalized density operators play the role of quantum phase space variables, in complete analogy to the classical phase space variables position and momentum.

There are implications of the new approach for the foundations of quantum physics. By shifting the attention from the microscopic structure to the experimentally accessible macroscopic equipment (sources, detectors, filters, and instruments) we get rid of all potentially subjective elements of quantum theory. There are natural links to the thermal interpretation of quantum physics as defined in my book.

The new picture is simpler and more general than the traditional foundations, and closer to actual practice. This makes it suitable for introductory courses on quantum mechanics. Complex matrices are motivated from the start as a simplification of the mathematical description. Both conceptually and in terms of motivation, introducing the statistical interpretation of quantum mechanics through quantum measures is simpler than introducing it in terms of eigenvalues. To derive the most general form of Born’s rule from quantum measures one just needs simple linear algebra, whereas even to write down Born’s rule in the traditional eigenvalue form, unfamiliar stuff about wave functions, probability amplitudes, and spectral representations must be swallowed by the beginner – not to speak of the difficult notion of self-adjointness and associated proper boundary conditions, which is traditionally simply suppressed in introductory treatments.

Thus there is no longer an incentive for basing quantum physics on measurements in terms of eigenvalues – a special, highly idealized case – in place of the real thing.

## Postscript

In the meantime I revised the paper. The new version is better structured and contains a new section on high-precision quantum measurements, where the 12-digit accuracy determination of the gyromagnetic ratio through the observation and analysis of a single electron in a Penning trap is discussed in some detail. The standard analysis assumes that the single electron is described by a time-dependent density operator following a differential equation. While in the original papers this involved arguments beyond the traditional (ensemble-based and knowledge-based) interpretations of quantum mechanics, the new tomography-based approach applies without difficulties.

Full Professor (Chair for Computational Mathematics) at the University of Vienna, Austria

## Comments

Can you point to the report of another SG experiment that resolves single silver atoms?

Even then, one only measures atom position and computes from these measurements fairly crude approximations of ##\hbar##. It simply isn’t a projective (textbook) measurement.

You can understand the formalism by reading Sections 2 and 3 of my paper. I did not treat the Stern-Gerlach experiment in detail since a precise description is quite involved. But in Section 3 I discuss in some detail several other measurement situations, which should be enough to get a clear understanding of what the approach means.

Many papers and books using POVMs are quite abstract because they employ a measure-theoretic approach rather than simple quantum measures in the sense of my new paper. This is why my paper is a big step forward towards making the approach more understandable to everyone. Still, you need to do some reading to get the correct picture.

If your interpretation of the measurement results were correct, Stern and Gerlach could have deduced the value of ##\hbar## to infinite precision.

Instead I look at Figure 13 with the actual measurement results of Stern and Gerlach, and see (like Busch et al. in the quote and like everyone who can see) a large number of scattered dots, not two exact numbers involving ##\hbar##.

Clearly what was measured for each silver atom was position (with a continuous distribution), not spin. To turn these position measurements into a projective spin measurement of ##\hbar/2## or ##-\hbar/2## you need to invoke heavy idealization, including additional theory and uncontrolled approximations.

Which quote are you talking about? On p. 12 of your paper there’s the quote by Fröhlich:

with which I fully agree, of course, but that’s not referring to the SG experiment.

Of course, when you measure a spin component with the standard ideal SG setup, you don’t measure expectation values on a single silver atom but the spin component, which gives either ##\hbar/2## or ##-\hbar/2## as a result with a probability determined by the spin state ##\hat{\rho}## the silver atom is prepared in. When it comes from an oven as in the original experiment, this state is of course ##\hat{\rho}=\hat{1}/2##. It’s a paradigmatic example, for which a von Neumann filter measurement can be realized.

Invariance under general coordinate transformations is a consequence of Poincaré invariance together with the gauge structure of massless spin-2 particles. This was already shown by Weinberg in 1964. Thus no failure is expected, and there is no need to extend the causal formalism.

They are approximations emerging from the quantum fields under conditions corresponding to the validity of geometric optics; this makes them definitely not points. See the discussion in Section 7.1 of my paper (and far more details in my 2019 book on coherent quantum mechanics).

I haven’t seen work on such a relation but Tomonaga-Schwinger dynamics based on the perturbatively constructed fields should provide a connection.

For these, causal perturbation theory is not applicable.

This is built in into the causal approach.

Causal perturbation theory is consistent with the Wightman axioms. It constructs the Wightman N-point functions and field operators perturbatively in a mathematically rigorous way. The only missing thing to constructing Wightman fields is the lack of a rigorous nonperturbative resummation formula.

Stern and Gerlach obtained in their figure a huge number of distinct measurement outcomes, visible for everyone. Only idealization can reinterpret this as binary measurement outcomes 1 and -1.

By your reasoning, a low energy particle in a double well potential would only take two possible positions!!!

For the right – mathematically rigorous – way see this Insight article!

There are two distinct measurement outcomes predicted for a qubit and you are claiming the experimental result is a continuum. Therefore, you are claiming the QM prediction is wrong. It’s that simple.

I am claiming that the measurement results form a continuum and the binarization is an idealization. This is in agreement with experiment and with quantum mechanics.

Whether or not you are interested does not matter here.

Again, the mathematical description of the outcome is given by spin 1/2 qubit Hilbert space. If you disagree with that, then you are claiming QM is wrong and I am not interested.

Figure 13 in the reference you cited shows the Stern-Gerlach results. The picture agrees with the description in my quote: The split is

- not into two separate thin lines at 1 and -1, as you claim,
- but into two broad overlapping lips occupying in each cross section a continuous range, which may be connected or seemingly disconnected depending on where you draw the intersecting line.

Thus the measurement results form a bimodal continuum with an infinite number of possible values.

Here is what we cite: https://plato.stanford.edu/entries/physics-experiment/app5.html ; it contains reproductions of SG figures and results. There is nothing that contradicts QM spin 1/2 Hilbert space predictions therein. No experiment that I have seen does so, and everything that I’ve said here (as contained in our published papers https://www.mdpi.com/1099-4300/24/1/12 and https://www.nature.com/articles/s41598-020-72817-7) conforms to that fact. If you disagree with that, then you’re claiming QM is wrong.

No. The quote describes the experimental findings of the original paper by Stern and Gerlach. Nobody ever thought this would disagree with QM.

You probably never saw a discussion of the real experiment, only its heavily idealized caricature described in introductory textbooks on quantum mechanics!

It’s exactly true, it’s the expectation value for spin 1/2 measurements. I infer from the quote you reference that you therefore disagree with QM. I’m not doing that.

This was true in the old days before effective field theories were seriously studied.

But in modern terms, “nonrenormalizable” no longer means “not renormalizable” but only “renormalization defines an infinite-parameter family of theories”, while standard renormalizability means “renormalization defines a finite-parameter family of theories”. For example, QED is a 2-dimensional family of QFTs parameterized by 2 parameters (the electron mass and charge), while canonical quantum gravity defines an infinite-dimensional family of QFTs parameterized by infinitely many parameters (of which the gravitational constant is just the first).

I gave detailed references here: https://www.mat.univie.ac.at/~neum/physfaq/topics/renQG.html

This is far from true. See the quote at the top of p.12 of the paper summarized by the Insight article, and the book from which this quote is taken.

The die is the counterpart of a "classical bit," we’re talking about the qubit, they differ precisely as I (and Koberinski & Mueller) pointed out. That is, it makes no sense to talk about measurements that you would expect to yield 1.5 or 2.3, etc., for a die. But, when measuring a qubit, the measurement configurations of a particular state vary continuously between that yielding +1 and that yielding -1, so one would expect those "in-between" measurements to produce something between +1 and -1, e.g., Stern-Gerlach spin measurements. Instead, you still obtain +1 and -1, but distributed so they average to the expected intermediate outcome, e.g., via vector projection for SG measurements. Your approach simply articulates that fact without offering any reason for why we don’t just get the expected outcome to begin with.

The renormalization problem is independent of gravity, and can be understood independent of it.

The only apparent problem with gravity is its apparent nonrenormalizability, but this is not a real problem as discussed in the link mentioned in post #27.

This is against the conventions for good scientific conduct. Hiding such information may be good in a game but not in scientific discourse. If you don’t want to name authors use your own words and speak in your own authority!

I find this as little surprising as the case of measuring the state of a die by looking at the number of pips on its top face when the die comes to rest. Although the die moves continuously, we always get a discrete integer between 1 and 6.

Similarly, the measurement of a qubit is – by definition – binary. Hence it can have only two results, though the control in the experiment changes continuously.

To understand the mystery of the qubit, consider a measurement of some state that results in outcome O1 every time. Then suppose you rotate your measurement of that same state and obtain outcome O2 every time. We would then expect that a measurement between those two should produce an outcome between O1 and O2, according to some classical model. But instead, we get a distribution of O1 and O2 that averages to whatever we expected from our classical model. Here is how Koberinski & Mueller put it (as quoted in our paper https://www.mdpi.com/1099-4300/24/1/12):

> We suggest that (continuous) reversibility may be the postulate which comes closest to being a candidate for a glimpse on the genuinely physical kernel of “quantum reality”. Even though Fuchs may want to set a higher threshold for a “glimpse of quantum reality”, this postulate is quite surprising from the point of view of classical physics: when we have a discrete system that can be in a finite number of perfectly distinguishable alternatives, then one would classically expect that reversible evolution must be discrete too. For example, a single bit can only ever be flipped, which is a discrete indivisible operation. Not so in quantum theory: the state |0> of a qubit can be continuously-reversibly “moved over” to the state |1>. For people without knowledge of quantum theory (but of classical information theory), this may appear as surprising or “paradoxical” as Einstein’s light postulate sounds to people without knowledge of relativity.

So, your approach captures this averaging nicely and therefore will show how quantum results average to classical expectations for whatever experiment. But, it says nothing about why we don’t just get the value between O1 and O2 directly to begin with. That is what’s “surprising or ‘paradoxical’” about the qubit.

you can send him the link!

Renormalization does not go beyond the limits of quantum theory.

From a physics point of view, everything about renormalization is understood. The missing logical coherence (due to the lack of a rigorous nonperturbative version of renormalization) is a matter for the mathematicians to resolve.

and the other statements, including the first sentence?

If you write something without giving credits, everyone assumes it is your statement!

The semiclassical approximation of canonical quantum gravity is consistent with all we know. That it is aesthetically unsatisfying is not of true relevance.

Your deeper objection seems to have no substance that would allow one to make progress.

Whatever is taken as the currency in terms of which everything has to be explained or understood, it might be something effective due to an even deeper currency. We simply must start somewhere, and your deeper objection will always apply.

But according to current knowledge, quantum theory is a sufficient currency. Unlike in earlier ages, quantum theory explains the properties of physical reality (whatever it is, but in physics it certainly includes measurement devices!)

There are no experimental phenomena not accounted for by quantum physics, which can be taken to be the physics of the standard model plus modifications due to neutrino masses and semiclassical gravity, plus some version of Born’s rule, plus all approximation schemes used to derive the remainder of physics. Thus everything beyond that is just speculation without experimental support.

Yes, that’s what quantum tomography is about.

To accurately determine a momentum vector one also needs more than one measurement.

Thus I don’t see why your comment affects any of my claims.

It results in different emanating beams, though their properties are the same.

It’s an equivalence class only in the same irrelevant sense as in the claim that “momentum is an equivalence class of preparations of particles in a classical source”. Very different equipment can result in particles with the same momentum.

Using mathematical terminology to make such a simple thing complicated is quite unnecessary.

A single measurement of a system in local equilibrium leads to a fairly well-determined value for a current, say, and not to a random result.

Because my new approach goes beyond your minimal interpretation. You should perhaps first read the paper rather than base a discussion on just reading the summary exposition. There is a reason why I spent a lot of time to give detailed, physical arguments in the paper!

The point is the interpretation. In the latter formulation, that’s precisely what I mean when I say that ##\hat{\rho}## is an "equivalence class of preparation procedures". It’s an equivalence class, because very different equipment can result in the same "emanating beam".

This I don’t understand: A single measurement leads to some random result, but not the expectation value of these random results.

Now I’m completely lost again. In the usual formalism the statistical operator refers to the quantum state and not to an observable. To determine a quantum state you need more than one measurement (of a complete set of compatible observables). See Ballentine’s chapter (Sect. 8.2) on "state determination".

Not for a mathematician, who is familiar with measure theory and has mastered the subtleties of countable additivity….

But to a physics student you need to explain (and motivate in a physics context) the notions of a measure space, which is a lot of physically irrelevant overhead!

The German version of Wikipedia then simplifies to the case of a discrete quantum measure, which is already everything needed to discuss measurement!

There are two senses: one as a formal mathematical construct, giving quantum expectations, and the other in a theorem stating that when you do actual measurements, the limit of the sample means agrees with these theoretical quantum expectations.

I derive the thermal interpretation from this new approach. See Section 7.3 of my paper, and consider the paper to be a much more understandable pathway to the thermal interpretation, where in my book I still had to postulate many things without being able to derive them.

The beginnings are not much different, but they are already simpler than the minimal statistical interpretation – which needs nontrivial concepts from spectral theory and a very nonintuitive assertion called Born’s rule.

Please look at my actual claims in the paper rather than judging from the summary in the Insight article! EPR is discussed in Section 5.4. There I claim elements of reality for quantum expectations of field operators, not for Bell-local realistic theories! Thus Bell inequalities are irrelevant.

No. A (clearly purely mathematical) construction of equivalence classes is not involved at all!

A quantum source is a piece of equipment emanating a beam – a particular laser, or a fixed piece of radioactive material behind a filter with a hole, etc. Each quantum source has a time-dependent state ##\rho(t)##, which in the stationary case is independent of time ##t##.

The quantum state implies known values of all quantum expectations (N-point functions). This includes smeared field expectation values that are (for systems in local equilibrium) directly measurable without any statistics involved. It also includes probabilities for statistical measurements.

It takes a meaning independent of POVMs.

The German version is quite short, but it doesn’t seem to be too complicated.

It is a minimal well-motivated new foundation for quantum physics including its statistical interpretation, based on a new postulate from which POVMs and everything else can be derived. And it has consequences far beyond the statistical interpretation, see the key points mentioned in post #9.

The point is that there are two quantum generalizations of probability: the old (von Neumann) one based on PVMs (in the discrete case, orthogonal projectors summing to 1) and the more recent (1970+), far more generally applicable one based on POVMs. See the Wikipedia article mentioned in post #14.

Most things are worthless if you apply inadequate criteria for measuring their worth. The most expensive car is worthless if you want to climb a tree.
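
The difference can be made concrete with a standard textbook example (not from the paper): the "trine" measurement on a qubit, whose three effects sum to the identity but are not orthogonal projectors, so it is a POVM that is not a PVM.

```python
# Sketch of a non-projective POVM: the trine measurement on a qubit.
# Three effects (2/3)|psi_k><psi_k| with real unit vectors 120 deg apart.
import math

def outer(v):
    # |v><v| for a real 2-vector v
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

effects = []
for k in range(3):
    t = 2 * math.pi * k / 3
    v = (math.cos(t), math.sin(t))
    effects.append([[2 / 3 * x for x in row] for row in outer(v)])

total = effects[0]
for e in effects[1:]:
    total = add(total, e)
# total is (numerically) the 2x2 identity, so the effects sum to 1;
# yet each effect has trace 2/3, hence none is an orthogonal projector.
```

A PVM would instead consist of orthogonal projectors (e.g. onto the up and down eigenstates of ##\sigma_z##), each with trace 1.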

I didn’t set out to resolve what you regard here as a mystery. It is not needed for the foundations but a consequence of the general formalism once it has been derived.

I don’t see the qubit presenting a mystery. Everything about it was known in 1852, long before quantum mechanics got off the ground.
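
Stokes' 1852 description mentioned in the article can be sketched in a few lines (an illustrative example, not code from the paper): the Stokes parameters are read off from the density matrix as ##S_k = \tr\rho\sigma_k##, with ##S_0 = \tr\rho## the intensity.

```python
# Sketch: Stokes parameters of a polarization qubit, S_k = tr(rho sigma_k).
# The state rho below (fully x-polarized, unit intensity) is illustrative.

def trace_prod(a, b):
    # tr(a b) for 2x2 matrices
    return sum(a[i][k] * b[k][i] for i in range(2) for k in range(2))

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

rho = [[0.5, 0.5], [0.5, 0.5]]
stokes = [trace_prod(rho, s).real for s in (I2, sx, sy, sz)]
# stokes == [1.0, 1.0, 0.0, 0.0]: intensity 1, fully polarized along x
```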

Yes, Wikipedia describes them (at the very top of the section headed ‘Definition’) as the simplest POVMs. But the general concept (as defined in the Definition inside this section of Wikipedia) is an abstract monster far too complicated for most physics students.

Yes, entangled states produce CM results on average, but that statement simply ignores their violation of the Bell inequality, which can also be couched as a statistical, empirical fact. Indeed, the mystery of entanglement can also be shown empirically in very small (non-statistical) samples of individual measurements. This approach is therefore worthless for resolving that mystery. It does however marry up beautifully with the reconstruction of QM via information-theoretic principles, which does resolve the mystery of the qubit and therefore entanglement.

https://en.wikipedia.org/wiki/POVM

In the Insight article and the accompanying paper I only use the notion of a discrete quantum measure, defined as a finite family of Hermitian, positive semidefinite operators that sum to the identity. This is the quantum version of a discrete probability distribution: a finite family of probabilities summing to one. Thus on the level of foundations there is no need for the POVM concept.

The concept of POVMs is unnecessarily abstract, but there are simple POVMs equivalent to discrete quantum measures; see Section 4.1 of my paper.
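
A discrete quantum measure can be checked in a few lines (a sketch with illustrative numbers, not from the paper): a finite family of positive semidefinite matrices summing to the identity yields, via ##p_k = \tr\rho P_k##, nonnegative probabilities summing to one.

```python
# Sketch of a discrete quantum measure: Hermitian positive semidefinite
# matrices P_k summing to the identity, with probabilities p_k = tr(rho P_k).
# The unsharp two-outcome measurement and the state rho are illustrative.

def trace_prod(a, b):
    # tr(a b) for 2x2 matrices
    return sum(a[i][k] * b[k][i] for i in range(2) for k in range(2))

# An unsharp ("noisy") two-outcome measurement on a qubit;
# P[0] + P[1] is the identity, the quantum analogue of probabilities
# summing to one.
P = [[[0.8, 0.0], [0.0, 0.3]],
     [[0.2, 0.0], [0.0, 0.7]]]

rho = [[0.75, 0.25], [0.25, 0.25]]
probs = [trace_prod(rho, Pk) for Pk in P]   # [0.675, 0.325]
```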

The concept is nowhere needed in this approach to quantum mechanics, hence there is no mystery about it at this level.

Entangled states are just very special cases of density operators expressed in a very specific basis. They become a matter of curiosity only if one looks for extremal situations that can be prepared only for systems in which a very small number of degrees of freedom are treated quantum mechanically.

New compared to what?

For example, how did you know, from what I said in the very first sentence, about quantum phase space coordinates?

What I consider new for the general reader was specified at the beginning:

If you know how to do all this consistently you miss nothing. Otherwise you should read the full paper, where everything is argued in full detail, so that it can be easily integrated into a first course on quantum mechanics.

A function from a measure space to the space of bounded operators on a Hilbert space.

Physics?

More seriously, I don’t know what the equation you wrote means, so I cannot say what you miss.

As a layman in QM I looked up POVM and found a function ##\mu\, : \,\mathcal{A}\longrightarrow \mathcal{B(H)}## with ##0\leq \mu(A) \leq \operatorname{id}_{\mathcal{H}}## and self-adjoint operators as values. It seems to be the same difference as a distribution function is to a probability measure, i.e., more of a different wording than a different approach. Do I miss something?

Is there any new idea here? From this summary, which is written nicely and clearly, I have a feeling that I knew all this before. Do I miss something?

That confirms my (still superficial) understanding that now I’m allowed to interpret ##\hat{\rho}## and the trace operation as expectation values in the usual statistical sense, and that makes the new approach much more understandable than what you previously called the "thermal interpretation". I also think that the entire conception is not much different from the minimal statistical interpretation. The only change to the "traditional" concept seems to be that you use the more general concept of POVMs rather than only the von Neumann filter measurements, which are a special case.

The only objection I have is the statement concerning EPR. It cannot be right, because local realistic theories are not consistent with quantum-theoretical probability theory; this is proven by the quantum mechanical prediction of violations of Bell’s inequalities (and of related properties of quantum-mechanically evaluated correlation functions, etc.) and by the experimental confirmation of precisely these violations.

The upshot is: As quantum theory predicts, the outcomes of all possible measurements on a system, prepared in any state ##\hat{\rho}## (I take it that it is allowed also in your new conception to refer to ##\hat{\rho}## as the description of equivalence classes of preparation procedures, i.e., to interpret the word "quantum source" in the standard way), are not due to predetermined values of the measured observables. All the quantum state implies are the probabilities for the outcomes of measurements. The values of observables are thus only determined by the preparation procedure if they take a certain value with 100% probability. I think within your conceptual framework, "observable" takes a more general meaning as the outcome of some measurement device ("pointer reading"), definable in the most general sense as a POVM.
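
The point that a value is determined by the preparation only when it occurs with probability 1 can be checked directly (an illustrative sketch, not from the paper): a projective ##\sigma_z## measurement has a predetermined outcome for an eigenstate but only a probability for a mixed state.

```python
# Sketch: an observable has a determinate value only when the state
# assigns it probability 1. Projective sigma_z measurement on a qubit.

def trace_prod(a, b):
    # tr(a b) for 2x2 matrices
    return sum(a[i][k] * b[k][i] for i in range(2) for k in range(2))

P_up = [[1, 0], [0, 0]]     # projector onto the sigma_z = +1 eigenstate

rho_pure = [[1, 0], [0, 0]]        # prepared with sigma_z = +1
rho_mixed = [[0.5, 0], [0, 0.5]]   # maximally mixed state

p_pure = trace_prod(rho_pure, P_up)    # 1: the value is predetermined
p_mixed = trace_prod(rho_mixed, P_up)  # 0.5: only a probability
```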