A. Neumaier said:
So you are interpreting 'complete' not in the sense of Einstein (EPR, 1935) but in the sense of Born and Heisenberg (Como 1928). These two senses are the extreme poles between which the various interpretations must find a place.
My just-finished paper 'Quantum mechanics via quantum tomography' reconciles the two extremes. Probability in quantum physics gets the same status as probability in classical physics. This justifies Einstein's position while not changing the core of the position of Born and Heisenberg that quantum mechanics is a complete modeling framework for physics. But Born's view on probability is modified to accommodate the techniques of modern quantum information theory.
But probability in quantum physics does not have the same status as probability in classical physics, and I think that's the whole point of all these discussions about "interpretations".
Classical physics is a deterministic description of the phenomena, i.e., by definition all observables always take determined values, and the statistical/probabilistic description in classical statistical physics is due only to our ignorance of the details of a macroscopic system with very many degrees of freedom.
How much can be ignored is to some extent our choice of description. The first (and usually most important) step in describing a macroscopic system is the choice of the "relevant macroscopic observables" and of the level at which you want to describe them. Usually you start from the Liouville equation for the ##N##-particle distribution function, which is a complete description of the classical system; this has to be reduced to an effective statistical description in terms of the chosen "relevant observables" by treating the microscopic degrees of freedom statistically and then coarse graining.

One way is to derive the Boltzmann equation for the one-particle phase-space distribution function. Its equation of motion follows from the Liouville equation but contains the two-particle correlation function (the equation for the ##n##-particle distribution function contains the ##(n+1)##-particle distribution function, building up the BBGKY hierarchy). Then you truncate the BBGKY hierarchy by the molecular-chaos assumption, factorizing the two-particle distribution function and neglecting the piece describing two-particle correlations. Whether or not this is a good description can only be decided for each system under consideration (e.g., it is not a good description for a plasma, where you need to take the long-ranged Coulomb interaction into account, leading to the Vlasov(-Boltzmann) equation).
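Schematically, the first equation of the hierarchy and the molecular-chaos truncation look as follows (a sketch in standard kinetic-theory notation; signs, factors of ##N##, and normalization conventions vary between textbooks):
$$\partial_t f_1 + \frac{\vec{p}_1}{m}\cdot\vec{\nabla}_{x_1} f_1 = \int \mathrm{d}^3 x_2\,\mathrm{d}^3 p_2\,\left[\vec{\nabla}_{x_1} V(\vec{x}_1-\vec{x}_2)\right]\cdot\vec{\nabla}_{p_1} f_2(\vec{x}_1,\vec{p}_1;\vec{x}_2,\vec{p}_2;t),$$
$$f_2(\vec{x}_1,\vec{p}_1;\vec{x}_2,\vec{p}_2;t)\simeq f_1(\vec{x}_1,\vec{p}_1;t)\,f_1(\vec{x}_2,\vec{p}_2;t),$$
which closes the equation for ##f_1## and, after the standard manipulations, yields the Boltzmann collision term.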
In quantum theory the probabilities enter the description at the fundamental level; they do not enter the description of Nature due to our ignorance of the determined values of the observables, but because the observables do not necessarily take determined values at all. That this is a valid description is demonstrated by the clear observation of the violation of Bell's inequalities, which are predicted to hold for local deterministic hidden-variable theories. Of course, one way out might be a nonlocal deterministic hidden-variable theory, but I haven't seen any convincing one yet, at least not one taking relativity into account. For non-relativistic QM, Bohmian mechanics might be considered one nonlocal realization of such a picture.
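As a minimal numerical illustration of the violation (assuming only the standard singlet-state correlation ##E(a,b)=-\cos(a-b)##; the angles are the usual CHSH-optimal choice, and nothing here refers to any particular experiment):
```python
import numpy as np

# Standard QM prediction for the singlet-state spin correlation
# when measuring along directions at angles a and b in a plane:
def E(a, b):
    return -np.cos(a - b)

# The usual CHSH-optimal measurement angles.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(f"quantum CHSH value S = {S:.4f}")  # 2*sqrt(2) ≈ 2.8284
print("bound for local deterministic hidden-variable theories: S <= 2")
```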
For macroscopic systems the derivation of effective statistical theories is pretty similar to the classical case, and it remedies some conceptual problems of the classical theory by using the notion of indistinguishability (bosons/fermions in 3 spatial dimensions) as well as the Planck constant as a natural measure for phase-space volumes, resolving the problems with entropy (the Gibbs paradox, the statistical derivation of the 3rd Law).
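For the Gibbs paradox the point can be stated in two lines: the indistinguishability factor ##1/N!## together with ##h^3## per phase-space cell makes the ideal-gas entropy extensive. In standard notation,
$$Z_N=\frac{1}{N!}\left(\frac{V}{\lambda^3}\right)^{N},\qquad \lambda=\frac{h}{\sqrt{2\pi m k_B T}},$$
$$S=N k_B\left[\ln\frac{V}{N\lambda^3}+\frac{5}{2}\right],$$
which depends on the volume only through ##V/N##, so that mixing two samples of the same gas produces no spurious mixing entropy.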
I haven't yet read your new paper in detail, but I don't think that the use of the more general description of measurements with POVMs changes much about the fundamental content of quantum theory, at least not if I'm allowed to use it with the physical meaning it has, e.g., in Peres's textbook, which uses the usual probabilistic meaning of quantum states represented by statistical operators. I think that I can understand your paper in this sense too without running into contradictions, but then again I don't understand what your interpretation of the POVM formalism is on the fundamental/axiomatic level if I'm not allowed to interpret the symbols and their manipulations in the usual way (particularly building the usual trace for the "quantum expectation values" with the statistical operator in its usual probabilistic meaning). Otherwise I think the POVM formalism is indeed a way to describe measurements with real-world equipment more realistically.
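To make explicit what I mean by the usual probabilistic reading: a POVM is a set of effects ##E_k\ge 0## with ##\sum_k E_k=\hat{1}##, and the outcome probabilities are ##p_k=\mathrm{Tr}(\hat{\rho} E_k)##. A minimal sketch for an unsharp spin-##z## measurement on a qubit (the state and the sharpness parameter are made-up illustration values):
```python
import numpy as np

I2 = np.eye(2)
sigma_z = np.diag([1.0, -1.0])

# "Unsharp" spin-z POVM on a qubit: E_± = (I ± η σ_z)/2,
# with sharpness 0 < η <= 1 (η = 1 gives back the projective measurement).
eta = 0.8  # illustration value
effects = [(I2 + eta * sigma_z) / 2, (I2 - eta * sigma_z) / 2]

# Completeness: the effects sum to the identity.
assert np.allclose(sum(effects), I2)

# A statistical operator (Hermitian, positive, unit trace) -- illustration value.
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

# Born rule in POVM form: p_k = Tr(rho E_k).
probs = [np.trace(rho @ E).real for E in effects]
print(probs, "sum =", sum(probs))  # [0.66, 0.34] sum = 1.0
```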
What I also do not see is that a TPC (time-projection chamber) really contradicts the standard interpretation of measurements of position and momentum. What you really measure are indeed the "pointer readings", i.e., the "electric current signals" due to gas discharges of the particle "along its track". There is a "track" in the same sense as there is one in a cloud chamber, and its emergence is explained by standard quantum mechanics, as detailed by Mott. What you measure are thus the positions and times of the "electric current signals"; positions (like the "vertices of particle decays") are resolved within the resolution of the device, and energies and momenta are inferred from these "position-time measurements" via the curvature of the tracks in the applied magnetic field.
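The last inference step is just the standard relation between radius of curvature and transverse momentum, ##p_T\,[\text{GeV}/c]\approx 0.3\,q\,B\,[\text{T}]\,R\,[\text{m}]## (##q## in units of the elementary charge); a one-line sketch with made-up illustration values:
```python
# Transverse momentum from track curvature in a solenoidal field:
# p_T [GeV/c] ≈ 0.3 * q * B [T] * R [m], with q in units of the elementary charge.
def pt_from_curvature(radius_m, b_tesla, charge=1.0):
    return 0.299792458 * charge * b_tesla * radius_m

# Illustration: a track with 1.2 m radius of curvature in a 2 T field.
print(f"p_T ≈ {pt_from_curvature(1.2, 2.0):.3f} GeV/c")  # ≈ 0.720 GeV/c
```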