A. Neumaier said:
My language is at least as standard as yours: before you can apply the spectral theorem in some Hilbert space to some operator, you need definitions of both! I define an inner product on ##L^2(\mathbb{R})## and then the operators ##p## and ##q##, to get the necessary Hilbert space and two particular operators on it. Having these definitions, I don't need the spectral theorem at all - except when I need to define transcendental functions of some operator.

The difference, given the position representation (or any other representation), is as follows:
What you call the minimal statistical or standard probabilistic interpretation uses this representation to define irreducible probabilities of measurement in an ensemble of repeated observations, and thus introduces an ill-defined notion of measurement (and hence the measurement problem - though you close your eyes to it) into the very basis of quantum mechanics. It is no longer clear when something counts as a measurement (so that the unitary evolution is modified) and when the Schrödinger equation applies exactly; neither does it tell you why the unitary evolution of the big system consisting of the measured objects and the detector produces definite events. All this leads to the muddy reasoning visible in the literature on the measurement problem.
The thermal interpretation uses this representation instead to define the formal q-expectation of an arbitrary operator ##A## for which the trace in the formal Born rule can be evaluated. (There are many of these, including many non-Hermitian ones and many Hermitian, non-selfadjoint ones.) This is the way q-expectations are used in all of statistical mechanics - including your slides. All this is on the formal side of the quantum formalism, with no interpretation implied, and no relation to observations. This eliminates the concept of probability from the foundations and hence allows progress to be made in the interpretation questions.
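As a small toy illustration of this purely formal notion (my own sketch, not from the post): in a finite-dimensional Hilbert space the q-expectation is just the trace ##\mathrm{Tr}(\rho A)##, and it is equally well defined for non-Hermitian operators, where it is in general complex:

```python
import numpy as np

# A valid 2x2 density matrix rho (Hermitian, positive, trace 1).
rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])

def q_expectation(rho, A):
    """Formal q-expectation <A> = Tr(rho A); A need not be Hermitian."""
    return np.trace(rho @ A)

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)  # Hermitian
raising = np.array([[0, 1], [0, 0]], dtype=complex)  # non-Hermitian

ex_h = q_expectation(rho, sigma_x)   # real, since rho and sigma_x are Hermitian
ex_nh = q_expectation(rho, raising)  # complex in general
```

Nothing in this definition refers to probabilities or measurements; it is pure linear algebra.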
That's a cultural difference between physicists and mathematicians: the mathematician can live with a set of rules (called axioms) without any reference to the "real world". Of course you can just start in the position representation, define a bunch of symbols called q-expectations, and then work out the mathematical properties of this notion. The physicist, however, needs a relation between the symbols and mathematical notions and observations in the lab. That's what's called interpretation. As with theory and experiment (theory is needed to construct measurement devices for experiments, which might lead to observations that contradict the very theory; then the theory has to be adapted, and new experiments can be invented to test its consequences and consistency, etc.), the interpretation is needed already for model building.
Now, I don't understand why I cannot interpret your q-expectations in the usual way, as probabilistic expectation values. This provides the first very natural connection to experiments, which always need statistical arguments to make objective sense. For a measurement to be credible you need to repeat the experiment under the same circumstances (in q-language: preparations of ensembles) and analyze the results both statistically and for systematic errors. The true art of experimentalists is not just to measure something but to have a good handle on the errors, and statistics, based on mathematical probability theory, is one of the basic tools of every physicist. This you get already in the first lesson of the introductory physics lab (to the dismay of most students, particularly the theoretically inclined, but it's indeed of vital importance, particularly for them ;-)).
Concerning QT, another pillar needed to make sense of the formalism, and also already part of the interpretation, is to find the operators that describe the observables. The most convincing argument is to use the symmetries known from classical physics, defining the associated conserved quantities via Noether's theorem. The minimal example for the first lessons of the QM1 lecture is the one-dimensional motion of a non-relativistic particle. There you have time-translation invariance leading to the time-evolution operator (whose generator is, in q-language, the Hamiltonian): one finds the corresponding symmetry transformations (unitary for continuous representations of Lie groups, thanks to Wigner's theorem) and the generators defining the observable operators. I guess this is the first place where the observables should be represented by essentially self-adjoint operators, leading to the unitary representations of the (one-parameter) Lie symmetry groups. Then of course you also have momentum from translation invariance along the one direction in which the particle is moving, and Galilei boosts to get a position operator from the corresponding center-of-mass observable (I leave out the somewhat cumbersome discussion of mass in non-relativistic physics, which can fortunately be postponed to the QM 2 lecture, if you want to teach it at all ;-)).
Then you may argue that one should work in the position representation to begin with; the above considerations then indeed lead to the operators of the "fundamental observables", position and momentum:
$$\hat{x} \psi(t,x) = x\, \psi(t,x), \qquad \hat{p} \psi(t,x) =-\mathrm{i} \partial_x \psi(t,x),$$
and the time-evolution equation (aka Schrödinger equation)
$$\mathrm{i} \partial_t \psi(t,x)=\hat{H} \psi(t,x).$$
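The momentum operator above (with ##\hbar=1##) can be checked in a toy numerical sketch of my own: on a periodic grid, ##\hat p = -\mathrm{i}\partial_x## computed spectrally sends a plane wave ##\psi(x)=e^{\mathrm{i}kx}## to ##k\,\psi(x)##, i.e. plane waves are its (improper) eigenfunctions:

```python
import numpy as np

L, N = 2 * np.pi, 256
x = np.linspace(0, L, N, endpoint=False)
dx = x[1] - x[0]
k = 3.0                         # integer multiple of 2*pi/L, so psi is periodic
psi = np.exp(1j * k * x)

# p = -i d/dx computed spectrally: fft -> multiply by i*kgrid -> ifft,
# then multiply by -i; combined, this is multiplication by kgrid.
kgrid = 2 * np.pi * np.fft.fftfreq(N, d=dx)
p_psi = np.fft.ifft(kgrid * np.fft.fft(psi))

# The plane wave is an eigenfunction: p psi = k psi, up to round-off.
err = np.max(np.abs(p_psi - k * psi))
```

This is of course only the formal, representation-side statement; the interpretational question discussed below is untouched by it.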
OK, but without the Born interpretation (for the special case of pure states and precise measurements) at hand, I don't know how to make the connection with real-world experiments.
It's an empirical fact that we can measure positions and momenta with correspondingly constructed macroscopic measurement devices. So we don't need to discuss the complicated technicalities of a particle detector that measures positions, or of a cloud chamber in a magnetic field that measures momenta and, via the energy loss (also based on theory, by Bethe and Bloch, by the way), provides particle ID, etc.
However, I don't see how you make contact with these clearly existing macroscopic "traces" of the microworld, which enable us to get quantitative knowledge about the microscopic entities we call, e.g., electrons, ##\alpha## particles, etc. Having the statistical interpretation at hand, it's well known how the heuristics proceeds, and as long as you don't insist that there is a "measurement problem", there is indeed none, because all I can hope from a theory, together with some consistent theoretical interpretation of its connection to these real-world observations, is to be consistent with these observations. You cannot expect it to satisfy your intuition from macroscopic everyday experience, which appears to be well described by deterministic classical theories. The point is that this is also true for coarse-grained macroscopic observables, and this is in accordance with quantum statistics too. To coarse-grain, of course, you need a description of the coarse-grained observables, for which you again need statistics.
So the big question, still unanswered for me, is precisely this interpretive part of the "thermal interpretation". It's an enigma to me how to make contact between the formalism (which also includes Ehrenfest's theorem, apparently another cornerstone of your interpretation) and the observations described above.
Then I note that the collection of all these q-expectations has a deterministic dynamics given by a Lie algebra structure, just as the collection of phase space functions in classical mechanics. In the thermal interpretation, the elements of both collections are considered to be beables.
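The deterministic Lie-algebra dynamics of the q-expectations can be made concrete in a small numerical sketch of my own (a two-level toy model, not taken from the post): ##\frac{d}{dt}\langle A\rangle_t = \langle \mathrm{i}[H,A]\rangle_t##, the quantum analogue of the Poisson-bracket evolution of phase space functions, with no probabilistic input anywhere:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 1.3
H = 0.5 * omega * sz
rho0 = np.array([[0.6, 0.3], [0.3, 0.4]], dtype=complex)  # Hermitian, positive, trace 1

def U(t):
    # exp(-i H t); H is diagonal here, so this is a diagonal phase matrix
    return np.diag(np.exp(-1j * np.array([0.5 * omega, -0.5 * omega]) * t))

def rho_t(t):
    return U(t) @ rho0 @ U(t).conj().T

def expect(rho, A):
    return np.trace(rho @ A).real

t, dt = 0.7, 1e-5
# Numerical d<sx>/dt via a central difference ...
lhs = (expect(rho_t(t + dt), sx) - expect(rho_t(t - dt), sx)) / (2 * dt)
# ... versus <i[H, sx]>, the commutator (Lie-bracket) dynamics.
comm = 1j * (H @ sx - sx @ H)
rhs = expect(rho_t(t), comm)
```

The two sides agree to the accuracy of the finite difference; the collection of all such q-expectations thus evolves deterministically.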
Then I note that in statistical thermodynamics of local equilibrium, the q-expectations of the fields are actual observables, as they are the classical observables of fluid mechanics, whose dynamics is derived from the 1PI formalism - in complete analogy to your 2PI derivation of the Kadanoff-Baym equations. In practice one truncates to a deterministic dissipative theory approximating the deterministic dynamics of all q-expectations. This gives a link to observable deterministic physics - all of fluid mechanics, and thus provides an approximate operational meaning for the field expectations. This is not worse than the operational meaning of classical fields, which is also only approximate since one cannot measure fields at a point with zero diameter.
Yes, this is all very clear, as soon as I have the statistical interpretation and have extended it to "incomplete knowledge" and thus statistical operators to define non-pure states (i.e., states of non-zero entropy, and thus implying incomplete knowledge). If I have just an abstract word like "q-expectations", there's no connection with classical (ideal or viscous) hydro. If I'm allowed to interpret "field expectations" in the usual way, probabilistically, this is all well established. By the way, it's no problem in principle to use QFT instead of the "first-quantization" formalism.
Then I prove that, under certain circumstances and especially for ideal binary measurements (rather than assuming it always holds, or holds under unstated conditions), Born's interpretation of the formal Born rule as a statistical ensemble mean is valid. Thus I recover the probabilistic interpretation in the cases where it is essential, and only there, without having assumed it anywhere.
Well, but you need this probabilistic interpretation before you can derive hydro from the formalism. If not, I've obviously not realized where and how this crucial step is done within your thermal interpretation.
What then is the meaning of the expectation in this case? It is just a formal q-expectation defined via the trace. Thus you should not complain about my notion!
Born's rule only enters when you interpret S-matrix elements or numerical simulation results in terms of cross sections.
It was about the Green's function in QFT, or field correlators like $$\mathrm{i} G^{>}(x,y)=\mathrm{Tr}\, \hat{\rho} \hat{\phi}(x) \hat{\phi}(y).$$ Of course, that's not an expectation value of anything observable. It's not forbidden to use such auxiliary functions in math to evaluate the observable quantities. Why should it be? As Heisenberg already learned from Einstein, the strictly positivistic approach (i.e., working only with observable quantities) is neither necessary nor possible in theoretical physics. Also in classical electrodynamics you quite often work with the clearly unobservable potentials to derive the observable quantities (the electromagnetic fields; or, to be more precise, the observable facts we understand as caused by the interaction of the charged matter making up the detectors (e.g., our eyes) with the field, in the standard interpretation of classical electromagnetism).
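That such a correlator is an auxiliary quantity rather than the expectation of an observable can be seen in a toy sketch of my own (a single harmonic oscillator in a truncated Fock basis, with ##\hbar=m=\omega=1##, standing in for the field mode): ##G(t)=\mathrm{Tr}\,\hat\rho\,\hat x(t)\hat x(0)## comes out complex, here ##e^{-\mathrm{i}t}/2## for the ground state:

```python
import numpy as np

N = 30                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)         # position operator
E = np.arange(N) + 0.5                    # oscillator energies

rho = np.zeros((N, N), dtype=complex)     # ground-state density matrix
rho[0, 0] = 1.0

t = 0.8
U = np.diag(np.exp(-1j * E * t))          # exp(-i H t); H is diagonal here
x_t = U.conj().T @ x @ U                  # Heisenberg-picture x(t)

G = np.trace(rho @ x_t @ x)               # Tr(rho x(t) x(0)); complex in general
```

Its real and imaginary parts encode the symmetric correlation and the commutator, from which observable response functions are then built.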