vanhees71 said:
Of course, QFT is not restricted to relativistic QFT but can just as well be formulated for non-relativistic QM. In cases where you deal only with systems in which the conserved current has the meaning of a particle-number current, this QFT formulation of non-relativistic QM is equivalent to the first-quantization formulation, since then you always stay in the subspace of Fock space with the particle number that was fixed in the beginning.
The reason why there is no viable first-quantization formulation for relativistic QT with interacting particles is that the conserved currents of relativistic wave equations, which follow from invariance under proper orthochronous Poincaré transformations as the symmetry group of Minkowski space, provide no positive definite densities and are thus charge rather than particle-number densities. Further, to ensure causality in terms of the microcausality constraint in connection with local realizations of the Poincaré group, you need both annihilation and creation operators in the mode expansion of the free fields; and in the interacting case not the particle number but the charges are conserved, so you need the full Fock space to get a consistent description.
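For concreteness, the standard mode expansion of a free complex scalar field (natural units, conventions as in the usual QFT texts) shows explicitly why both operator types appear and why the conserved quantity is a charge:

$$\hat\phi(x) = \int \frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2E_{\mathbf p}}}\left(\hat a_{\mathbf p}\, e^{-ip\cdot x} + \hat b^\dagger_{\mathbf p}\, e^{+ip\cdot x}\right),\qquad E_{\mathbf p}=\sqrt{\mathbf p^2+m^2},$$

so that microcausality, ##[\hat\phi(x),\hat\phi^\dagger(y)]=0## for spacelike ##x-y##, holds precisely because the particle and antiparticle contributions cancel outside the light cone. The conserved Noether charge is ##Q = N_a - N_b##, the difference of particle and antiparticle numbers, which is not positive definite — a charge, not a particle number.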
My perspective was more the foundational one, i.e. the issues of interpreting and of mapping elements of the theory to elements of "reality" (by which I mean elements of the observer/agent) exist already in ordinary QM. But the problems become more accentuated when you consider a quantum field theory.
Suppose you have something like a Copenhagen interpretation of QM; what, then, is the Copenhagen interpretation of QFT?
I think of it a bit like this: if the quantum state represents the "state of knowledge" the observer has about a "particle", then Fock space corresponds to the "state of knowledge" the observer has about the "field". But in "second quantization" the "field" is not a classical field; it represents just one step in an induction chain — higher-level knowledge about the lower-level knowledge. That is, the observer starts (as part of internal processes, which are of course physical) to "reflect" over its own information, noting that, say, probabilities are not conserved and that the lower level of logical processing seems not to be viable. So a higher-level construction is to consider the "probability" for the "probability following from a simpler inference system", combined with a change of dependent variables, which can conceptually be thought of as a change of coding/representation into something more efficient (that is, say, easier to truncate). The change of variables from the KG to the Dirac equation is, I think, an "example".
Once you get the point of this, there is nothing to stop you from a third quantization, and thus an n'th quantization; each level represents a more complex inference system within the agent, affecting also its interaction properties. At which n does this stop? Possibly when the additional computational complexity of the higher orders stalls the agent more than it helps it?
The conventional method that you get taught isn't like this; it goes more like the following.
Often, the first step in introducing QFT is "trying" to just plug operators into the Klein-Gordon equation in order to illustrate that we get the strange negative-energy solutions and that conservation of probability goes down the drain — in short, it does not seem to make any sense. So we make, say, the ansatz of the linear Dirac equation, "relabel" some of the "particle states" so that one is the anti-particle, etc.
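To spell out the standard textbook observation being alluded to (natural units): a plane-wave ansatz in the Klein-Gordon equation

$$\left(\partial_t^2 - \nabla^2 + m^2\right)\psi = 0,\qquad \psi \propto e^{-i(Et - \mathbf p\cdot\mathbf x)} \;\Rightarrow\; E = \pm\sqrt{\mathbf p^2 + m^2},$$

yields both signs of energy, and the conserved density of the associated current,

$$\rho = i\left(\psi^*\,\partial_t\psi - \psi\,\partial_t\psi^*\right),$$

is negative on the negative-energy solutions. So ##\rho## cannot be a probability density, which is exactly why the single-particle interpretation breaks down at this step.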
But when making this change of "dependent variable", even mixing it up with the prior spacetime dynamics — what happens to the "interpretation" of the agent's information?
I think the conventional method is to just conclude that first quantization doesn't make sense, and neither does the above; so instead one just arbitrarily (by ansatz) introduces more dependent variables, whose "interpretation" relative to the prior, non-viable level is ignored.
One can also just say that the "ansatz" of the original observer was "wrong": i.e. the observer was convinced it was a one-particle system, but as the predictions based on that assumption ended up non-correctable by changing the initial conditions, he was "wrong", as it was a multi-particle system.
This disturbs me a lot, as it avoids the problems. I think one cannot question the observer's "best knowledge" unless it is of the "ignorance/Bell type". The incompleteness that is due to it seeming physically impossible for an agent to infer with certainty, say, the initial conditions and the effective laws will, I think, unavoidably have predictable consequences.
A. Neumaier said:
We just have a collection of ##N##-point functions for each Heisenberg state.
If by "we" we here mean the "observer/agent", the question I ask is: how did the agent infer the baggage implied here without adding in "external information"? (Avoiding such additions is at least what I strive for, though it is of course very difficult if not impossible.)
Or if by "we" we instead mean the collection of all "classical" measurement devices, distributed throughout the universe, that can build the records post-acquisition, then that line of thinking deviates from the path I try to stay on.
/Fredirk