Micha,
OK, the UTK web document you quoted is essentially a course-notes version
that combines material covered in Peskin & Schroeder, ch. 2, with some
extra stuff.
In particular, UTK's eq. (1.4.6) is the equal-time (i.e., special) case of D(x-y):

D(x-y) = \frac{m}{4\pi^2\sqrt{-(x-y)^2}} ~ K_1(m \sqrt{-(x-y)^2})
This equal-time expression, involving the K_1 Bessel
function, is a special case of the more general expression I mentioned
in Scharf. But even in UTK's expression, you can see immediately that
it is 0 if m=0.
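If it helps to put numbers on the K_1 expression, here is a quick
scipy sketch of my own (the mass m and the spacelike separations r
below are just arbitrary choices, not anything from UTK or P&S); it
evaluates the formula and compares it with the large-(m r) asymptotic
form ~ exp(-m r), i.e. the exponentially suppressed tail outside the
light cone:

import numpy as np
from scipy.special import kv   # modified Bessel function of the second kind, K_nu

m = 1.0                                    # mass (arbitrary units)
r = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # spacelike separation r = sqrt(-(x-y)^2)

# D(x-y) = m/(4 pi^2 r) * K_1(m r), the expression quoted above
D = m / (4 * np.pi**2 * r) * kv(1, m * r)

# Large-argument asymptotic: K_1(z) ~ sqrt(pi/(2 z)) * exp(-z)
D_asym = m / (4 * np.pi**2 * r) * np.sqrt(np.pi / (2 * m * r)) * np.exp(-m * r)

for ri, Di, Dai in zip(r, D, D_asym):
    print(f"r = {ri:5.1f}   D = {Di:.3e}   asymptotic ~ {Dai:.3e}")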
Regarding the specific quote you mentioned:
Again, causality is due to non-trivial interference between
positive-energy modes (particles) propagating in one direction (x -> y)
and negative-energy modes (anti-particles) propagating in the opposite
direction (y -> x).
The discussion surrounding this in the UTK document is a bit brief.
P&S give more detail (on pp. 28-29).
My personal opinion is that the word "due" in the quote "causality is
due to non-trivial interference..." should be regarded as an
interpretation. In their approach, I would have said "causality is
recovered by appealing to non-trivial interference...".
To understand this, I'll summarize the standard (canonical)
approach to QFT (which is basically what UTK and P&S follow):
1) Start with a classical Lagrangian function over phase space.
Ensure it is a relativistic scalar. That is, ensure the Lagrangian
is compatible with (classical) special relativity.
2) "Quantize" it, which means "construct a mapping from functions
over phase space to operators on a Hilbert space". This is
non-trivial, but most textbooks do it very quickly by saying
"promote the classical field and its conjugate momentum to
operators, and impose canonical commutation relations between them"
(the standard relations are written out just after this list).
Then check that all the relativity transformations that were
applicable on the classical phase space are correctly represented
by operators on the Hilbert space, satisfying the Poincare commutation
relations.
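For a real scalar field (and in units with hbar = 1), those standard
equal-time relations are:

[\phi(\vec{x},t), \pi(\vec{y},t)] = i \delta^3(\vec{x}-\vec{y}),
[\phi(\vec{x},t), \phi(\vec{y},t)] = [\pi(\vec{x},t), \pi(\vec{y},t)] = 0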
Following this path, one then discovers the puzzle of the Feynman
propagator being non-zero outside the lightcone, in general. However,
this embarrassment can be interpreted away by appealing to real-world
measurements. E.g., P&S say on p. 28: "To really discuss causality,
however, we should ask not whether particles can propagate over
spacelike intervals, but whether a measurement performed at one point
can affect a measurement at another point whose separation from the
first is spacelike." Then they go on to show that such a relationship
between two measurements doesn't occur. However, they have to broaden
the context of their discussion to complex Klein-Gordon fields and
talk about particles and anti-particles.
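For reference, the cancellation at work is, schematically (writing it
for the complex Klein-Gordon field, so that the particle/anti-particle
language applies):

[\phi(x), \phi^\dagger(y)] = D(x-y) - D(y-x) = 0  for spacelike (x-y)

One term is the amplitude for a particle to propagate between the two
points, the other is the amplitude for an anti-particle to propagate
the opposite way; they cancel for spacelike separation because (x-y)
can then be taken into -(x-y) by a continuous Lorentz transformation,
so the two D's are equal. That is the "non-trivial interference" in
the quote above.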
My take on all this is that it's no surprise that they can derive
the result that measurements at spacelike intervals have no effect on each other,
because that's just basic special relativity, which was a crucial
input to the whole theory right from the start.
The difference between the above, and what I described in my
earlier post, is that the above tries to quantize the whole classical
phase space, whereas I restricted it to a mass hyperboloid first.
In one case, we find puzzling issues about the Feynman propagator
being non-zero outside the light cone. In the other, we find really
weird expressions for position states. IMHO, neither of these
approaches is entirely satisfactory (I think it's because of the
way Fourier transforms are used with gay abandon). Hence my earlier
post about the Heisenberg-Poincare group, though the latter is
still a rather speculative research topic.
You also said:
It is interesting though, that the negative energy solutions
(or suppressing them) play an important role in your approach as well.
In both approaches, this is built in from the start as an axiom, in
that both approaches assume positive energy, which is a
phenomenological expression of the fact that we don't experience
any form of backward time-travel. In my post, I used the
\Theta(E) factor to express this. In the canonical approach, this
is assumed implicitly in the way the Feynman propagator is chosen
(choosing which way to deform the energy integration contour).
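Concretely, the usual choice is the +i\epsilon prescription, which is
equivalent to deforming the contour so that (in the standard form):

D_F(x-y) = \int \frac{d^4p}{(2\pi)^4} \frac{i e^{-ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon}
         = \Theta(x^0-y^0) D(x-y) + \Theta(y^0-x^0) D(y-x)

i.e. the same kind of \Theta step functions appear, now in the time
variable, selecting which piece propagates forward in time.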
You also asked about my "|X>" states:
This means we are simply not able to produce a fully localized
single particle state in QFT. Honestly, I do not have an idea yet, how
these |X> states look like.
The |X> states look quite horrible, and I don't think they're even
well-defined. For example, the Fourier transform of the
nastily-discontinuous \Theta(E) function is something like:

-\frac{1}{it\sqrt{2\pi}} + \sqrt{\frac{\pi}{2}}~\delta(t)

and that's just the start of the nightmares in trying to find an
explicit expression for a general |X>.
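(For the record, that expression is just the epsilon-regularized
integral, in the e^{+iEt} convention, which I assume is the one in
use here:

\frac{1}{\sqrt{2\pi}} \int \Theta(E) e^{iEt} dE
 = \lim_{\epsilon \to 0^+} \frac{1}{\sqrt{2\pi}} \int_0^{\infty} e^{(it-\epsilon)E} dE
 = \lim_{\epsilon \to 0^+} \frac{1}{\sqrt{2\pi}} \frac{1}{\epsilon - it}
 = \sqrt{\frac{\pi}{2}}\,\delta(t) - \frac{1}{it\sqrt{2\pi}}

with the 1/t understood as a principal value.)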
That's why you hardly ever hear anything about such states in basic
textbooks. They're of no practical help when trying to derive
experimental consequences of QFT such as scattering cross-sections.
Unfortunately, that also encourages people to think that the ordinary
(x,t) coordinates of Minkowski space are somehow physically meaningful
in QFT, and then they derive various embarrassing theorems (e.g., EPR,
Reeh-Schlieder, etc.) which show that something is seriously wrong
somewhere. In these situations, it helps to think about the physically
more relevant |X> states.