# Unitarity in field theory

What is the reasoning for saying that the scattering matrix in quantum field theory is unitary?

Take the initial state to be an electron and a positron. All sorts of final states can result, from photons to Z's to Higgs bosons to an electron/positron pair with different momenta, to quark/gluon/hadron jets. There is also the most likely outcome, which is that the electron and positron miss each other entirely and don't interact. So do all these probabilities sum to 1?
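For concreteness, "summing to 1" is the statement of S-matrix unitarity: if $S^\dagger S = 1$ and the final states $|f\rangle$ form a complete set, then for any normalized initial state $|i\rangle$,

$$\sum_f |\langle f|S|i\rangle|^2 = \sum_f \langle i|S^\dagger|f\rangle\langle f|S|i\rangle = \langle i|S^\dagger S|i\rangle = \langle i|i\rangle = 1.$$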

What if there's a particle yet to be discovered that can be produced in electron/positron collisions? Then any proof that the Standard Model obeys unitarity would seem to fail: the Standard Model's channels already sum to a probability of 1, so adding the new particle's channel would push the total above 1.

Or was unitarity built in from the beginning, somewhere in the path integral (a normalization?)? The starting point for scattering calculations is $$W[J]=\langle 0|0\rangle_J$$ (the vacuum-to-vacuum amplitude in the presence of a source), which is then fed into the LSZ reduction formula. So was unitarity inserted by hand somewhere at that juncture?
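Schematically (using a single scalar field as the simplest example), the object in question is the path integral with a source term,

$$\langle 0|0\rangle_J = \int \mathcal{D}\varphi \, \exp\!\left[\,i\!\int d^4x \,\big(\mathcal{L} + J\varphi\big)\right],$$

conventionally normalized so that $\langle 0|0\rangle_{J=0} = 1$.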

Ben Niehoff
Gold Member
If a new particle is discovered that can be produced by a given collision, then ipso facto the entire Standard Model is wrong (since it does not include said particle). Then a new model, including the new particle, must be constructed such that it obeys unitarity.

Unitarity comes about because we divide out by the total partition function...the same way things work in statistical mechanics.

So if there is a new particle, the Feynman rules for the old particles would be different?

Or does dividing out the total partition function manifest itself in renormalization?

Usually $\langle 0|0\rangle$, the vacuum-to-vacuum amplitude in the absence of a source, is normalized to 1. I think $\langle 0|0\rangle$ most resembles the partition function in statistical mechanics, since it has the exponential of the Hamiltonian sandwiched inside, rather like the canonical partition function. So is setting $\langle 0|0\rangle = 1$ how you ensure unitarity? And does this really affect the Feynman rules for the rest of the particles?
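The resemblance can be made explicit: rotating to imaginary time, $t \to -i\tau$, turns the vacuum-to-vacuum amplitude into an object of the same form as the canonical partition function,

$$\langle 0| e^{-iHT} |0\rangle \;\longleftrightarrow\; Z(\beta) = \mathrm{Tr}\, e^{-\beta H},$$

with the total (Euclidean) time playing the role of the inverse temperature $\beta$.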

Ben Niehoff
Gold Member
The path integral is (roughly) the same thing as a partition function.

If the new particle interacts with the old particles, then yes, you'd have to add Feynman rules that involve these new interactions.

In the quantum mechanics of particles, unitarity is ensured by making sure all additional interactions are written with a Hermitian Hamiltonian, for then the imaginary exponential is unitary.
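That statement ($H$ Hermitian $\Rightarrow$ $e^{-iHt}$ unitary) is easy to check numerically. A minimal sketch, using a randomly generated Hermitian matrix as a stand-in finite-dimensional "Hamiltonian" (everything here is just for illustration):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Build a random Hermitian matrix H = (A + A^dagger)/2.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Time-evolution operator U = exp(-iHt) for t = 1.
U = expm(-1j * H)

# U^dagger U should be the identity, i.e. U is unitary.
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True
```

The same check fails if you drop the Hermiticity of H (e.g., use A directly), which is the point: unitarity of the evolution is inherited from Hermiticity of the generator.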

I guess what bothers me is this: if there are new particles, then the probabilities calculated using only the old particles should be reduced to make room for the new ones. For example, if the Higgs exists, then the probability of e+e- going to $\gamma\gamma$ should be smaller because of the possibility of e+e- going to HH. But the Feynman rules seem to stay the same if you just use the old particles and ignore the HH channel. Maybe the change in probability is too small to matter, or maybe it's already accounted for in the interactions (since you can't just turn off the interactions) by normalizing, i.e., dividing by the partition function.

Ben Niehoff
Gold Member
"You can't just turn off interactions" is exactly the point I'm trying to make. A model with Particle X and a model without Particle X are NOT the same model. One includes e+e- going to XX, and the other does not.

Say model A contains (e+e- to XX), and model B does not. In the context of model A, the probabilities in model B really mean conditional probabilities; i.e., "probability of (e+e- to $\gamma\gamma$) given that (e+e- to XX) does not happen". It is perfectly sensible (in fact required) that all the conditional probabilities given "not (e+e- to XX)" add up to 1. Likewise, all the TOTAL probabilities in model A must also add up to 1.
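The bookkeeping in that argument can be sketched with made-up numbers (the branching fractions below are purely illustrative, not measured values):

```python
# Hypothetical outcome probabilities in "model A", which includes e+e- -> XX.
p_A = {
    "gamma gamma": 0.30,
    "Z Z": 0.45,
    "no interaction": 0.15,
    "X X": 0.10,
}
# Total probabilities in model A sum to 1.
assert abs(sum(p_A.values()) - 1.0) < 1e-9

# "Model B" probabilities are model A's conditioned on (not X X):
# divide each remaining channel by P(not X X).
p_not_XX = 1.0 - p_A["X X"]
p_B = {k: v / p_not_XX for k, v in p_A.items() if k != "X X"}

# The conditional probabilities also sum to 1 (up to floating-point rounding).
print(sum(p_B.values()))
```

Each model is internally unitary; they just answer different questions about the same collision.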

Haelfix
The proof of the unitarity of the S-matrix is given in Weinberg, Vol. 1. In many ways it's there by construction in the in/out formalism, but in any event it's written out explicitly.
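For reference, one standard form of that statement: writing $S = 1 + iT$, the unitarity condition $S^\dagger S = 1$ becomes

$$-i\,(T - T^\dagger) = T^\dagger T,$$

the generalized optical theorem, which relates the imaginary part of an amplitude to a sum over intermediate states. This is the constraint that every channel, old or new, must feed into.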

Makes sense now. Thanks.

I think things get slightly more complicated once you consider renormalization, because the couplings you measure are the result of all interactions, known or unknown, and you then plug those couplings into a model that contains only the known part. But then, like you said, you'd eventually notice something is amiss, conclude the theory isn't right, and construct a new theory that makes everything consistent.

I'll go through the in/out formalism again (the LSZ reduction). I guess I didn't go through it carefully enough the first time; it seemed like hand-waving at the time. I read Srednicki's book rather than Weinberg's, though, and I don't recall Srednicki ever pointing out in his derivation of the in/out formalism that unitarity was built in. That seems like something important to point out to a student.