# Rigorous Quantum Field Theory.

by DarMM
Tags: field, quantum, rigorous, theory
PF Patron
Emeritus
P: 8,837
 Quote by meopemuk If "rigorous QFT" is just a mathematical exercise, then you don't need to answer my questions. However, if it pretends to be a physically relevant theory then I would like to see connections to experimentally measured stuff at each step along the way.
This looks like a very strange requirement. I would say that a theory of physics is defined by a set of axioms that tells us how to associate probabilities with possible results of experiments. So I don't think we need "connections to experimentally measured stuff at each step along the way". Why isn't it sufficient that the end result is a theory that tells us how to calculate probabilities of possibilities?
P: 1,742
 Quote by Fredrik This looks like a very strange requirement. I would say that a theory of physics is defined by a set of axioms that tells us how to associate probabilities with possible results of experiments. So I don't think we need "connections to experimentally measured stuff at each step along the way". Why isn't it sufficient that the end result is a theory that tells us how to calculate probabilities of possibilities?
It would be OK to have some abstract axioms if they allowed us to calculate all possible physical results in a consistent manner. Then there will be no urgency in understanding the physical meaning of these axioms. However the problem is that modern QFT is far from being a successful theory like that. There are numerous problems and inconsistencies in QFT (ultraviolet divergences, the lack of finite time evolution operator, to name a few). Even proponents of "rigorous QFT" would agree that their rigorous approach works only in toy model theories.

I think in order to move forward we need to understand exactly what we are doing in QFT. It would be nice to revisit (Wightman's) axioms to see what their physical meaning is (if any). For example, one axiom postulates how quantum fields transform with respect to inertial frame changes (see my post #227). DarMM has agreed with me that quantum fields are not directly observable objects/properties. This means that the mentioned transformation law cannot be verified in experiments even in principle. So, this transformation law is simply an unjustified assumption. There is a good chance that this assumption is just wrong. Then no matter how "rigorous" our math is, we won't get anything useful from a wrong axiom.

Eugene.
PF Patron
Emeritus
P: 8,837
 Quote by meopemuk It would be OK to have some abstract axioms if they allowed us to calculate all possible physical results in a consistent manner. Then there will be no urgency in understanding the physical meaning of these axioms.
Are you saying that a successful rigorous QED in 3+1 dimensions wouldn't associate probabilities with possible results of experiments in a consistent manner? Probably not, but if that's not what you're saying, I really don't know what your argument is. It sounds like you're just saying that there's still work to be done in rigorous QFT, and that we shouldn't be doing that work because it hasn't been done already.

 Quote by meopemuk However the problem is that modern QFT is far from being a successful theory like that. There are numerous problems and inconsistencies in QFT (ultraviolet divergences, the lack of finite time evolution operator, to name a few).
You're describing the problems with non-rigorous QFT. Isn't this precisely what rigorous QFT is trying to do something about?

People were probably saying the same thing about the Dirac delta in 1930. Do you also think that the "inconsistencies" of the delta "function" made it pointless to develop distribution theory?

 Quote by meopemuk Even proponents of "rigorous QFT" would agree that their rigorous approach works only in toy model theories.
So? To me that sounds like a good reason to continue with this, and not at all like a reason to give up. I know how to prove that the group of transition functions between inertial coordinate systems in 1+1-dimensional SR is either the Galilei group or isomorphic to the Poincaré group, given a few reasonable assumptions about the properties of those functions. But I haven't been able to do it in 3+1 dimensions. Does the fact that I've only been able to prove it for a "toy model" mean that the whole idea is flawed? (It certainly doesn't).

 Quote by meopemuk So, this transformation law is simply an unjustified assumption.
The time when we could make progress by only trying out assumptions that had already been verified by experiments (like the invariance of the speed of light) is long gone. We have no choice but to make "unjustified" assumptions and see what theories we end up with. And the specific assumption you mention, isn't that a formula that shows up in all the non-rigorous QFTs that make absurdly accurate predictions about results of experiments? I'm having a hard time imagining a better justification than that.

 Quote by meopemuk There is a good chance that this assumption is just wrong. Then no matter how "rigorous" our math is, we'll not get anything useful from a wrong axiom.
Technically all axioms in all theories are wrong, but I guess you mean that this one could be so wrong that the theory it produces will make predictions that are clearly inconsistent with the results of experiments. That's a possibility, but there's no way to know unless we actually find the theory first so that we can see what its predictions are.
P: 1,472
 Quote by DarMM In the $$H_{I}(\phi_{0})$$ term we have: $$H_{I}^{\Lambda}(\phi_{0}) = \int_{\Lambda}{\phi_{0}^{4}dx}$$ The issue here is $$\phi_{0}^{4}$$. The free field $$\phi_{0}$$ is an OVD (operator-valued distribution); like anything distributional, its fourth power isn't clearly defined. Can we find a meaning for the fourth power? By a meaning for the fourth power I mean: 1. It is well-defined as an OVD. 2. When integrated solely in space it leads to a well-defined operator. The first condition can be satisfied quite easily. Wick discovered the correct definition of powers of the free field with the normal ordering prescription. Hence instead of $$\phi_{0}^{4}$$, we use $$:\phi_{0}^{4}:$$. This results in a well-defined OVD.
I don't understand how normal-ordering $:\phi_{0}^{4}:$ gives a well-defined OVD,
since there's still quartic products of creation operators therein. What am I missing?
P: 278
 Quote by strangerep I don't understand how normal-ordering $:\phi_{0}^{4}:$ gives a well-defined OVD, since there's still quartic products of creation operators therein. What am I missing?
Two ways of seeing it:
1. If you integrate $:\phi_{0}^{4}:$ against a test function it always results in a densely-defined operator. This is in contrast to $\phi_{0}^{4}$, which after smearing does not give a densely-defined operator.
2. From a Feynman graph point of view, any graphs associated with $:\phi_{0}^{4}:$ do not contain tadpole loops. Tadpole loops are the only ultraviolet divergent loops in 2D, so it is ultraviolet finite.

However when I say a well-defined OVD, I mean (1.); (2.) is just for perturbative intuition.
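[Editorial aside, not from the thread: DarMM's point (2.) can be made concrete numerically. In 2D the only UV-divergent loop is the tadpole self-contraction $\langle\phi_{0}^{2}\rangle = \int \frac{dk}{4\pi\sqrt{k^{2}+m^{2}}}$, which diverges logarithmically in a momentum cutoff $K$; normal ordering subtracts exactly this quantity. A minimal sketch, assuming $m = 1$ and a crude midpoint rule:]

```python
import math

def tadpole(K, m=1.0, n=200_000):
    """Cutoff tadpole integral: int_{-K}^{K} dk / (4*pi*sqrt(k^2 + m^2)).

    This is the self-contraction of the 2D free field with momentum cutoff K.
    It grows like log(K) -- the only ultraviolet divergence in 2D, which is
    removed by normal ordering. Crude midpoint rule, purely for illustration.
    """
    h = 2 * K / n
    return sum(h / (4 * math.pi * math.sqrt((-K + (i + 0.5) * h) ** 2 + m * m))
               for i in range(n))

# Increasing the cutoff tenfold adds approximately log(10)/(2*pi) ~ 0.3665
# once K >> m, independent of K: the characteristic logarithmic divergence.
print(tadpole(100.0) - tadpole(10.0))
print(tadpole(1000.0) - tadpole(100.0))
```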
P: 1,742
 Quote by Fredrik Are you saying that a successful rigorous QED in 3+1 dimensions wouldn't associate probabilities with possible results of experiments in a consistent manner? Probably not, but if that's not what you're saying, I really don't know what your argument is. It sounds like you're just saying that there's still work to be done in rigorous QFT, and that we shouldn't be doing that work because it hasn't been done already. You're describing the problems with non-rigorous QFT. Isn't this precisely what rigorous QFT is trying to do something about?
I believe that people doing "rigorous QFT" are trying to solve these problems (e.g., time evolution and renormalization). I wish them well. However, in my personal (uneducated) opinion, they chose a wrong (formalistic) approach. I think one can also try an alternative approach which pays more attention to the physical meaning of theoretical constructions.

 Quote by Fredrik And the specific assumption you mention, isn't that a formula that shows up in all the non-rigorous QFTs that make absurdly accurate predictions about results of experiments? I'm having a hard time imagining a better justification than that.
Yes, the formula for field transformations is a necessary ingredient of all quantum field theories. However, note that this formula applies to non-interacting fields only. Actually, according to Weinberg, non-interacting fields are specifically defined in such a way that this "Lorentz" transformation law is valid. The reason given by Weinberg is that if we build interactions as products of fields defined this way, then the theory becomes Poincare-invariant and cluster separable automatically.

Wightman's axioms go beyond that and postulate that the same transformation law should be valid for interacting fields as well. As far as I know, there is no justification for this requirement. Moreover, Haag's theorem (in the formulation given by Greenberg) says that if interacting fields transform like that (plus some other conditions, which I find reasonable and therefore omit) then the theory must be equivalent to the non-interacting one.
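[Editorial aside, for readers following along: the transformation law under discussion is the standard relativistic covariance axiom, which for a single scalar field reads (my transcription of the textbook statement, not a quote from either poster)

$$U(a,\Lambda)\,\phi(x)\,U(a,\Lambda)^{-1} = \phi(\Lambda x + a)$$

where $U(a,\Lambda)$ is the unitary representation of the Poincaré group on the theory's Hilbert space. Weinberg constructs free fields so that this holds (with an extra matrix $D(\Lambda^{-1})$ acting on the components in the non-scalar case); Wightman's axioms postulate it for the interacting field itself.]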

I have a strong feeling that if one succeeded in constructing the interacting field operators in QED (which is an absurdly accurate theory, as you say) one would find that the "Lorentz" transformation law does not apply to them. Unfortunately, as far as I know, nobody has been able to construct interacting fields in QED in any reasonable approximation and study their inertial transformations. However, this kind of study has been performed in a simple model example.

H. Kita, "A non-trivial example of a relativistic quantum theory of particles without divergence difficulties", Prog. Theor. Phys. 35 (1966), 934.

 Quote by Fredrik Technically all axioms in all theories are wrong, but I guess you mean that this one could be so wrong that the theory it produces will make predictions that are clearly inconsistent with the results of experiments. That's a possibility, but there's no way to know unless we actually find the theory first so that we can see what it's predictions are.
The answer is given in Haag's theorem: If the "Lorentz" transformation condition is postulated for interacting fields, then there can be no interaction. So a theory having this postulate is simply inconsistent. (DarMM will tell you that there CAN be interaction, but in a different Hilbert space. That's something I can't comprehend.)

Eugene.
P: 278
 Quote by meopemuk I believe that people doing "rigorous QFT" are trying to solve these problems (e.g., time evolution and renormalization). I wish them well. However, in my personal (uneducated) opinion, they chose a wrong (formalistic) approach. I think one can also try an alternative approach which pays more attention to the physical meaning of theoretical constructions.
Yes, but remember that the rigorous approach is the only one which has accomplished this goal nonperturbatively in any model. And it has done so in several models in two and three dimensions. It hasn't reached four dimensions yet, but it has a better track record than approaches which have accomplished nothing nonperturbatively.

 Wightman's axioms go beyond that and postulate that the same transformation law should be valid for interacting fields as well. As far as I know, there is no justification for this requirement.
No justification is a bit of a stretch. Let me list the theories where it is known to be true:
1. All pure scalar theories in 2D
2. All pure scalar theories in 3D
3. All Yukawa theories in 2D
4. All Yukawa theories in 3D
5. Yang-Mills in 2D
6. The Abelian Higgs-Model in 2D and 3D
7. The Gross-Neveu model in 2D and 3D
8. The Thirring model
and finally
9. All scalar theories in 4D.

The caveat on (9.) is that the only purely scalar theory which exists in 4D is probably the trivial one. However any field theory which exists has been proven to have this transformation property.
This list is basically every single theory we have constructed and understood nonperturbatively. So for every theory we have nonperturbative knowledge of, the transformation law holds.

The list of theories which exist nonperturbatively and don't obey the transformation law is an empty list. Hence I would say the assumption is justified, or at least far more justified than its negation.

 The answer is given in Haag's theorem: If the "Lorentz" transformation condition is postulated for interacting fields, then there can be no interaction. So a theory having this postulate is simply inconsistent.
I don't know how many times I can repeat this, that is not what Haag's theorem says. Not even Shirokov, in the paper you quoted, mentions this. To transcribe what Haag's theorem says, again, into language you might understand:

Haag's theorem says that if the theory lives in the same Hilbert space as the free theory and obeys relativistic transformations and is translationally invariant, then it is free.

That is it says:
(Same Hilbert space) + (Normal transformation law) + (Translational invariance) => Non-interacting

It does not say:
(Normal transformation law) => Non-interacting.

 DarMM will tell you that there CAN be interaction, but in a different Hilbert space. That's something I can't comprehend.
It doesn't matter if you can't comprehend it or that I'm saying it. It is true and has been known to be true since 1969. I have even left references to papers which prove it in this thread, including in my two-part post above. It's perfectly fine if you can't imagine it, but it is true.
P: 1,742
 Quote by DarMM That is it says: (Same Hilbert space) + (Normal transformation law) + (Translational invariance) => Non-interacting
Agreed.

Eugene.
P: 1,472
 Quote by DarMM If you integrate $:\phi_{0}^{4}:$ against a test function it always results in a densely-defined operator. This is in contrast to $\phi_{0}^{4}$, which after smearing does not give a densely-defined operator.
Could you please give me a specific reference where these statements are derived rigorously?
(Or are they easy to derive but I'm still missing something?)

-------
[Edit: I sense a note of frustration in your post #241, so I'd just like to say two things:

a) THANK YOU for going to the effort in those earlier posts, and THANK YOU in advance
for (hopefully) future episodes of the climbing-the-ladder saga.

b) I do want to understand these things rigorously, including how one goes about
proving convergence since (among other things) acquiring such functional-analytic
skill is clearly valuable in any other non-Wightman approach that one might wish to
investigate.]
-------
P: 278
 Quote by strangerep Could you please give me a specific reference where these statements are derived rigorously? (Or are they easy to derive but I'm still missing something?)
Oh, they're certainly not easy to derive. The fact that Wick products give densely defined operators was first proved by Jaffe in 1966 [1]. However I personally find a later derivation by Segal in 1967 to be much clearer [2]. Segal has a very erudite way of writing, which you will either love or find very difficult to read.

I should also say the theorem is much harder to prove in the Hamiltonian approach that I'm discussing. In the Functional-Integral (Path-Integral) approach it's just a matter of evaluating a single Feynman diagram. See Glimm and Jaffe's book Section 8.5, Proposition 8.5.1.

[1] Jaffe, A.: Wick polynomials at a fixed time. J. Math. Phys. 7, 1250-1255 (1966).

[2] Segal, I.: Notes toward the construction of nonlinear relativistic quantum fields, I. The Hamiltonian in two space-time dimensions as the generator of a C*-automorphism group. Proc. Natl. Acad. Sci. USA 57, 1178-1183 (1967).

 [Edit: I sense a note of frustration in your post #241, so I just like to say two things: a) THANK YOU for going to the effort in those earlier posts, and THANK YOU in advance for (hopefully) future episodes of the climbing-the-ladder saga. b) I do want to understand these things rigorously, including how one goes about proving convergence since (among other things) acquiring such functional-analytic skill is clearly valuable in any other non-Wightman approach that one might wish to investigate.
You're more than welcome, I will be glad to continue the series of posts.
P: 278
 Quote by DrFaustus DarMM -> Have more than one question, but will limit myself to a quick one for now. From your post it is clear that the infrared problem is the crucial one in 2D. How does such a construction come across in the algebraic framework where the IR and UV problems are disentangled? And, perhaps even more importantly, why would such an algebraic construction not be feasible in higher dimensions?
In a purely algebraic approach, this whole construction is quite easy to carry out. The C*-algebra of observables for the finite and infinite volume theories are exactly the same. The only difference is the representation of the algebra.

Let's say the representation of the algebra which gives you the finite volume theory is $$\rho_{\Lambda}$$. All $$\rho_{\Lambda}$$ are unitarily equivalent; only the infinite volume theory $$\rho_{\infty}$$ is unitarily inequivalent. Also, something which makes estimates and bounds easier: the $$\rho_{\Lambda}$$ are all unitarily equivalent to the Fock/Free rep $$\pi$$.

So the entire construction of the theory is "merely" a matter of passing from one rep to another.

In higher dimensions though things are not so easy. As I will explain in detail, in three dimensions one must renormalize due to ultraviolet divergences. In the algebraic approach this shows up in the fact that the ultraviolet cutoff theory and the theory with no UV cutoff have the same C*-algebra, but different reps. Put another way, even though the algebra is again unchanged, the finite volume reps $$\rho_{\Lambda}$$ are not unitarily equivalent to the Fock/Free rep $$\pi$$.

Also unlike the 2D case $$\rho_{\Lambda}$$ and $$\rho_{\Lambda'}$$ for $$\Lambda \neq \Lambda'$$ are unitarily inequivalent.

Let me sum up. In the Algebraic approach, ultraviolet divergences associated with mass and vacuum renormalization show up as changes in representations as you take some limit.
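[Editorial aside, as a rough illustration of where these renormalizations enter (schematic, standard $$\varphi^{4}$$ conventions, not DarMM's own formulas): in 2D normal ordering alone renders the cutoff Hamiltonian well defined, while in 3D an additional mass counterterm and a vacuum energy subtraction are needed,

$$H_{\Lambda}^{2D} = H_{0} + \lambda\int_{\Lambda}{:\varphi^{4}:\,dx}$$

$$H_{\Lambda}^{3D} = H_{0} + \lambda\int_{\Lambda}{:\varphi^{4}:\,dx} - \delta m^{2}(\lambda)\int_{\Lambda}{:\varphi^{2}:\,dx} - E_{\Lambda}$$

where $$\delta m^{2}$$ and $$E_{\Lambda}$$ diverge as the UV cutoff is removed. It is these divergent subtractions that force the changes of representation described here.]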

In the 2D case there is only ever one change in rep. If you take the UV limit, the rep stays the same. When you then take the infinite volume limit the rep change only shows up in the limit.

In the 3D case there is a change of rep in the UV limit. Then there is a further change of rep for every single value of $$\Lambda$$ in the infinite volume limit.

In the 4D case things become incredibly difficult; unlike all previous cases the algebra itself changes as you take the UV limit. It's not just a rep change. It's difficult enough to control the reps, but controlling the algebra is something truly difficult. The change in the algebra itself is associated with coupling constant renormalization.

(If anybody is curious, Field Strength renormalization is associated with something you can't really see in the Algebraic approach. I'll explain it when I do my post on the 4D field.)
 P: 399 From the book of Zeidler http://www.flipkart.com/book/quantum...dge/3540853766 I heard that all the 'divergent' quantities were encoded in a linear combination of Dirac delta functions $$\sum_{n\ge 0}c_{n}\delta^{n}(x)$$, so when taken at x=0 the expression was divergent. As far as I know, the Epstein-Glaser method allows you to recover the scattering S-matrix perturbatively, plus a distributional contribution involving derivatives of the Dirac delta; also, the fact that "two distributions cannot be multiplied" kept us from getting a finite result. Could anyone give a layman's intro to Epstein-Glaser theory?
 P: 90 DarMM, or anyone else for that matter: I'm trying to figure out the rigorous construction of $$\varphi_2^4$$ and I'm reading Glimm and Jaffe, "Quantum Field Theory and Statistical Mechanics - Expositions". Problem is, I find it a rather hard nut to crack. Tons of technicalities, and I'm also failing to grasp the big picture, i.e. how all the technicalities are supposed to fit together. So the question is: do you know of any "pedagogical" account of the rigorous construction of $$\varphi_2^4$$ in a Minkowski setting? That is, no Haag-Kastler nor Osterwalder-Schrader. Would really appreciate any references. DarMM, you mentioned you found your notes... I'm guessing they're not in electronic format, are they?

zetafunction -> I did not use the Epstein-Glaser approach, so this is just the idea of how it works. Essentially, if you know the time ordered product (TOP) of one Wick monomial, then by causality you know the TOP of 2 Wick monomials. And if you know the TOP of 2 WM, then you know the TOP of 3 WM. And so on. Here, when I say "you know" I mean "you can construct". For instance, in the case of the usual $$\varphi^4$$ theory, causality will allow you to construct the following chain of TOPs: $$T[:\varphi^4:] \longrightarrow T[:\varphi^4::\varphi^4:] \longrightarrow T[:\varphi^4::\varphi^4::\varphi^4:] \longrightarrow \dots$$ Double dots denote normal ordering and the fields are free fields. Now, the problem with the above chain is that you have products of distributions, which are generally ill defined for coinciding points. The extension of the TOP of 2 or more WM to the diagonal, i.e. to coinciding points, then amounts to renormalization. And the extension is also not unique, which corresponds to the usual renormalization ambiguities. Note that there are no divergences here; everything's finite.
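[Editorial aside, putting the inductive step sketched above into a formula (standard Epstein-Glaser causal factorization, my notation): the time-ordered products $$T_{n}$$ are constructed so that

$$T_{n}(x_{1},\dots,x_{n}) = T_{k}(x_{1},\dots,x_{k})\,T_{n-k}(x_{k+1},\dots,x_{n})$$

whenever none of $$x_{1},\dots,x_{k}$$ lies in the causal past of any of $$x_{k+1},\dots,x_{n}$$. This fixes $$T_{n}$$ everywhere except on the diagonal $$x_{i} = x_{j}$$; extending the distribution to the diagonal is the renormalization step, and it is unique only up to the usual finite ambiguities.]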