Constructive QFT - current status

DarMM

Science Advisor
Gold Member
1,999
1,017
Ok, what different derivation do you have in mind? Are there papers/books understandable to the usual mortal QFT practitioner?
To be frank, no. It would require a good deal of advanced operator and measure theory and the end result for you would just be "Oh perturbation theory is fine".

Haag's theorem just implies the usual derivation using the unitary time evolution operator in the interacting picture isn't valid.
 

atyy

Science Advisor
13,500
1,609
Haag's theorem just implies the usual derivation using the unitary time evolution operator in the interacting picture isn't valid.
Why does the wrong derivation work (I think I've read that it reproduces the right derivation term by term)?

I've also heard that Fell's theorem explains why the wrong derivation works. Is there any substance to that?
 

A. Neumaier

Science Advisor
Insights Author
6,716
2,674
Unless we have something bizarre like this?

"The standard approach of trying to gain insight into such models by solving numerically for larger and larger lattice sizes is doomed to failure; the system could display all the features of a gapless model, with the gap of the finite system decreasing monotonically with increasing size. Then, at some threshold size, it may suddenly switch to having a large gap."
Undecidability is irrelevant in practice. Many interesting systems of Diophantine equations are known to have solutions or to be unsolvable, although the general problem is undecidable.
 

DarMM

Science Advisor
Gold Member
1,999
1,017
Why does the wrong derivation work (I think I've read that it reproduces the right derivation term by term)?

I've also heard that Fell's theorem explains why the wrong derivation works. Is there any substance to that?
Let's just look at the wrong derivation. I'll use the path integral approach where Haag's theorem becomes Nelson's theorem since it is easier to discuss.

We have the path integral:
$$\int{\mathcal{O}\left(\phi\right)d\nu}$$

We then separate the interacting measure into two components, the free measure ##d\mu## and an exponential ##e^{-S_{I}}## to get:
$$\int{\mathcal{O}\left(\phi\right)e^{-S_{I}}d\mu}$$

If we expand the exponential we then get an asymptotic series:
\begin{align*}
\int{\mathcal{O}\left(\phi\right)d\nu} & \approx \int{\mathcal{O}\left(\phi\right)d\mu} \\
& + \int{\mathcal{O}\left(\phi\right)\left(-S_{I}\right)d\mu} \\
& + \dots
\end{align*}

This asymptotic relation is valid in the continuum; it's simply that in the continuum ##d\nu \neq e^{-S_{I}}d\mu##, i.e. the interacting measure is not the free measure times a function. That's Nelson's theorem, the path integral version of Haag's theorem. So that part of the derivation doesn't work.

However, the derivation holds at every finite lattice spacing, and thus the asymptotic relation holds at all lattice spacings as well. You can take the continuum limit on both sides of the relation and show it continues to hold in that limit, and thus the perturbative series is valid in the continuum.

So one can consider the usual derivation to be shorthand: introduce a cutoff, expand the measure, obtain the asymptotic relation, and take the continuum limit on both sides. You just can't use that expansion method directly in the continuum. If you want to prove the relation directly in the continuum there are other methods, but they are much more mathematically involved.
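None of this needs heavy machinery to see in miniature. As an illustrative sketch only (a zero-dimensional caricature of the path integral, not the field-theory construction itself, with an arbitrarily chosen coupling ##\lambda = 0.01##), take the ordinary integral ##Z(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-\phi^{2}/2 - \lambda\phi^{4}}\,d\phi## and expand ##e^{-\lambda\phi^{4}}## term by term against the Gaussian measure. The resulting series is asymptotic but divergent, with an optimal truncation order:

```python
import numpy as np
from math import factorial

def dfact(n):
    """Double factorial n!! (with (-1)!! = 1), used for Gaussian moments."""
    r = 1
    while n > 1:
        r, n = r * n, n - 2
    return r

lam = 0.01  # coupling; small enough that a few orders are useful

# "Exact" Z(lam) = (1/sqrt(2*pi)) * integral of exp(-phi^2/2 - lam*phi^4)
phi = np.linspace(-10.0, 10.0, 200001)
dphi = phi[1] - phi[0]
Z_exact = np.sum(np.exp(-phi**2 / 2 - lam * phi**4)) * dphi / np.sqrt(2 * np.pi)

# Term-by-term expansion of exp(-lam*phi^4) under the free Gaussian measure:
# n-th term = (-lam)^n / n! * <phi^(4n)>_free, where <phi^(2k)>_free = (2k-1)!!
terms = [(-lam)**n / factorial(n) * dfact(4 * n - 1) for n in range(15)]
errors = np.abs(np.cumsum(terms) - Z_exact)

best = int(np.argmin(errors))
print(f"Z_exact = {Z_exact:.6f}")
print(f"optimal truncation order = {best}, error there = {errors[best]:.1e}")
print(f"error at order 14 = {errors[14]:.1e}")
```

The partial sums first approach the exact value, then blow up: exactly the asymptotic-but-divergent behaviour described above, with nothing hidden.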

Haag/Nelson's theorem just tells you the free and interacting theories are disjoint in the continuum. It doesn't, however, change the fact that the terms in the expansion of the interacting theory's moments in the coupling constant can be calculated with the free (Gaussian, in the path integral picture) theory.

So expanding the moments:
$$\mathcal{W}\left(x_1,...,x_n,\lambda\right) \rightarrow \sum_{n}\lambda^{n}\mathcal{G}_{n}\left(x_1,...,x_n\right)$$

The ##\mathcal{G}_{n}\left(x_1,...,x_n\right)## functions can be computed from Fock space/the Gaussian measure.
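To make "computed from the Gaussian measure" concrete, here is a small numerical check (an illustrative stand-in: the covariance matrix below is made up, playing the role of the free two-point function at four "points"). Wick's theorem reduces a Gaussian four-point function to a sum over pairings of two-point functions, and Monte Carlo sampling of the Gaussian measure confirms it:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary positive-definite covariance, standing in for the free
# propagator C_ij = <phi_i phi_j> at four "points"
C = np.array([[2.0, 0.5, 0.3, 0.1],
              [0.5, 1.5, 0.4, 0.2],
              [0.3, 0.4, 1.0, 0.2],
              [0.1, 0.2, 0.2, 1.2]])

# Wick's theorem: <phi_1 phi_2 phi_3 phi_4> = C12*C34 + C13*C24 + C14*C23
wick = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]

# Monte Carlo estimate of the same moment directly from the Gaussian measure
s = rng.multivariate_normal(np.zeros(4), C, size=2_000_000)
mc = np.mean(s[:, 0] * s[:, 1] * s[:, 2] * s[:, 3])

print(f"Wick pairing sum  : {wick:.4f}")
print(f"Monte Carlo moment: {mc:.4f}")
```

The two numbers agree to Monte Carlo accuracy; this pairing structure is what Feynman diagrams encode order by order.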

There is one side effect of this disjointness that shows up when using the free theory to compute the terms: the need to renormalize them.

The perturbative series ends up being only asymptotic, of course, not convergent, though that happens in NRQM as well. In lower dimensions, for some theories, you can use the Borel transform to sum the series, and thus existence of the interacting theory can be proved directly from perturbation theory.

In 4D, but also for Yang-Mills in lower dimensions, there are poles in the Borel plane preventing resummation. The poles mean one has to take a contour around them to obtain the interacting theory, but there are infinitely many such contours, introducing an ambiguity of order ##\mathcal{O}\left(e^{-\frac{1}{\lambda}}\right)##. Some poles come from instantons and others from renormalons. Renormalons are finite terms resulting from coupling-constant renormalization that give the perturbative series an extra ##n!## growth, which leads to poles in the summed Borel series.
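For the Borel-summable case, the mechanics can be sketched in zero dimensions (a hedged toy, not Yang-Mills: for ##Z(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-\phi^{2}/2-\lambda\phi^{4}}\,d\phi## the Borel singularity sits on the negative axis, so no contour ambiguity arises). One divides out the ##n!## growth, Padé-approximates the Borel transform, and integrates it back against ##e^{-t}##:

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade
from scipy.integrate import quad

def dfact(n):
    """Double factorial n!! (with (-1)!! = 1)."""
    r = 1
    while n > 1:
        r, n = r * n, n - 2
    return r

lam = 0.1   # coupling large enough that naive truncation is useless
N = 11      # number of perturbative coefficients used

# Perturbative coefficients: Z(lam) ~ sum_n c_n lam^n, c_n = (-1)^n (4n-1)!!/n!
c = [(-1)**n * dfact(4 * n - 1) / factorial(n) for n in range(N)]

# Borel transform: divide out the n! growth, then Pade-approximate it
b = [c[n] / factorial(n) for n in range(N)]
p, q = pade(b, 5)  # [5/5] Pade approximant of the Borel transform

# Borel sum: Z(lam) = integral_0^inf e^{-t} B(lam*t) dt
Z_borel, _ = quad(lambda t: np.exp(-t) * p(lam * t) / q(lam * t), 0, np.inf)

# Compare against direct quadrature of the toy integral
Z_exact, _ = quad(lambda x: np.exp(-x**2 / 2 - lam * x**4) / np.sqrt(2 * np.pi),
                  -np.inf, np.inf)
print(f"Borel-Pade sum : {Z_borel:.6f}")
print(f"Direct integral: {Z_exact:.6f}")
```

In 4D the analogue of this procedure fails at the third step: the instanton and renormalon poles land on the positive axis of the Borel plane, forcing the contour choice described above.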
 

vanhees71

Science Advisor
Insights Author
Gold Member
12,971
4,997
Another simple argument given in

A. Duncan, The conceptual framework of quantum field
theory, Oxford University Press, Oxford (2012).

uses a finite volume with periodic spatial boundary conditions and works in momentum space. Then the infinite-volume limit is taken at the very end for the transition rates.

"Regularizations" like this, or "latticizing" the theory, etc., are things physicists intuitively do in a naive way. It's of course good to know that one can explain (more) rigorously why this actually works.
 
615
372
Why does the wrong derivation work (I think I've read that it reproduces the right derivation term by term)?
It works heuristically, but not necessarily mathematically.
You just can't use that expansion method directly in the continuum. If you want to prove the relation directly in the continuum there are other methods but they are much more mathematically involved.
Being constructively inclined, I prefer those other methods, i.e. a non-perturbative analysis. I would even go so far as to say that, mathematically speaking, a non-perturbative analysis is necessary in order to prove existence at all, since perturbation theory is known to break down for many classes of problems, including many where the series is merely asymptotic, not convergent.

The failure of those doing the perturbative expansion is then essentially that they do not realize they are expanding the power series under the ad hoc assumption that the independent variable is fixed, purely in order to be able to make an empirical comparison, i.e. a mathematically illegitimate assumption which is in a specific sense completely independent of experiment!
There is one side effect of this disjointness that shows up when using the free theory to compute the terms: the need to renormalize them.

The perturbative series ends up being only asymptotic of course, not convergent. Though that happens in NRQM as well. In lower dimensions for some theories you can use the Borel transform to sum the series and thus existence of the interacting theory can be proved directly from perturbation theory.

In 4D, but also for Yang-Mills in lower dimensions, there are poles in the Borel plane preventing resummation. The poles mean one has to take a contour around them to obtain the interacting theory, but there are infinitely many such contours, introducing an ambiguity of order ##\mathcal{O}\left(e^{-\frac{1}{\lambda}}\right)##.
The 'some theories' for which this can be proved require both linearity of the space of solutions and linearity of the equations; if either assumption fails, then perturbation theory - beyond an initial small, semi-accurate range of validity - quickly fails once the independent variables are no longer assumed ad hoc to be fixed. In this sense, perturbation theory is just a more sophisticated version of a heuristic technique such as the small-angle approximation.
 
615
372
Another simple argument given in

A. Duncan, The conceptual framework of quantum field
theory, Oxford University Press, Oxford (2012).

uses a finite volume with periodic spatial boundary conditions and works in momentum space. Then the infinite-volume limit is taken at the very end for the transition rates.

"Regularizations" like this, or "latticizing" the theory, etc., are things physicists intuitively do in a naive way. It's of course good to know that one can explain (more) rigorously why this actually works.
Using periodic boundary conditions, either before or after a Fourier transform, implicitly imports topological phase-space analysis, which severely complicates the issue: it introduces new existence and uniqueness questions for the periods, whose resolution requires the full arsenal of nondimensionalization, bifurcation theory and index theory.

As Duncan makes explicitly clear in his masterful book, there is good reason to be suspicious of the ultimate validity of perturbative analysis, either as non-optimized perturbation theory, in which case the regularization based on the Borel transform is generally quite fragile, or as optimized perturbation theory, which generally isn't useful in the context of field theory.

From applied mathematics all of these issues are already well known, with physicists often merely introducing novel verbiage which unnecessarily complicates these purely mathematical issues and which tends to be ultimately unjustifiable (e.g. wanting to make a comparison with empirical phenomenology); this is exactly why non-perturbative analysis was invented in the first place.

Even in physics this is old news; already during the 60s-70s, the recognition that perturbative arguments were unjustifiable led to a split of the QFT community into field theorists and S-matricists, as Shankar describes from his time under S-matrix purist Geoff Chew at Berkeley. The immediate recognition of the extremely contingent nature of renormalization by constructive mathematicians (and constructively inclined physicists) directly led to the constructive QFT programme, which as we can still see is nowhere near completion.
 

Jimster41

Gold Member
724
78
Thanks, now it's intuitive to me too. :smile:
That's yet another demonstration that Tong's lectures are really great.
I'm enjoying the lecture in the link, but it's already confusing when he says "little things affect big things. Big things don't affect little things." Big things certainly affect big things, don't they? Otherwise classical mechanics would be useless, which is obviously false. But there are no big things not made of little things, granted. So big things affect big things, but only through little things. Wouldn't it have been better to say that the map between all things big and little is little, or something like that? The real effect is that you have to allow for big things to affect little things as the big things interact with each other. Co-evolution of phenotypes via the genome is an important example of how this happens, at least in the evolution of one regime of the physical world. What precludes the possibility that it is more common? To me it seems potentially relevant to the subsequent question of what causes such things as discontinuous phase changes in continuously evolving systems - like some back-reaction, some big thing shaking the almost-boiling pot.

Moreover, given how confusing that was, I am distracted when he then goes off into the (somewhat familiar now) partition-function description of the Ising model and its free energy. I am distracted because it already assumes a notion of entropy maximization - the natural Hamiltonian, as it were, classical a priori probability - which, as sensible as it is, seems to me a puzzle that shouldn't be taken as axiomatic if the question is how to understand what makes up microscopic space-time. I get that we started with an entropy gradient in our universe, but there is this weird way it always gets invoked as natural and then taken as an input to models. I get that this is super practical and not wrong, but doesn't it potentially confuse the question of how that entropy gradient is managed microscopically, how it relates to the evolution of real things in "proper time", and therefore to the puzzle of differential microscopic GR (the twin age difference)?
 

Jimster41

Gold Member
724
78
The lecture is definitely more interesting now getting into universality classes.
 

Jimster41

Gold Member
724
78
Today I killed a mosquito, so the second statement is clearly wrong. :oldbiggrin:

Okay... just to be clear, I was quoting the lecturer (p. 3).
And I agree; as I said in the next sentence, I don't know why he said that. It was distracting and seems wrong.

Also, the QM ensemble associated with you (there is no other you) killed that poor mosquito's ensemble. Or perhaps you two are now a tiny, tiny bit entangled. ☹
 
