Insights Mathematical Quantum Field Theory - Renormalization - Comments

Urs Schreiber


Thank you for this, Urs. Is this the last chapter? Do you have also an ending (conclusions) prepared?
From what I understand Urs has a few more planned but will take a little break. I agree it's an incredible resource!
 

Urs Schreiber

Right, sorry, I should say something about it.

I did plan, and am still planning, to have at least one chapter that discusses the application of everything developed to QED, and hopefully another chapter with something on QCD and maybe also on perturbative quantum gravity.

But I have not typed up this material yet, and it now looks like I will be distracted by some other tasks for the moment, so this will take a while.

For this reason I have removed, for the time being, the announcement of the next chapter. But I gather that not just I but also my readers might appreciate a little break, so I suggest we get back to the series in a little while.

As for a conclusion: the series so far has laid out, in a hopefully fairly self-contained way, the present state of the rigorous, no-tricks understanding of QFT developed by the school of Brunetti, Duetsch, Fredenhagen et al., which has remained something like a public secret. I am hoping the series contributes to raising awareness that a non-black-magic formulation of pQFT exists, in which vexing questions can be made precise and transparent, and be unambiguously agreed on.

One example where this plays a role is the debate about the cosmological constant, which is being discussed in another thread here. Using the perturbative algebraic quantum field theory that this series lays out, one can go and rigorously work out what is going on there. The answer turns out to be in contrast to the informal folklore, hence we learn something (see Hack 15, section 3.2.1).
 
From what I understand Urs has a few more planned but will take a little break. I agree it's an incredible resource!
To put it mildly.

Stretches me to my limit but is amazing.

Thanks
Bill
 
Have had a quick glance.

I only ever really understood BPH renormalization and counterterms.

Conventional renormalization, as described in, say, the following, made me wince:
https://arxiv.org/pdf/1208.4700.pdf
'In other words, the bare mass has to diverge in such a way that its divergence cancels the divergent loop correction to yield a finite result. It amounts to shuffling the infinities to unobservable quantities like the bare mass. This is the part in renormalization theory which is very difficult to comprehend at the first sight'

It just left me cold. I thought, and still think: what a load of bollocks.

Your article may help me understand this a lot better.

Just a quick question: is your view similar to making the vacuum energy zero by normal ordering?

Thanks
Bill


 

Urs Schreiber

Conventional renormalization, as described in, say, the following, made me wince:
https://arxiv.org/pdf/1208.4700.pdf
'In other words, the bare mass has to diverge in such a way that its divergence cancels the divergent loop correction to yield a finite result. It amounts to shuffling the infinities to unobservable quantities like the bare mass.'
What words like this are trying to describe in prose is the simple mathematical statement of prop. 16.23:

This says that, given a UV cutoff in the form of a sequence of non-singular distributions ##\Delta_{F,\Lambda}## that approximate the true Feynman propagator ##\Delta_F## in that ##\Delta_F = \underset{\Lambda \to \infty}{\lim} \Delta_{F,\Lambda}## (def. 16.20), then:

1. while the limit as ##\Lambda \to \infty## of the corresponding effective S-matrices ##\mathcal{S}_\Lambda(g S_{int}) := \exp_{F,\Lambda}\left(\tfrac{1}{i\hbar}( g S_{int})\right)## (268) need not exist (so we "have a divergence")...

2. ...there is a way to re-adjust, depending on ##\Lambda##, the interaction action functional ##g S_{int}## by adding higher order "counter terms" ##g^2 S_{counter,\Lambda} := \mathcal{Z}_\Lambda(g S_{int}) -g S_{int}## (remark 16.24) such that the limit of the effective S-matrices applied to the re-defined coupling does exist

$$
\mathcal{S}(g S_{int})
:=
\underset{\Lambda \to \infty}{\lim} \mathcal{S}_{\Lambda} ( \underset{\mathcal{Z}_{\Lambda}(g S_{int})}{\underbrace{g S_{int} + g^2 S_{counter,\Lambda}}} )
$$

The proof of prop. 16.23 makes transparent how this works, inductively: given counterterms ##Z_{\leq n,\Lambda}## for ##k \leq n## vertices, one looks at the difference between the true S-matrix ##\mathcal{S}(g S_{int})## at the next order ##n+1## and the effective S-matrix at order ##n+1## applied to the interaction re-defined at order ##n##. The local part of that difference has to be the next counterterm ##Z_{n+1,\Lambda}##, while the non-local part has to vanish as ##\Lambda \to \infty## (equation 272).

(By the way, it seems that at this moment my notes are the only place where this proof is actually written out. Previously there were just the faint hints offered in DFKR14, A.1.)

In this way the counterterms manifestly absorb, at each order, the failure of the effective S-matrix ##\mathcal{S}_\Lambda## to converge to an actual S-matrix ##\mathcal{S}##. Conversely, one may like to state this in prose as: "their own divergence cancels the divergence of the loop corrections", the intuition being that the limit as ##\Lambda \to \infty## of the counterterms ##\mathcal{Z}_\Lambda## alone does not exist either; it is only the combination ##\mathcal{S}_\Lambda \circ \mathcal{Z}_\Lambda## of the S-matrix applied to the interaction with counterterms whose limit exists.
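For intuition, here is a toy caricature of that mechanism (this is just an illustration of mine, far simpler than the actual inductive construction in the proof, and not taken from the series): consider the "amplitude"

$$
I_\Lambda := \int_0^\Lambda \frac{d k}{k + m} = \ln\frac{\Lambda + m}{m}\,,
$$

which diverges as ##\Lambda \to \infty##. The "counterterm" ##c_\Lambda := \ln\frac{\mu}{\Lambda}## also diverges by itself, but the combination

$$
I_\Lambda + c_\Lambda
=
\ln\frac{(\Lambda + m)\,\mu}{\Lambda\, m}
\;\underset{\Lambda \to \infty}{\longrightarrow}\;
\ln\frac{\mu}{m}
$$

does have a limit. Neither term converges alone, only their combination does, and the finite answer depends on the arbitrary scale ##\mu## that entered the choice of counterterm, which is the toy analog of the renormalization freedom.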

##\,##

Generally, there is nothing in the notes that is fundamentally different from the traditional informal story, it's just that each ingredient of the traditional story is turned into something that makes sense.

For instance, UV cutoffs are traditionally discussed in terms of a would-be path integral of the free theory. Since that does not exist, here one asks: what is it that the intuition of the path integral really tries to do for us when we speak of UV cutoffs? And one realizes: it just means to turn the resulting Feynman propagator ##\Delta_F## from a distribution with singularities on the light cone into a non-singular distribution ##\Delta_{F,\Lambda}## (by the evident momentum cutoff, example 16.21), just not done "in the path integral", where it makes no sense, but right there in the formula for the Feynman propagator. And so, instead of building up mystery by insisting on non-existent path-integral imagery, we admit to reality and simply define a UV cutoff to be an approximation of the Feynman propagator by non-singular distributions ##\Delta_{F,\Lambda}## with ##\Delta_F = \underset{\Lambda \to \infty}{\lim} \Delta_{F,\Lambda}## (Duetsch 10, section 4). With that, everything becomes clear.
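Schematically, for the free scalar field such a momentum cutoff looks as follows (this is only an illustrative sketch, with convention-dependent prefactors suppressed; the precise statement is the one in example 16.21):

$$
\Delta_{F,\Lambda}(x,y)
\;\propto\;
\int_{|\vec k| \leq \Lambda} \frac{d^p \vec k}{(2\pi)^p}\,
\frac{1}{2 \omega_{\vec k}}\,
e^{- i \omega_{\vec k} \left| x^0 - y^0 \right| + i \vec k \cdot (\vec x - \vec y)}
\,,
\qquad
\omega_{\vec k} := \sqrt{\vec k^2 + m^2}
\,.
$$

For finite ##\Lambda## the integration region is compact and the integrand is smooth in ##x## and ##y##, so this is a non-singular distribution; the light-cone singularity of ##\Delta_F## reappears only in the limit ##\Lambda \to \infty##.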

This is just the same way of proceeding as that of Epstein-Glaser in 1973 (following Bogoliubov and Stueckelberg). Instead of insisting that the S-matrix comes from a would-be integration over field histories, they simply said: let's look for the tangible outcome suggested by that imagery. And they saw that all it really means to do for us is to say that an S-matrix scheme should satisfy causal factorization. This is simple and well defined, and it is all one actually needs to construct pQFT (remark 15.16).
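For reference, causal factorization is, schematically, the condition that

$$
\mathcal{S}(O_1 + O_2) \;=\; \mathcal{S}(O_1)\,\mathcal{S}(O_2)
$$

whenever the support of ##O_1## nowhere intersects the causal past of the support of ##O_2##, i.e. whenever ##O_1## happens "later than or spacelike to" ##O_2##; the precise formulation, with the relevant product and support conditions spelled out, is the one given in the series.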
 

Urs Schreiber

is your view similar to making the vacuum energy zero by normal ordering?
Yes, renormalization of time-ordered products is a phenomenon directly analogous to that of normal ordering in the Wick algebra.

(I wish this were "my" view, as you suggest, but instead this is what we learn from Brunetti-Duetsch-Fredenhagen et al.)

In both cases we have a product of observables that is fixed when all arguments are regular observables (def. 7.13) and we ask for the freedom of extending that product to all observables.

For the Wick algebra (chapter 14, section 1), which on regular observables may be given as the star product ##\star_{\tfrac{i}{2}\Delta}## along the causal propagator, the extension to the diagonal requires adding a symmetric contribution ##H## to ##\Delta## (by prop. 13.6); the choice that works gives the star product of the Wightman propagator, and that is the Wick algebra product. The role of ##H## is precisely to subtract those contractions that give normal ordering. (The formula for the star product of the Wightman propagator is the algebraic manifestation of Wick's lemma.)

Similarly, the time-ordered products are uniquely fixed on regular observables to be given by the star product of the Feynman propagator (chapter 4, section 2). Again there is a choice to be made as one extends from there to the full algebra of (microcausal) observables, and that choice is now called renormalization (prop. 16.1).
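Schematically (for a single scalar field, and suppressing the details of the notation used in the series), both products are of the same exponential form, differing only in which propagator ##P## is inserted:

$$
A \star_{P} B
\;:=\;
((-)\cdot(-)) \circ \exp\!\left( \hbar \int P(x,y)\, \frac{\delta}{\delta \mathbf{\Phi}(x)} \otimes \frac{\delta}{\delta \mathbf{\Phi}(y)}\, dvol(x)\, dvol(y) \right) (A \otimes B)
\,,
$$

with ##P## the Wightman propagator for the Wick algebra product and the Feynman propagator for the time-ordered product. In both cases the expression is unproblematic on regular observables; the extension beyond them is where the choice (normal ordering, respectively renormalization) enters.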

The close analogy between these two cases is meant to be summarized by the table shown at the beginning of chapter 14 (see also the text right above it there):

[Table: Wick algebra and time-ordered products]
 


vanhees71

I hope you'll also put a pdf version of the entire book (I'd call this great work a book, no matter that it appears as a series of Insights articles) somewhere. I'm of the old-fashioned kind who prefers to read such a work on paper rather than on the screen.
 

Urs Schreiber

I hope, you'll also put a pdf version of the entire book
A single-file version is available here. It takes a minute or two to load and is best viewed with Firefox, to avoid slow MathJax rendering. Then print to file as desired.
 
