Insights Causal Perturbation Theory - Comments

  • #51
A. Neumaier said:
It is a physically motivated renormalization scheme replacing old-fashioned QFT, making regularization unnecessary. Why should one still want to regularize?
Well, in a way you also regularize by using the "smeared" operators, and that's very physical. Already in classical electrodynamics plane waves are mode functions, i.e., a calculational tool to solve the Maxwell equations for physically realistic fields, that is, fields with finite total energy, momentum, and angular momentum. It's just using generalized eigenfunctions of the appropriate self-adjoint operators (in this case the d'Alembert operator).
Indeed, of all the attempts to make QFT mathematically rigorous, this causal-perturbation-theory approach with "smearing" of the distribution-like operators is the most physically plausible, and I have no quibbles with it in principle. I just don't see what one gains from it physics-wise, i.e., which physically observable quantities can be calculated with it that cannot be calculated within the standard scheme. In standard PT dim. reg. is very convenient as a regularization scheme, and the renormalized theory is anyway independent of the regularization scheme. You can also do the subtraction in a BPHZ-like manner without intermediate regularization, but that can be tricky, and dim. reg. is just more convenient.
 
  • #52
vanhees71 said:
So everything from QFT is within the standard formalism of thermal correlation functions. There's no need to physically interpret transient states; all "particles" observed are calculated in the sense of the usual concept of asymptotic free states.

The description of the bulk medium is in terms of semi-classical transport theories, behind which, when looked at from the point of view of many-body QFT, also lies the interpretation of "particles" in terms of asymptotic free states.
Transient states are important, but they are states of the interacting quantum field, not particle states. Particle states are only meaningful as asymptotic states. I didn't claim any relation of field operators to an unphysical interpretation of transient states as particle states; transient states are more complex objects.

But I claimed that N-particle states other than those occurring in the textbook description of QFT are relevant. You confirm this by referring to retarded correlation functions of currents:
vanhees71 said:
The QFT observable here is the thermal electromagnetic-current autocorrelation function, i.e., the "retarded" expectation value of
These are not covered by the BPHZ approach to renormalization.
vanhees71 said:
in a way you also regularize by using the "smeared" operators
No; these are just arbitrary functions, dummy parameters in the causal approach, analogous to the ##x## in a perturbative calculation of ##e^x##.
vanhees71 said:
In standard PT dim. reg. is very convenient as a regularization scheme, and the renormalized theory is anyway independent of the regularization scheme.
It is a physically meaningless regularization scheme, as ##4-\epsilon## dimensional space is unphysical and not even mathematically well-defined. Moreover, the independence of the regularization scheme is an assumption, not something proved; on the contrary, there are disputes, since in certain situations there seem to be disagreements.
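For concreteness, the standard one-loop (Euclidean) master integral, a textbook formula quoted here only for illustration, shows what the scheme actually does:
$$\int \frac{d^dk}{(2\pi)^d}\,\frac{1}{(k^2+\Delta)^2}=\frac{\Gamma(2-d/2)}{(4\pi)^{d/2}}\,\Delta^{d/2-2}.$$
The UV divergence reappears as the pole ##\Gamma(2-d/2)\sim 2/(4-d)## as ##d\to 4## and is subtracted from the analytically continued expression; no actual ##(4-\epsilon)##-dimensional space is ever constructed.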
 
  • #53
At least in equilibrium many-body QFT you can show that order by order all you need to renormalize are the vacuum pieces of the proper vertex functions. In this sense BPHZ is sufficient.

Do you have an example where two properly applied regularization schemes lead to different results for the renormalized quantities?

I thought it's self-evident that with fixed renormalization conditions the proper vertex functions are unique and then the physical quantities like S-matrix elements, pole masses defining physical masses of particles/resonances, etc. are independent of this scheme. The reason is that you can use, in principle, some subtraction scheme à la BPHZ without any intermediate regularization, using just the renormalization conditions for the proper vertex functions.
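Schematically, for a single superficially divergent one-loop diagram the BPHZ prescription subtracts the Taylor expansion of the integrand in the external momenta; e.g., for the logarithmically divergent "fish" diagram of ##\phi^4## theory,
$$I_R(p)=\int\frac{d^4k}{(2\pi)^4}\left[\frac{1}{(k^2-m^2+i\epsilon)((k+p)^2-m^2+i\epsilon)}-\frac{1}{(k^2-m^2+i\epsilon)^2}\right],$$
where the bracket falls off like ##|k|^{-5}##, so the integral converges without any intermediate regularization and implements the renormalization condition ##I_R(0)=0##. Nested divergences require Zimmermann's forest formula, but the idea is the same.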
 
  • #54
vanhees71 said:
At least in equilibrium many-body QFT you can show that order by order all you need to renormalize are the vacuum pieces of the proper vertex functions. In this sense BPHZ is sufficient.

Do you have an example where two properly applied regularization schemes lead to different results for the renormalized quantities?
It depends on the meaning of the undefined term 'properly applied'. By convention, regularization schemes are deemed properly applied when they lead to the same results for the renormalized quantities. This is the only known criterion.

Sometimes one gets different results, however...

Then one has to investigate why they differ and which result (if any) is to be trusted. I believe that I read a couple of papers with examples where dimensional regularization was thought to be in error. But I need more time to check this and to retrieve the references.

vanhees71 said:
I thought it's self-evident that with fixed renormalization conditions the proper vertex functions are unique and then the physical quantities like S-matrix elements, pole masses defining physical masses of particles/resonances, etc. are independent of this scheme.
It is self-evident only until counterexamples are found, which force one to be more specific about how to "properly apply" the technique beyond what can be found in standard sources.

It would be self-evident if the schemes were rigorously derived from an undisputed rigorous definition of the theory. But the latter does not exist yet.

It is similar to what happens with self-adjointness. You acknowledge in your lecture notes that it is a necessary property of Hamiltonians. But you seem to take it as self-evident that all Hamiltonians actually used have this property. At least you never give sufficient conditions that would allow readers to check for themselves the self-adjointness of Hamiltonians written down formally. Usually the property holds, but there are exceptions, and they are recognized heuristically only through the faulty results they produce. Few physicists care to give a proof of the self-adjointness of the Hamiltonians they use. I even wonder how you would check one for self-adjointness.
 
Last edited:
  • #55
Well yes, we are physicists, not mathematicians.

One example where you pretty often find wrong statements in introductory textbooks is the box with rigid boundary conditions and the claim that there is a momentum operator. ;-)
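A sketch of the standard argument: on ##L^2([0,L])## with rigid walls, ##\psi(0)=\psi(L)=0##, the candidate momentum operator ##p=-i\hbar\,d/dx## satisfies
$$\langle\phi,p\psi\rangle-\langle p\phi,\psi\rangle=-i\hbar\left[\phi^*(x)\psi(x)\right]_0^L=0$$
for every ##\phi## in the domain of the adjoint, without any boundary condition on ##\phi##. So the adjoint has a strictly larger domain: ##p## is symmetric but not self-adjoint, and its self-adjoint extensions (quasi-periodic boundary conditions ##\psi(L)=e^{i\theta}\psi(0)##) are incompatible with the rigid walls.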

I still don't know which examples you have in mind where the standard regularization techniques lead to erroneous results.

One example may be the ##\gamma^5## problem in dim. reg. and the chiral anomaly, but this has to be resolved anyway by arguing which current has to be anomalously broken and which one must stay conserved.
 
  • #56
vanhees71 said:
I still don't know which examples you have in mind where the standard regularization techniques lead to erroneous results.
It is called more politely "renormalization ambiguities".
Wu said:
The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of scheme or other theoretical conventions.
This is a quote from the abstract of https://arxiv.org/pdf/1302.0599.pdf. It says explicitly that this is a desirable assumption, not an achieved fact. See also
Even when it can be done, showing equivalence of two renormalization schemes is usually a highly nontrivial matter leading to a publication. This means that assessing whether a renormalization procedure is "properly applied" is in all but the simplest cases more an art than a science.
 
Likes dextercioby and vanhees71
  • #57
Yes, sure, that's a much more puzzling problem than what I had in mind with my statement above. What I meant is that for a given Feynman diagram for a proper vertex function you get a unique answer, given a renormalization scheme (usually involving a renormalization scale), independent of the intermediate regularization you use. So these proper vertex functions and the S-matrix elements within the chosen renormalization scheme are independent of the chosen regularization, but of course dependent on the chosen renormalization scheme and on the renormalization scale.

The S-matrix is of course only independent of the renormalization scale up to the order of the expansion parameter (couplings or ##\hbar##, number of loops...) taken into account, and one can resum the leading logarithms by using RG equations to define running couplings that minimize this dependence.
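For instance, at one loop in QCD the RG equation and the resulting running coupling that resums the leading logarithms read
$$\mu^2\frac{d\alpha_s}{d\mu^2}=-\frac{\beta_0}{4\pi}\,\alpha_s^2+\dots,\qquad \alpha_s(\mu^2)=\frac{4\pi}{\beta_0\,\ln(\mu^2/\Lambda^2)},\qquad \beta_0=11-\tfrac{2}{3}n_f,$$
with ##n_f## the number of active quark flavors.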

The problem of the uncertainty concerning the dependence on the renormalization scheme and on the corresponding renormalization scale is of course also not strictly solved (nor is the problem of how to define an exact QFT in 1+3 dimensions). To estimate the uncertainty there are some hand-waving rules, e.g., setting the scale around the energy scale of the experiment you want to describe and varying the renormalization scale by some factors around this value, leading to some "uncertainty band".

In thermal pQCD often the scale is chosen at ##2\pi T## (##T## temperature) and varied around this value by a factor 2 up and down. It's also a problem that even for the bulk thermodynamical quantities and the equation of state the perturbative series (as far as it can be evaluated anyway) is "not well convergent".

All these problems are far from being solved. Does "causal perturbation theory" offer new ansätze for this problem? That would of course be very interesting, but in the end don't you just get the proper vertex functions of the standard formalism within a special way to regularize, or a special renormalization scheme somehow defined by the "smearing procedure"?

Perhaps I should have a closer look at Scharf's book again. When I last looked at it some years ago, I had the impression that it's just another technique to get rid of the problems of UV divergences, leading in the end to the same results as the standard physicists' methods, which are a lot simpler to use.
 
  • #58
Does the careful splitting of causal distributions explained in chapter 3 of Scharf's book involve the same dispersion relations and analytic continuation techniques (given with a reference to vol. 2 of Reed and Simon's Methods of Modern Mathematical Physics) as those used for local quantum fields in the Wightman axioms, and also in Green's functions to guarantee positive energy under time reversal?
 
Likes vanhees71
  • #59
Tendex said:
Does the careful splitting of causal distributions explained in chapter 3 of Scharf's book involve the same dispersion relations and analytic continuation techniques (given with a reference to vol. 2 of Reed and Simon's Methods of Modern Mathematical Physics) as those used for local quantum fields in the Wightman axioms, and also in Green's functions to guarantee positive energy under time reversal?
As far as I can tell, yes. These techniques are very general.
 
Likes vanhees71
  • #60
But then it's indeed equivalent to the standard techniques for evaluating proper vertex functions within a given renormalization scheme.
 
  • #61
vanhees71 said:
But then it's indeed equivalent to the standard techniques for evaluating proper vertex functions within a given renormalization scheme.
In practice, that's what it looks like to me: just more mathematically sophisticated, in that it perturbatively avoids the interaction picture and gets rid of the UV divergences in a more elegant way, if you like. I found this video that talks about this equivalence, stressing the renormalization and distributional issues.
 
Likes A. Neumaier and vanhees71
  • #62
Tendex said:
In practice, that's what it looks like to me: just more mathematically sophisticated, in that it perturbatively avoids the interaction picture and gets rid of the UV divergences in a more elegant way, if you like. I found this video that talks about this equivalence, stressing the renormalization and distributional issues.
The moral of this primarily historical and conceptually not demanding lecture on the causal approach is given at minute 47:34:
Michael Miller said:
There are compelling reasons to adopt an effective field interpretation of QFT, but providing the only available solution to the UV problems of the theory is not one of them.
(since causal perturbation theory settles these in a more convincing manner)
 
Last edited:
Likes vanhees71
  • #63
Tendex said:
I found this video
What a weird, empty talk. It's "conceptually not demanding" because it's almost totally content-free. That's an hour that would have been better spent studying Scharf's textbook.
 
Likes dextercioby
  • #64
strangerep said:
What a weird, empty talk. It's "conceptually not demanding" because it's almost totally content-free. That's an hour that would have been better spent studying Scharf's textbook.
Well, it is not quite empty. It explains how nonlinear operations with distributions naively force infinite constants and, properly done, lead to undetermined coefficients, already in simpler situations than quantum field theory. Thus it explains why the well-known distributional nature of quantum fields (even free ones) must lead to difficulties and to the need for renormalization.
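A standard one-dimensional illustration (not taken from the talk itself): the function ##1/x## is not locally integrable at ##x=0##, and every extension of it to a distribution on all of ##\mathbb{R}## with the appropriate singular order has the form
$$\langle T,\varphi\rangle=\lim_{\epsilon\to 0}\int_{|x|>\epsilon}\frac{\varphi(x)}{x}\,dx+c\,\varphi(0),$$
i.e., principal value plus an arbitrary multiple ##c\,\delta(x)##. The undetermined constant ##c## is the precise analogue of a renormalization constant, to be fixed by additional normalization conditions.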

By the way, the author is a philosopher with a PhD in physics.
 
Likes dextercioby and Tendex
  • #65
A. Neumaier said:
Well, it is not quite empty. It explains how nonlinear operations with distributions naively force infinite constants [...]
I suspect you see those explanations in the talk because you've already been thinking about the subject for ages.

A. Neumaier said:
By the way, the author is a philosopher with a PhD in physics.
Yes, I was aware. To me, that fact explained his tendency to waffle on for quite a while without saying very much. :oldwink:
 
Likes vanhees71 and weirdoguy
  • #66
The talk was given in the context of a philosophy of physics meeting, and the audience was not just theoretical physicists but also philosophers, so that is the level and "style" it was geared towards. I agree with Neumaier that some important points about distributions and renormalization were clearly made.

I must say, though, that I much prefer the directly non-perturbative way of dealing with these renormalization-related concepts, namely the Källén-Lehmann spectral representation. The explanations in the talk appealed to me because they were general enough to relate to free fields, distributions, and the constraints imposed on non-perturbative interacting fields, and not just to the particular perturbative strategy of Epstein-Glaser, which must ultimately be justified by the former to exist.
 
Last edited:
  • #67
Demystifier said:
Epstein-Glaser?
I don't know very much about these topics, so please enlighten me if I'm wrong. But if I understand correctly, Epstein-Glaser and the like merely describe in precise terms how renormalization is to be done, without making any attempt to justify the procedure "from first principles". They don't describe what it is that we are trying to calculate using these prescriptions. They provide an answer, but not the question!

Indeed, this will be true of any formalism that begins with a Lagrangian. Renormalization does not allow us to calculate the observable values of masses and coupling constants from the Lagrangian, meaning that predicting physical values requires some additional input. A properly specified theory would include "fundamental" parameters that eventually fix the measured values. If all we have is a Lagrangian, we simply do not know what we are calculating.

Another aspect of the same thing (I believe) is that Epstein-Glaser is fundamentally perturbative - that is, the power series is the only output; there is no function that the series is intended to approximate!

So I'm not sure these methods bring us any closer to explaining what we actually mean by QFT interaction terms...
 
Likes Tendex
  • #68
maline said:
this will be true of any formalism that begins with a Lagrangian.
Causal perturbation theory does not work with Lagrangians!

maline said:
So I'm not sure these methods bring us any closer to explaining what we actually mean by QFT interaction terms...
In the causal approach, the meaning is precisely given by the axioms for the parameterized S-matrix. The construction is at present perturbative only. Missing are only suitable summation schemes for which one can prove that their result satisfies the axioms nonperturbatively. This is a nontrivial and unsolved step but not something that looks completely hopeless.
 
  • #69
Tendex said:
In your insights article you write: "To define particular interacting local quantum field theories such as QED, one just has to require a particular form for the first order approximation of S(g). In cases where no bound states exist, which includes QED, this form is that of the traditional nonquadratic term in the action, but it has a different meaning."
Exactly in what way is the meaning different from the one in the traditional action? Does the approximation follow a local action principle or not?
It looks to me like it just uses a renormalized Lagrangian instead of the usual bare one, since it changes the moment when the renormalization is performed to an earlier step instead of the usual later one. But the local action is still there in the background, just more rigorously renormalized from the start.
One can force causal perturbation theory into a Lagrangian framework; then it looks like this.

But nothing in causal perturbation theory ever makes any use of the Lagrangian formalism or of Lagrangian intuition. No action principle is visible in causal perturbation theory; it is not even clear how one should formulate the notion of an action!

Instead, causal perturbation theory starts with a collection of well-informed axioms for the parameterized S-matrix (something not at all figuring in the Lagrangian approach) and exploits the relations that follow from a formal expansion of the solution of these equations around a free quantum field theory. The latter need not be defined by a Lagrangian either but can be constructed directly from irreducible representations of the Poincare group, as in Weinberg's book (where Lagrangians are introduced much later than free fields).
 
  • #70
Tendex said:
To define particular interacting local quantum field theories such as QED, one just has to require a particular form for the first order approximation of S(g). In cases where no bound states exist, which includes QED, this form is that of the traditional nonquadratic term in the action, but it has a different meaning.
This statement needs to be corrected. It indicates that the S-matrix for QED includes a term of first order in ##e##, when in fact the first term is of order ##e^2##. There are no one-vertex processes, because it isn't possible for all three particles (two fermions and a photon) to be on-shell.

To see this, assume WLOG that the photon is outgoing, and consider the energy in the rest frame of the incoming fermion.
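Spelled out: in that frame the initial energy is ##m##, while momentum conservation gives ##\mathbf{p}+\mathbf{k}=0## for the outgoing fermion and photon, so the final energy is
$$E_{\rm final}=\sqrt{m^2+\mathbf{k}^2}+|\mathbf{k}|>m\quad\text{for all }\mathbf{k}\neq 0,$$
and energy conservation rules out the process.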
 
  • #71
maline said:
in fact the first term is of order ##e^2##. There are no one-vertex processes, because it isn't possible for all three particles (two fermions and a photon) to be on-shell.
This is true but irrelevant for causal perturbation theory.

First order means first order in the function ##g## in terms of which the expansion is made, not first order in coupling constants. Your argument does not apply since the parameterized S-matrix ##S(g)## is just a generating function, not an S-matrix in the physical sense. Only the adiabatic limit, where ##g\to 1##, has such an interpretation. I added a corresponding remark to my Insight article.
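For reference, in Scharf's notation the expansion reads
$$S(g)=\mathbf{1}+\sum_{n=1}^{\infty}\frac{1}{n!}\int d^4x_1\cdots d^4x_n\,T_n(x_1,\dots,x_n)\,g(x_1)\cdots g(x_n),$$
where the ##T_n## are time-ordered operator-valued distributions; for QED the first-order input is ##T_1(x)=ie\,{:}\bar\psi(x)\gamma^\mu\psi(x){:}\,A_\mu(x)##, and "first order" refers to the ##n=1## term of this series.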
 
  • #72
A. Neumaier said:
First order means first order in the function ##g## in terms of which the expansion is made, not first order in coupling constants.
##g## is the test function that switches on the interaction, correct? So anything first-order in ##g## is also first-order in ##e##.
A. Neumaier said:
Your argument does not apply since the parameterized S-matrix ##S(g)## is just a generating function, not an S-matrix in the physical sense.
Oh, I think I see. You are saying that since ##g(x)## is not translation-invariant, energy and momentum are not conserved by ##S(g)##, and so first-order processes are indeed possible. These terms will then vanish in the IR limit.
 
Likes A. Neumaier
  • #73
maline said:
##g## is the test function that switches on the interaction, correct? So anything first-order in ##g## is also first-order in ##e##.

Oh, I think I see. You are saying that since ##g(x)## is not translation-invariant, energy and momentum are not conserved by ##S(g)##, and so first-order processes are indeed possible. These terms will then vanish in the IR limit.
Yes. This is also the point where the attempt at a Lagrangian interpretation breaks down.
 
  • #74
maline said:
since ##g(x)## is not translation-invariant, energy and momentum are not conserved by ##S(g)##, and so first-order processes are indeed possible.
But this raises another issue: Per the axioms, ##S(g)## should not take us out of single-particle subspaces. But without 4-momentum conservation, won't a single fermion have an amplitude to spontaneously emit photons?
 
  • #75
maline said:
But this raises another issue: Per the axioms, ##S(g)## should not take us out of single-particle subspaces. But without 4-momentum conservation, won't a single fermion have an amplitude to spontaneously emit photons?
That's only in the infinite volume limit ##g=1##; the perturbative construction won't reach it.
Then again, I feel the departure from the naive Lagrangian picture is obtained basically because causal perturbation theory exploits this perturbative artifact for its own benefit, kind of like a sanitized version of Feynman's quick and dirty diagrams; in the latter, since renormalization is deferred to a later stage, you get off-shell internal lines as artifacts of perturbation theory instead, to its mathematical detriment. It's all easily seen as a perturbative trade-off between 4-momentum conservation and "on-shellness", where keeping the latter by using renormalized distributions is mathematically more elegant.
 
  • #76
Tendex said:
That's only in the infinite volume limit ##g=1##; the perturbative construction won't reach it.
What? I am talking about processes like ##e \rightarrow e+ \gamma.## When ##g=1## this cannot happen because of conservation laws, but otherwise it should occur already at first order in perturbation theory.
 
  • #77
maline said:
What? I am talking about processes like ##e \rightarrow e+ \gamma.## When ##g=1## this cannot happen because of conservation laws, but otherwise it should occur already at first order in perturbation theory.
Sure, they occur. My comment was about you using axioms for ##S(g)## when you ought to use those for the formal power series in ##g## only. Then there is no issue that I see. If you are asking about spontaneous emission as a non-perturbative effect, that's out of the scope of causal perturbation theory, and you don't want to use the off-shell argument from the traditional perturbative approach, since avoiding it is mainly what brings us to CPT.
In fact, in the 50 years since the Epstein-Glaser formulation, this causal approach has not led to a single clue about a non-perturbative theory of interacting fields.
 
  • #78
maline said:
But this raises another issue: Per the axioms, ##S(g)## should not take us out of single-particle subspaces. But without 4-momentum conservation, won't a single fermion have an amplitude to spontaneously emit photons?
You are right. My axioms, inferred from the first (1989) edition of Scharf's QED book, were too strong; I corrected them by weakening the two stability axioms. The second (1995) edition silently corrected this mistake and assumes it only (quite implicitly) in the adiabatic limit ##g\to 1##, discussed there in Section 3.11 and later in Section 4.1. I had not noticed this before, since I didn't reread all the details in the second edition. I added detailed references to the Insight article indicating where the axioms are stated or used.
Tendex said:
My comment was about you using axioms for ##S(g)## when you ought to use those for the formal power series in ##g## only.
The extraction of the nonperturbative axioms from Scharf was not done by Scharf but by me in the Insight article. These axioms make sense nonperturbatively, even though the construction based on them is only perturbative. Presumably some resummation scheme may turn the latter into a provably full construction, though how to do this is presently unsolved.
Tendex said:
In fact, in the 50 years since the Epstein-Glaser formulation, this causal approach has not led to a single clue about a non-perturbative theory of interacting fields.
This cannot be held against it. Other approaches also did not lead to a single clue.
 
Last edited:
Likes dextercioby and maline
  • #79
Of course, axiomatic QFT also cooks only with water: you have to take the proper limit of the regularization for the renormalized quantities, as in the sloppier standard approaches.
 
Likes Tendex
  • #80
vanhees71 said:
Of course, axiomatic QFT also cooks only with water: you have to take the proper limit of the regularization for the renormalized quantities, as in the sloppier standard approaches.
Not quite. There is no UV regularization in causal perturbation theory. The only water in it is the lack of convergence of the asymptotic series for ##S(g)##. Thus without exact resummation one has no clue what the nonperturbative ##S(g)## would be.
 
Last edited:
  • #81
What is the "smearing" of the operators other than a "regularization of the distributions"?
 
Likes Tendex
  • #82
In the article, you mention the hope that a "suitable summation scheme" will be found for Causal Perturbation Theory, thus proving the rigorous existence of ##S(g)##. To me this hope seems unsupported and wildly optimistic. Remember that this is a power series in ##g##, so what we need is a summation that works all the way up to ##g=1##. Furthermore, the summation must work for all possible energies of the incoming particles! This despite the fact that for large energies, even the lowest few terms in the series will grow wildly rather than shrinking.

Also note that the summation scheme will probably have to be a relatively "tame" one like Borel summation. "Wilder" ideas like zeta function regularization are unlikely to give an ##S(g)## satisfying the axioms.
 
  • #83
vanhees71 said:
What is the "smearing" of the operators other than a "regularization of the distributions"?
In which sense could the occurrence of a test function in the definition $$\int g(x)\delta(x)dx=g(0)$$ of the Dirac delta distribution be a regularization of the latter?

The use of test functions is inherent in the definition of a distribution. Thus using test functions does not regularize the distribution in any meaningful sense. Nothing is smeared.
 
Likes maline
  • #84
Sure, you don't call it that, but that's what's behind it physically.
 
Likes Tendex
  • #85
maline said:
In the article, you mention the hope that a "suitable summation scheme" will be found for Causal Perturbation Theory, thus proving the rigorous existence of ##S(g)##. To me this hope seems unsupported and wildly optimistic.
Probably only because you haven't thought enough about this matter.
maline said:
Remember that this is a power series in ##g##, so what we need is a summation that works all the way up to ##g=1##.
One has to sum a power series in a single variable ##\tau##, introduced by replacing ##g## with ##\tau g##. The resummed expression at ##\tau=1## will be a function of ##g## that can be analyzed for the adiabatic limit ##g\to 1## in the same way as one now analyzes this limit for the few-loop contributions.
maline said:
Furthermore, the summation must work for all possible energies of the incoming particles! This despite the fact that for large energies, even the lowest few terms in the series will grow wildly rather than shrinking.
The whole point of resummation is that it includes important contributions from all energies. The size of the terms in the power series is completely irrelevant for the behavior of the resummed formulas.
maline said:
Also note that the summation scheme will probably have to be a relatively "tame" one like Borel summation. "Wilder" ideas like zeta function regularization are unlikely to give an ##S(g)## satisfying the axioms.
Borel summation is not sufficient because of the appearance of renormalon contributions. A promising approach is via resurgent transseries, an approach much more powerful than Borel summation.
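For orientation, a textbook toy example (nothing QFT-specific): the divergent series ##\sum_{n\ge 0}(-1)^n n!\,\tau^n## has the Borel sum
$$B(\tau)=\int_0^\infty\frac{e^{-t}}{1+\tau t}\,dt,$$
finite for all ##\tau\ge 0##; expanding ##1/(1+\tau t)## and using ##\int_0^\infty t^n e^{-t}\,dt=n!## reproduces the series term by term. Renormalons obstruct exactly this procedure by placing singularities of the Borel transform on the positive real axis.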
 
Last edited:
Likes maline
  • #86
A. Neumaier said:
One has to sum a power series in a single variable ##\tau##, introduced by replacing ##g## with ##\tau g##. The resummed expression at ##\tau=1## will be a function of ##g## that can be analyzed for the adiabatic limit ##g\to 1## in the same way as one now analyzes this limit for the few-loop contributions.

The whole point of resummation is that it includes important contributions from all energies. The size of the terms in the power series is completely irrelevant for the behavior of the resummed formulas.

Borel summation is not sufficient because of the appearance of renormalon contributions. A promising approach is via resurgent transseries, an approach much more powerful than Borel summation.
Thank you for this, I will need to try to absorb this material before responding.
 
  • #87
vanhees71 said:
Sure, you don't call it that, but that's what's behind it physically.
No. In physics, regularization always involves changing a problem to a nearby less singular problem and restoring the original problem later by taking a limit. Nothing like this happens in causal perturbation theory in the UV, i.e., regarding the treatment of the distributions. Note that the adiabatic limit ##g\to 1## is not needed for the construction of the local field operators and hence for the perturbative construction of the quantum field theory in terms of formally local operators in a Hilbert space.

On the perturbative level, the adiabatic limit is the only limit appearing in the causal approach, needed for the recovery of the IR regime, including the physical S-matrix. The lack of convergence of the perturbative series also has nothing to do with regularization.
 
Last edited:
  • #88
I don't think we were discussing the lack of convergence of the perturbative series. That's another issue: it's an asymptotic series, not a convergent one, already in simple QM problems.

In my understanding, the problem with not taking the said adiabatic limit, however, seems to be that you lose Poincare invariance. In that sense this approach to renormalization is also just another type of regularization, with the necessity of taking the limit at the end in order to have a Poincare invariant/covariant scheme.
 
Likes Tendex
  • #89
vanhees71 said:
In my understanding, the problem with not taking the said adiabatic limit, however, seems to be that you lose Poincare invariance.
The field operators defined by functional differentiation with respect to the test functions ##g## satisfy causal commutation rules and transform in a Poincare covariant way. For this to work, ##S(g)## is just a formal object without any pretense of being an S-matrix. No adiabatic limit is involved here.
vanhees71 said:
In that sense this approach to renormalization is also just another type of regularization, with the necessity of taking the limit at the end in order to have a Poincare invariant/covariant scheme.
The adiabatic limit is only needed to recover the Poincare invariant S-matrix. However, this limit is a long distance (low energy) IR limit.

On the other hand, conventional regularization schemes such as dimensional regularization or procedures with a cutoff regularize instead the short distance (high energy) UV behavior. The latter would correspond to requiring somewhere in causal perturbation theory a limit where ##g## approaches a delta function. But such a limit is never even contemplated in the literature on the causal approach.
 
  • #90
I actually see the analogy as viewing the CPT treatment of distributions as a kind of IR regularization, without the usual UV regularization limit applied to ##g##, which CPT obviously doesn't take.
 
Last edited:
  • #91
A. Neumaier said:
The field operators defined by functional differentiation with respect to the test functions ##g## satisfy causal commutation rules and transform in a Poincare covariant way. For this to work, ##S(g)## is just a formal object without any pretense of being an S-matrix. No adiabatic limit is involved here.

The adiabatic limit is only needed to recover the Poincare invariant S-matrix. However, this limit is a long distance (low energy) IR limit.

On the other hand, conventional regularization schemes such as dimensional regularization or procedures with a cutoff regularize instead the short distance (high energy) UV behavior. The latter would correspond to requiring somewhere in causal perturbation theory a limit where ##g## approaches a delta function. But such a limit is never even contemplated in the literature on the causal approach.
Now I'm puzzled. I thought the entire business of the causal PT approach is the usual UV regularization. I guess I have to study this approach in more detail to understand what's behind it.
 
  • #92
vanhees71 said:
Now I'm puzzled. I thought the entire business of the causal PT approach is the usual UV regularization. I guess I have to study this approach in more detail to understand what's behind it.
The causal approach achieves UV renormalization without any regularization. But to be able to work with free fields it regularizes the physical S-matrix in the IR by means of test functions with compact support (rather than arbitrary smooth ones), which amounts to switching off the interaction at large distances. The adiabatic limit restores the long distance interactions.

This is fully analogous to truncating short range potentials in quantum mechanics in order to be able to use free particles at large negative and positive times and to obtain an S-matrix without any limit. In quantum mechanics, the adiabatic limit restores the original potential. The mathematically proper treatment has to introduce a Hilbert space of asymptotic states and a Møller operator that transforms from infinite time to finite time. This makes the whole procedure less intuitive and requires more machinery from functional analysis, described rigorously in the four mathematical physics volumes of Reed and Simon.
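Schematically (standard potential scattering; sign conventions vary by author), the Møller operators are the strong limits
$$\Omega_\pm=\lim_{t\to\mp\infty}e^{iHt}\,e^{-iH_0t},\qquad S=\Omega_-^\dagger\,\Omega_+,$$
where ##H_0## is the free and ##H## the full Hamiltonian; these limits exist for sufficiently short-ranged potentials.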
 
  • #93
vanhees71 said:
Now I'm puzzled. I thought the entire business of the causal PT approach is the usual UV regularization. I guess I have to study this approach in more detail to understand what's behind it.
There are no UV divergences in CPT, since it uses renormalized distributions, so no UV regularization is needed. The causal approach with ##S(g)## is based on switching on the interaction in a finite spacetime region, specified by the function ##g(x)## of which ##S(g)## is a functional. The physical S-matrix corresponds to the limit in which the spacetime volume goes to infinity. So the adiabatic limit is an IR limit.
 
  • #94
A. Neumaier said:
There is no regularization in causal perturbation theory.
A. Neumaier said:
But to be able to work with free fields it regularizes the physical S-matrix in the IR by means of test functions with compact support (rather than arbitrary smooth ones), which amounts to switching off the interaction at large distances.
I guess in the first sentence you meant UV regularization then. I thought vanhees71 was all along talking about the obvious IR regularization in CPT. In any case, some kind of regularization is always involved.
 
  • #95
Tendex said:
I guess in the first sentence you meant UV regularization then.
Yes, corrected. In the above context, I was referring to regularization in the UV sense, like vanhees71. I became more precise when it was clear that misunderstandings resulted.
 
Likes Tendex
  • #96
A. Neumaier said:
The causal approach achieves UV renormalization without any regularization. But to be able to work with free fields it regularizes the physical S-matrix in the IR by means of test functions with compact support (rather than arbitrary smooth ones), which amounts to switching off the interaction at large distances. The adiabatic limit restores the long distance interactions.

This is fully analogous to truncating short range potentials in quantum mechanics in order to be able to use free particles at large negative and positive times and to obtain an S-matrix without any limit. In quantum mechanics, the adiabatic limit restores the original potential. The mathematically proper treatment has to introduce a Hilbert space of asymptotic states and a Møller operator that transforms from infinite time to finite time. This makes the whole procedure less intuitive and requires more machinery from functional analysis, described rigorously in the four mathematical physics volumes of Reed and Simon.
But renormalization has nothing to do with regularization. Regularization in the usual approach is just to get finite quantities to be able to calculate the "unrenormalized quantities" before you renormalize them, i.e., to express the unobservable "infinite constants of the theory" in terms of "measurable finite ones".

BPHZ-like schemes directly define the renormalized proper vertex functions in a given scheme without previous regularization.

As I said, I guess I've simply not really understood how this special scheme of causal PT works to regularize/renormalize the UV divergences; it must somehow handle this first, before one can address the IR/collinear divergences, which only occur in theories with massless fields.
 
  • #97
vanhees71 said:
Regularization in the usual approach is just to get finite quantities to be able to calculate the "unrenormalized quantities" before you renormalize them, i.e., to express the unobservable "infinite constants of the theory" in terms of "measurable finite ones". [...] I've simply not really understood how this special scheme of causal PT works to regularize/renormalize the UV divergences,
Such "infinite constants of the theory" or ''UV divergences'' nowhere arise in the causal approach, hence no regularization is needed. Even the word ''re''normalization is a misnomer in this approach, since from the start only physical parameters appear.

The only remnant of the traditional renormalization approach is due to subtracted dispersion relations, which introduce at each order some constants. But these are fixed immediately by relations that lead to a unique distribution splitting.
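Schematically, a once-subtracted dispersion relation has the form
$$f(s)=f(s_0)+\frac{s-s_0}{\pi}\int_{s_{\rm th}}^{\infty}\frac{{\rm Im}\,f(s')}{(s'-s_0)(s'-s)}\,ds',$$
and the subtraction constant ##f(s_0)## is the free constant that is fixed, order by order, by the normalization conditions of the splitting.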
 
  • #98
So what's the strategy to get rid of the usual infinities, and where does renormalization come in in the scheme of causal perturbation theory? The point is that you need renormalization no matter whether you have infinities or not. If you don't have infinities, of course you don't need regularization.

The reason why I was never much interested in the book by Scharf was that from my (maybe too superficial) glance over it I had indeed the impression that it's nothing more than the use of subtracted dispersion relations. This is of course also a way to renormalize in the standard approach, as shown in Landau & Lifshitz vol. IV. I'm just not so sure whether it's practical for higher than one-loop calculations.
 
  • #99
vanhees71 said:
So what's the strategy to get rid of the usual infinities
The strategy is to never introduce them. The distributions used have the mathematically correct singularities, and these distributions are manipulated in a mathematically well-defined way. Thus infinities cannot appear by design.
vanhees71 said:
where does renormalization come in in the scheme of causal perturbation theory?
Only in the fact that the final results agree with the results of conventional renormalization schemes. The starting point (i.e., the axioms and the first order ansatz) does not refer to anything that would need renormalization.
vanhees71 said:
The point is that you need renormalization no matter whether you have infinities or not.
This is a wrong, unsupported claim. One needs it only if one starts with the ill-defined Dyson series.
vanhees71 said:
I'm just not so sure whether it's practical for higher than one-loop calculations.
How many loops are you using for your QCD calculations?
 
Last edited:
  • #100
The recent book
  • Michael Dütsch, From Classical Field Theory to Perturbative Quantum Field Theory, Birkhäuser 2019
treats causal perturbation theory in a different way than Scharf, using off-shell deformation quantization rather than Fock space as the starting point. From the preface:
Michael Dütsch said:
the aim of this book is to give a logically satisfactory route from the fundamental principles to the concrete applications of pQFT, which is well intelligible for students in mathematical physics on the master or Ph.D. level. This book is mainly written for the latter; it is made to be used as basis for an introduction to pQFT in a graduate-level course.
[...]
This formalism is also well suited for practical computations, as is explained in Sect. 3.5 (“Techniques to renormalize in practice”) and by many examples and exercises.
[...]
The observables are constructed as formal power series in the coupling constant and in ##\hbar##.
[...]
This book yields a perturbative construction of the net of algebras of observables ("perturbative algebraic QFT", Sect. 3.7), this net satisfies the Haag–Kastler axioms [93] of algebraic QFT, except that there is no suitable norm available on these formal power series.
In contrast to Scharf, he often uses renormalization language. However, he also writes (p.165, his italics):
Michael Dütsch said:
However, we emphasize: Epstein–Glaser renormalization is well defined without any regularization or divergent counter terms. We introduce these devices only as a method for practical computation of the extension of distributions (see Sect. 3.5.2 about analytic regularization), or to be able to mimic Wilson’s renormalization group (see Sect. 3.9).
 
Last edited:
