Causal Perturbation Theory - Comments

In summary, A. Neumaier's new PF Insights post discusses causal perturbation theory, which is a method for constructing relativistic quantum field theories without a UV cutoff in finite volume. This approach is based on distribution splitting techniques borrowed from microlocal analysis and is manifestly covariant. While it handles UV problems, it still faces the usual IR problems that must be handled by coherent state techniques. The causal approach is not fully rigorous, as it relies on a mathematically ill-defined notion of time ordering, but Epstein and Glaser's contributions have made it more rigorous. However, it has not been fully incorporated with other rigorous approaches, such as the Kulish-Faddeev paper, which has settled the QED infrared problems on
  • #36
Tendex said:
isn't referring to it as a "Causal theory" a bit of a misnomer?
It is generally used; I am not responsible for the name. I think it is called causal since causality dictates the axioms and the derived conditions for the distribution splitting.
Tendex said:
validity only term by term is quite a "truncation" of the theory
Of course; I never claimed anything else. By making approximations, any computational scheme necessarily truncates the theory and hence violates locality. This even holds in 2D and 3D, where local QFTs have been constructed rigorously but computations still need to employ approximations.

My emphasis was that all truncations are covariant and hence relativistic in the standard sense of the word. Only locality is slightly violated.
Tendex said:
not constructing any theory that gets us closer to the nonperturbative local QFT.
The complete, infinite order construction satisfies all axioms, and hence the local commutation rules, in the sense of formal power series. In this sense it is closer to nonperturbative local QFT.
Tendex said:
if one assumes from the start the existence of the non-perturbative local QFT (this is what Dyson and Feynman did), this seems to me like an empty exercise in rigor and old perturbation theory was fine
Old perturbation theory only defines the perturbative S-matrix, but not the operators, and hence not the finite time dynamics. Thus it lacks much of what makes a quantum theory well-defined.
Tendex said:
Perhaps an appropriate mathematical analogy is with numerical brute force searches of Riemann hypothesis zeros outside the critical line, that always remain equally infinitely far from confirming the hypothesis.
This is a valid comparison. Indeed, the Riemann hypothesis, global existence of solutions of the Navier-Stokes equations, and the construction of an interacting local QFT in 4D are three of the six open Millennium problems. They share the feature that numerically, everything of interest is established beyond doubt, but the mathematical techniques for producing rigorous arguments are not sufficiently developed to lead to success on this level.
 
Last edited:
  • Like
Likes Tendex
  • #37
Then I have to ask again: What else do you gain in (vacuum) QFT besides the S-matrix?

In the standard approach, the S-matrix is what describes the observable effects like cross sections and decay rates. It deals with transition-probability rates between asymptotic free initial states (where you have a physical definition of the states as "particles") and asymptotic free final states.

Is there an idea that there are physically observable interpretations of states defined by the "transient" field operators, and if so, what is it and how can it be measured?
 
  • #38
vanhees71 said:
What else do you gain in (vacuum) QFT besides the S-matrix?
I had already answered this:

BPHZ perturbation theory, say, only defines the perturbative S-matrix, but neither operators nor Wightman N-point functions, and hence no finite-time dynamics. Moreover, it gives nothing for states other than the ground state. Thus it lacks much of what makes a quantum theory well-defined, quite independently of what can be measured.

For comparison, if all that ordinary quantum mechanics could compute for a few particle system were its ground state and the S-matrix, we would have the status quo of 1928, very far from the current state of the art in few particle quantum mechanics.
 
  • #39
Tendex said:
not an exactly or completely causal (in the sense of microcausal) theory due to its perturbative limitations, isn't referring to it as a "Causal theory" a bit of a misnomer? I know there are historical reasons starting with the work of Bogoliubov and I guess that it refers to the global causality of asymptotic states rather than to the "in principle" exact (micro)causality usually explained in regular QFT textbooks when explaining the locality axiom
I think the name ''microcausality'' for the spacelike commutation rule, though quite common, is the real misnomer. The commutation rule is rather characteristic of locality (''experiments at the same time but different places can be independently prepared''), as indicated by the title of Haag's book. Locality is intrinsically based on spacelike commutation and cannot be discussed without it or directly equivalent properties.

On the other hand, the relation between causality and spacelike commutation is indirect, restricted to relativistic QFT. Moreover, the relation works only in one direction since spacelike commutation requires a notion of causality for its definition, while causality can be discussed easily without spacelike commutation.

Indeed, causality (''the future does not affect the past'') is conceptually most related to dispersion relations (where causal arguments enter in an essential way throughout quantum mechanics, even in the nonrelativistic case) and to Lorentz invariance (already in classical mechanics where ''microcausality'' is trivially valid). Both figure very prominently in causal perturbation theory.
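The dispersion relations referred to here have a standard form (Kramers-Kronig, stated for a generic linear response function): if the response vanishes for ##t<0##, its Fourier transform ##\chi(\omega)## is analytic in the upper half plane and its real and imaginary parts determine each other,
$$\operatorname{Re}\chi(\omega)=\frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^{\infty}\frac{\operatorname{Im}\chi(\omega')}{\omega'-\omega}\,d\omega'.$$
This is the sense in which causality (''no response before the stimulus'') translates into analyticity, independently of any commutation rule.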
Tendex said:
maybe that's why the title of Scharf's book uses first the less confusing phrase "Finite QED".
Scharf's QED is finite order by order, just as causal perturbation theory is causal order by order. Since Scharf does not produce finite results in the limit of infinite order you should complain against the appropriateness of the label ''finite'' with the same force as you complain against the label ''causal'' in causal perturbation theory.
 
Last edited:
  • Like
Likes dextercioby
  • #40
A. Neumaier said:
I had already answered this:

BPHZ perturbation theory, say, only defines the perturbative S-matrix, but neither operators nor Wightman N-point functions, and hence no finite-time dynamics. Moreover, it gives nothing for states other than the ground state. Thus it lacks much of what makes a quantum theory well-defined, quite independently of what can be measured.

For comparison, if all that ordinary quantum mechanics could compute for a few particle system were its ground state and the S-matrix, we would have the status quo of 1928, very far from the current state of the art in few particle quantum mechanics.
Yes, that's formally clear. What I don't see is what you gain physics-wise. What is the physical observable related to finite-time dynamics? I also don't understand why you say that in standard PT you don't get Wightman functions. Of course you can calculate them perturbatively within the usual formalism.
 
  • Like
Likes Tendex
  • #41
vanhees71 said:
Yes, that's formally clear. What I don't see is what you gain physics-wise. What is the physical observable related to finite-time dynamics?
What do you mean? The Hamiltonian together with the Schrödinger equation tells how a state changes in a finite time. But in the textbook formalism, the Hamiltonian comes out infinite and cannot be used.
vanhees71 said:
I also don't understand, why you say that in standard PT you don't get Wightman functions. Of course you can calculate them perturbatively within the usual formalism.
The textbook formalism only gives the time-ordered N-point functions. How do you time-unorder them?
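For the two-point function the relation is explicit: writing ##W(x,y)=\langle 0|\phi(x)\phi(y)|0\rangle## for the Wightman function,
$$\langle 0|T\,\phi(x)\phi(y)|0\rangle=\theta(x^0-y^0)\,W(x,y)+\theta(y^0-x^0)\,W(y,x),$$
so passing from the time-ordered function back to ##W## means undoing this ##\theta##-decomposition, and standard perturbation theory provides no recipe for that at general ##N##.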
 
  • #42
The question is what I gain from finite-time states in QFT. The problem is first how to interpret them. All you need are transition probabilities between observable states, which have a physical meaning. The closest to something finite in time are, e.g., long-baseline experiments for neutrinos, which you handle with the usual S-matrix theory using wave packets as asymptotic initial and final states, or coincidence measurements of multi-photon states. Also here you need the corresponding correlation functions, which can be evaluated perturbatively, and afaik that's all that's needed in quantum optics to describe the observables.

The "fixed-ordered Wightman functions" should be calculable by deriving the corresponding Feynman rules for them, but again my question: for which observable predictions do you need them?
 
  • Like
Likes Tendex
  • #43
vanhees71 said:
The question is what I gain from finite-time states in QFT.
If you are only interested in the interpretation of collision experiments, nothing. But if you want to do simulations in time of what happens, you need it. You are doing such simulations, so it is strange that you ask.

As you well know, this goes far beyond BPHZ, which is the textbook material that was under discussion above. Simulations in time are usually done with the CTP formalism, which can produce all required information in a nonperturbative way, given some perturbatively computed input.

The latter must be renormalized, which is done in an ad hoc, nonrigorous way. Presumably it can be placed on a more rigorous basis by using the causal techniques.

This may even resolve some of the causality issues reported in the CTP literature. (This was maybe around 10 years ago; I haven't followed it up, are these problems satisfactorily resolved by now?)
 
  • #44
vanhees71 said:
The "fixed-ordered Wightman functions" should be calculable by deriving the corresponding Feynman rules for them
But there is no Dyson series for them, so Feynman rules cannot be derived in the textbook way.

And CTP only produces 2-point Wightman functions, but not the ##N##-point functions for ##N>2##.
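To spell out the obstruction: the textbook Feynman rules for time-ordered functions descend from the Gell-Mann-Low formula (written schematically for an interaction Lagrangian ##\mathcal{L}_{\rm int}##),
$$\langle\Omega|T\,\phi(x_1)\cdots\phi(x_N)|\Omega\rangle=\frac{\langle 0|T\,\phi_I(x_1)\cdots\phi_I(x_N)\,e^{i\int d^4x\,\mathcal{L}_{\rm int}}|0\rangle}{\langle 0|T\,e^{i\int d^4x\,\mathcal{L}_{\rm int}}|0\rangle}.$$
The single overall time-ordering on the right is what makes Wick's theorem applicable; for fixed-ordered products there is no analogous textbook formula.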
 
  • #45
A. Neumaier said:
I think the name ''microcausality'' for the spacelike commutation rule, though quite common, is the real misnomer. The commutation rule is rather characteristic of locality (''experiments at the same time but different places can be independently prepared''), as indicated by the title of Haag's book. Locality is intrinsically based on spacelike commutation and cannot be discussed without it or directly equivalent properties.

On the other hand, the relation between causality and spacelike commutation is indirect, restricted to relativistic QFT. Moreover, the relation works only in one direction since spacelike commutation requires a notion of causality for its definition, while causality can be discussed easily without spacelike commutation.

Indeed, causality (''the future does not affect the past'') is conceptually most related to dispersion relations (where causal arguments enter in an essential way throughout quantum mechanics, even in the nonrelativistic case) and to Lorentz invariance (already in classical mechanics where ''microcausality'' is trivially valid). Both figure very prominently in causal perturbation theory.
I basically agree with the above. There is some mixing between the different terms in certain contexts, though. In the free theory, as in the classical case you mention, causality and microcausality trivially overlap; relativistically this is reinforced by the symmetries, which include time reversal and spacetime translations, and the latter also hold in the putative non-perturbative local QFT. There you have the triad of Poincaré covariance, locality and unitarity inextricably united through the analytic dispersion relations extended from the Kramers relations to the whole complex plane.

Of course in the perturbative theory this has to be spoiled a little, and (always assuming the existence of the non-perturbative theory it approximates, which is what allows the necessary deformation of the free fields) locality splits from the other two.

Scharf's QED is finite order by order, just as causal perturbation theory is causal order by order. Since Scharf does not produce finite results in the limit of infinite order you should complain against the appropriateness of the label ''finite'' with the same force as you complain against the label ''causal'' in causal perturbation theory.
Well, I think I complained about the concepts behind it(which is what I care about) enough in other threads to get my point across. My comment as I said was purely about the word "causal" and its connotations and conflation with locality and (micro)causality that are known to create endless confusions in Bell-like discussions.
 
  • #46
Wightman propagators obviously belong to the axiomatic QFT proposal; I guess that by "the standard perturbative theory" vanhees71 refers to the plain Feynman propagator.
 
  • #47
Tendex said:
Wightman propagators obviously belong to the axiomatic QFT proposal, I guess by what you get with "the standard perturbative theory" vanhees71 refers to the plain Feynman propagator.
Yes, but he had asked what can be computed from approximate field operators that is not accessible by standard perturbative theory. I gave as examples the unordered N-point correlation functions. These are defined independently of the Wightman axioms; the latter just specify their desired covariance and locality conditions.
 
  • #48
Since CPT is UV-complete, does it suggest a physically motivated regularization scheme for old-fashioned QFT?
 
  • #49
HomogenousCow said:
Since CPT is UV-complete, does it suggest a physically motivated regularization scheme for old-fashioned QFT?
It is a physically motivated renormalization scheme replacing old-fashioned QFT, making regularization unnecessary. Why should one still want to regularize?
 
  • #50
A. Neumaier said:
If you are only interested in the interpretation of collision experiments, nothing. But if you want to do simulations in time of what happens, you need it. You are doing such simulations, so it is strange that you ask.

As you well know, this goes far beyond BPHZ, which is the textbook material that was under discussion above. Simulations in time are usually done with the CTP formalism, which can produce all required information in a nonperturbative way, given some perturbatively computed input.

The latter must be renormalized, which is done in an ad hoc, nonrigorous way. Presumably it can be placed on a more rigorous basis by using the causal techniques.

This may even resolve some of the causality issues reported in the CTP literature. (This was maybe around 10 years ago; I haven't followed it up, are these problems satisfactorily resolved by now?)
Yes, I'm doing simulations of what's going on in heavy-ion collisions, but that's far from the claim that you have a physical interpretation of "transient states". Take, e.g., the calculations I've done for dilepton production, i.e., the production of ##\text{e}^+ \text{e}^-##- and ##\mu^+ \mu^-##-pairs from a hot and dense strongly interacting medium.

The medium itself is described by either a fireball ("blastwave") parametrization of a hydrodynamic medium or in a coarse-graining approach with a relativistic transport-model simulation. In any case one maps the many-body system's evolution to a local-thermal-equilibrium situation.

For the dilepton-production rates we use spectral functions from an equilibrium-QFT calculation. The QFT observable here is the thermal electromagnetic-current autocorrelation function, i.e., the "retarded" expectation value of ##\hat{j}^{\mu}(x) \hat{j}^{\nu}(y)## wrt. the grand-canonical statistical operator. This can be evaluated either in the Matsubara formalism, followed by analytic continuation to the retarded two-point function, or directly in the Schwinger-Keldysh real-time formalism. What enters the dilepton-production-rate formula is the imaginary part of the retarded two-point function, via the famous McLerran formula.
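Schematically, the object in question is the retarded current-current correlator
$$\Pi_R^{\mu\nu}(q)=-i\int d^4x\,e^{iq\cdot x}\,\theta(x^0)\,\big\langle[\hat j^{\mu}(x),\hat j^{\nu}(0)]\big\rangle_T,$$
with ##\langle\cdot\rangle_T## the grand-canonical average; the dilepton rate is then proportional to ##f_B(q^0)\,\operatorname{Im}\Pi_{R\,\mu}{}^{\mu}(q)##, with ##f_B## the Bose distribution (prefactors omitted here).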

So everything from QFT is within the standard formalism of thermal correlation functions. There's no need to physically interpret transient states; all "particles" observed are calculated in the sense of the usual concept of asymptotic free states.

The description of the bulk medium is in terms of semi-classical transport theories, behind which, when looked at from the point of view of many-body QFT, also lies the interpretation of "particles" in terms of asymptotic free states.
 
  • #51
A. Neumaier said:
It is a physically motivated renormarization scheme replacing old-fashioned QFT, making regularization unnecessary. Why should one still want to regularize?
Well, in a way you also regularize by using the "smeared" operators, and that's very physical. Already in classical electrodynamics plane waves are mode functions, i.e., a calculational tool to solve the Maxwell equations for physically realistic fields, i.e., for fields with finite total energy, momentum, and angular momentum. It's just using generalized eigenfunctions of the appropriate self-adjoint operators (in this case the d'Alembert operator).
Indeed, of all the attempts to make QFT mathematically rigorous, this causal-perturbation-theory approach with "smearing" of the distribution-like operators is the most physically plausible, and I have no quibbles with it in principle. I only don't see what one gains from it physics-wise, i.e., which physically observable quantities can be calculated that cannot be calculated within the standard scheme. In standard PT, dim. reg. is very convenient as a regularization scheme, and the renormalized theory is anyway independent of the regularization scheme. You can also do the subtraction in a BPHZ-like manner without intermediate regularization, but that can be tricky, and dim. reg. is just more convenient.
 
  • #52
vanhees71 said:
So everything from QFT is within the standard formalism of thermal correlation functions. There's no need to physically interpret transient states; all "particles" observed are calculated in the sense of the usual concept of asymptotic free states.

The description of the bulk medium is in terms of semi-classical transport theories, behind which, when looked at from the point of view of many-body QFT, also lies the interpretation of "particles" in terms of asymptotic free states.
Transient states are important, but they are states of the interacting quantum field, not particle states. Particle states are only meaningful as asymptotic states. I didn't claim any relation of field operators to an unphysical interpretation of transient states as particle states; transient states are more complex objects.

But I claimed that N-point functions other than those occurring in the textbook description of QFT are relevant. You confirm this by referring to retarded correlation functions of currents:
vanhees71 said:
The QFT observable here is the thermal electromagnetic-current autocorrelation function, i.e., the "retarded" expectation value of
These are not covered by the BPHZ approach to renormalization.
vanhees71 said:
in a way you also regularize by using the "smeared" operators
No; these are just arbitrary functions, dummy parameters in the causal approach, analogous to the ##x## in a perturbative calculation of ##e^x##.
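The ##e^x## analogy can be made concrete with a small sketch (function names and values are mine, purely illustrative): the Taylor coefficients are fixed once and for all, and the dummy argument ##x## is substituted only at the end, just as the smearing functions ##g## are in the causal approach.

```python
from math import exp, factorial

def exp_series(x, order):
    """Truncated Taylor series of e^x up to the given order.

    The coefficients 1/n! are determined independently of x;
    x is a dummy argument substituted only at evaluation time.
    """
    return sum(x**n / factorial(n) for n in range(order + 1))

# The truncation approximates e^x for any fixed x, without x ever
# having played a role in fixing the coefficients.
approx = exp_series(1.0, 10)
```

At order 10 the truncation already matches ##e## to several digits, while the series itself is a purely formal object in ##x##.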
vanhees71 said:
In standard PT dim. reg. is very convenient as a regularization scheme, and the renormalized theory is anyway independent of the regularization scheme.
It is a physically meaningless regularization scheme, as ##4-\epsilon##-dimensional space is unphysical and not even mathematically well-defined. Moreover, the independence of the regularization scheme is an assumption, not something proved; on the contrary, there are disputes, since in certain situations there seem to be disagreements.
 
  • #53
At least in equilibrium many-body QFT you can show that order by order all you need to renormalize are the vacuum pieces of the proper vertex functions. In this sense BPHZ is sufficient.

Do you have an example, where two properly applied regularization schemes lead to different results for the renormalized quantities?

I thought it's self-evident that with fixed renormalization conditions the proper vertex functions are unique, and then the physical quantities like S-matrix elements, pole masses defining physical masses of particles/resonances, etc., are independent of this scheme. The reason is that you can, in principle, use some subtraction scheme à la BPHZ without any intermediate regularization, using just the renormalization conditions for the proper vertex functions.
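A minimal illustration of such a subtraction without intermediate regularization: the logarithmically divergent one-loop bubble becomes finite once its Taylor term at ##p=0## is subtracted from the integrand,
$$\Pi_{\rm ren}(p)=\int d^4k\left[\frac{1}{(k^2-m^2)\,((k+p)^2-m^2)}-\frac{1}{(k^2-m^2)^2}\right],$$
since the bracket falls off one power of ##k## faster than either term alone; the renormalization condition ##\Pi_{\rm ren}(0)=0## is built in.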
 
  • #54
vanhees71 said:
At least in equilibrium many-body QFT you can show that order by order all you need to renormalize are the vacuum pieces of the proper vertex functions. In this sense BPHZ is sufficient.

Do you have an example, where two properly applied regularization schemes lead to different results for the renormalized quantities?
It depends on the meaning of the undefined term 'properly applied'. By convention, regularization schemes are deemed properly applied when they lead to the same results for the renormalized quantities. This is the only known criterion.

Sometimes one gets different results, however...

Then one has to investigate why they differ and which result (if any) is to be trusted. I believe that I read a couple of papers with examples where dimensional regularization was thought to be in error. But I need more time to check this and to retrieve the references.

vanhees71 said:
I thought it's self-evident that with fixed renormalization conditions the proper vertex functions are unique and then the physical quantities like S-matrix elements, pole masses defining physical masses of particles/resonances, etc. are independent of this scheme.
It is self-evident only until counterexamples are found, which force one to be more specific about how to ''properly apply'' the technique beyond what can be found in standard sources.

It would be self-evident if the schemes were rigorously derived from an undisputed rigorous definition of the theory. But the latter does not exist yet.

It is similar to what happens with self-adjointness. You acknowledge in your lecture notes that it is a necessary property of Hamiltonians. But you seem to take it as self-evident that all Hamiltonians actually used have this property. At least you never give sufficient conditions that would allow readers to check for themselves the self-adjointness of Hamiltonians written down formally. Usually the property holds, but there are exceptions, and they are recognized heuristically only by the faulty results they produce. Few physicists care about proving the self-adjointness of the Hamiltonians they use. I even wonder how you would check one for self-adjointness.
 
Last edited:
  • #55
Well yes, we are physicists, not mathematicians.

One example, where you find pretty often wrong statements in introductory textbooks, is the box with rigid boundary conditions and the claim that there is a momentum operator. ;-)

I still don't know which examples you have in mind where the standard regularization techniques lead to erroneous results.

One example may be the ##\gamma^5## problem in dim reg and the chiral anomaly, but this has to be solved anyway by arguing which current has to be anomalously broken and which one must stay conserved.
 
  • #56
vanhees71 said:
I still don't know, which examples you have in mind, where the standard regularization techniques lead to erroneous results.
It is more politely called "renormalization ambiguities".
Wu said:
The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of scheme or other theoretical conventions.
This is a quote from the abstract of https://arxiv.org/pdf/1302.0599.pdf. It says explicitly that scheme independence is a desirable assumption, not an achieved fact.
Even when it can be done, showing equivalence of two renormalization schemes is usually a highly nontrivial matter leading to a publication. This means that assessing whether a renormalization procedure is "properly applied" is in all but the simplest cases more an art than a science.
 
  • Like
  • Informative
Likes dextercioby and vanhees71
  • #57
Yes, sure, that's a much more puzzling problem than what I had in mind with my statement above. What I meant is that for a given Feynman diagram for a proper vertex function you get a unique answer, given a renormalization scheme (usually involving a renormalization scale), independent of the intermediate regularization you use. So these proper vertex functions and the S-matrix elements within the chosen renormalization scheme are independent of the chosen regularization, but of course dependent on the chosen renormalization scheme and on the renormalization scale.

The S-matrix is of course only independent of the renormalization scale to the order of the expansion parameter (couplings or ##\hbar##, number of loops...) taken into account, and one can resum the leading logarithms by using RG equations to define running couplings that minimize this dependence.
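The running and the scale variation just described can be sketched with the one-loop formula for the QCD coupling (a toy sketch; the function name, the reference values ##\alpha_s(M_Z)\approx 0.118## and ##M_Z\approx 91.19## GeV, and the factor-2 band are illustrative defaults, not from the thread):

```python
from math import log, pi

def alpha_s(mu, alpha0=0.118, mu0=91.19, nf=5):
    """One-loop running coupling alpha_s(mu).

    alpha0 is the coupling at the reference scale mu0 (GeV);
    b0 = (33 - 2*nf)/(12*pi) is the one-loop beta coefficient.
    """
    b0 = (33 - 2 * nf) / (12 * pi)
    return alpha0 / (1 + alpha0 * b0 * log(mu**2 / mu0**2))

# "Uncertainty band" for a process at scale Q: vary mu in [Q/2, 2Q].
Q = 1000.0
band = [alpha_s(mu) for mu in (Q / 2, Q, 2 * Q)]
```

The spread of `band` is exactly the hand-waving uncertainty estimate described above: it shrinks order by order only if higher-loop terms are included.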

The problem of the uncertainty concerning the dependence on the renormalization scheme and on the corresponding renormalization scale is of course also not strictly solved (as is the problem of how to define an exact QFT in 1+3 dimensions). To estimate the uncertainty there are some hand-waving rules, e.g., setting the scale around the energy scale of the experiment you want to describe and varying the renormalization scale by some factor around this scale, leading to an "uncertainty band".

In thermal pQCD the scale is often chosen at ##2\pi T## (##T## the temperature) and varied around this value by a factor of 2 up and down. It's also a problem that even for the bulk thermodynamical quantities and the equation of state, the perturbative series (as far as it can be evaluated at all) is "not well convergent".

All these problems are far from being solved. Does "causal perturbation theory" have new ansätze for this problem? That would of course be very interesting, but in the end don't you just get the proper vertex functions of the standard formalism, within a special way to regularize or a special renormalization scheme somehow defined by the "smearing procedure"?

Perhaps I should have a closer look at Scharf's book again. When I looked at it the last time, some years ago, I had the impression that it's just another technique for getting rid of the UV divergences, leading in the end to the same results as the standard physicists' methods, which are a lot simpler to use.
 
  • #58
Does the careful splitting of causal distributions explained in chapter 3 of Scharf's book involve the same dispersion relations and analytic continuation techniques (given as a reference to the vol. 2 of Reed and Simon Methods of mathematical physics) as those used for local quantum fields in the Wightman axioms and also in Green's functions to guarantee positive energy under time reversal?
 
  • Like
Likes vanhees71
  • #59
Tendex said:
Does the careful splitting of causal distributions explained in chapter 3 of Scharf's book involve the same dispersion relations and analytic continuation techniques (given as a reference to the vol. 2 of Reed and Simon Methods of mathematical physics) as those used for local quantum fields in the Wightman axioms and also in Green's functions to guarantee positive energy under time reversal?
As far as I can tell, yes. These techniques are very general.
 
  • Like
Likes vanhees71
  • #60
But then it's indeed equivalent to the standard techniques for evaluating proper vertex functions within a given renormalization scheme.
 
  • #61
vanhees71 said:
But then it's indeed equivalent to the standard techniques for evaluating proper vertex functions within a given renormalization scheme.
In practice, that's what it looks like to me: just more mathematically sophisticated, in that it perturbatively avoids the interaction picture and gets rid of the UV divergences in a more elegant way, if you like. I found this video that talks about this equivalence, stressing the renormalization and distributional issues.
 
  • Like
Likes A. Neumaier and vanhees71
  • #62
Tendex said:
In practice, that's what it looks like to me: just more mathematically sophisticated, in that it perturbatively avoids the interaction picture and gets rid of the UV divergences in a more elegant way, if you like. I found this video that talks about this equivalence, stressing the renormalization and distributional issues.
The moral of this primarily historical and conceptually not demanding lecture on the causal approach is given in minute 47:34:
Michael Miller said:
There are compelling reasons to adopt an effective field interpretation of QFT, but providing the only available solution to the UV problems of the theory is not one of them.
(since causal perturbation theory settles these in a more convincing manner)
 
Last edited:
  • Like
Likes vanhees71
  • #63
Tendex said:
I found this video
What a weird, empty talk. It's "conceptually not demanding" because it's almost totally content-free. That's an hour that would have been better spent studying Scharf's textbook.
 
  • Like
Likes dextercioby
  • #64
strangerep said:
What a weird, empty talk. It's "conceptually not demanding" because it's almost totally content-free. That's an hour that would have been better spent studying Scharf's textbook.
Well, it is not quite empty. It explains how nonlinear operations with distributions naively force infinite constants and, properly done, lead to undetermined coefficients, already in situations simpler than quantum field theory. Thus it explains why the well-known distributional nature of quantum fields (even free ones) must run into difficulties, and why renormalization is needed.
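The simplest instance of these undetermined coefficients: ##1/x## is not locally integrable at ##0##, but the principal-value prescription defines an extension to all test functions, and any two extensions with the same scaling behaviour differ by a multiple of ##\delta##,
$$\langle T,\varphi\rangle=\mathcal{P}\int\frac{\varphi(x)}{x}\,dx+c\,\varphi(0),\qquad c\ \text{arbitrary},$$
so already here a renormalization (the choice of ##c##) is forced by the distributional setting, exactly as in the distribution splitting of the causal approach.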

By the way, the author is a philosopher with a PhD in physics.
 
  • Like
Likes dextercioby and Tendex
  • #65
A. Neumaier said:
Well, it is not quite empty. It explains how nonlinear operations with distributions force naively infinite constants [...]
I suspect you see those explanations in the talk because you've already been thinking about the subject for ages.

By the way, the author is a philosopher with a PhD in physics.
Yes, I was aware. To me, that fact explained his tendency to waffle on for quite a while without saying very much. :oldwink:
 
  • Like
Likes vanhees71 and weirdoguy
  • #66
The talk was given in the context of a philosophy-of-physics meeting, and the audience was not just theoretical physicists but also philosophers, so that is the level and "style" it was geared towards. I agree with Neumaier that some important points about distributions and renormalization were clearly made.

I must say, though, that I much prefer the directly non-perturbative way to deal with these renormalization-related concepts, namely the Källén-Lehmann spectral representation. The explanations in the talk appealed to me because they were general enough to relate to free fields, distributions and the constraints imposed on non-perturbative interacting fields, and not just to the particular perturbative strategy of Epstein-Glaser, which must ultimately be justified by the former to exist.
 
Last edited:
  • #67
Demystifier said:
Epstein-Glaser?
I don't know very much about these topics, so please enlighten me if I'm wrong. But if I understand correctly, Epstein-Glaser and the like merely describe in precise terms how renormalization is to be done, without making any attempt to justify the procedure "from first principles". They don't describe what it is that we are trying to calculate using these prescriptions. They provide an answer, but not the question!

Indeed, this will be true of any formalism that begins with a Lagrangian. Renormalization does not allow us to calculate the observable values of masses and coupling constants from the Lagrangian, meaning that predicting physical values requires some additional input. A properly specified theory would include "fundamental" parameters that eventually fix the measured values. If all we have is a Lagrangian, we simply do not know what we are calculating.

Another aspect of the same thing (I believe) is that Epstein-Glaser is fundamentally perturbative - that is, the power series is the only output; there is no function that the series is intended to approximate!

So I'm not sure these methods bring us any closer to explaining what we actually mean by QFT interaction terms...
 
  • #68
maline said:
this will be true of any formalism that begins with a Lagrangian.
Causal perturbation theory does not work with Lagrangians!

maline said:
So I'm not sure these methods bring us any closer to explaining what we actually mean by QFT interaction terms...
In the causal approach, the meaning is precisely given by the axioms for the parameterized S-matrix. The construction is at present perturbative only. Missing are only suitable summation schemes for which one can prove that their result satisfies the axioms nonperturbatively. This is a nontrivial and unsolved step but not something that looks completely hopeless.
 
  • #69
Tendex said:
In your insights article you write: "To define particular interacting local quantum field theories such as QED, one just has to require a particular form for the first order approximation of S(g). In cases where no bound states exist, which includes QED, this form is that of the traditional nonquadratic term in the action, but it has a different meaning."
Exactly in what way is the meaning different from the one in the traditional action? Does the approximation follow a local action principle or not?
It looks to me like it just uses a renormalized Lagrangian instead of the usual bare one, since it moves the renormalization to an earlier step instead of the usual later one. But the local action is still there in the background, just more rigorously renormalized from the start.
One can force causal perturbation theory into a Lagrangian framework; then it looks like this.

But nothing in causal perturbation theory ever makes any use of Lagrangian formalism or Lagrangian intuition. No action principle is visible in causal perturbation theory; it is not even clear how one should formulate the notion of an action!

Instead, causal perturbation theory starts with a collection of well-motivated axioms for the parameterized S-matrix (something that does not figure at all in the Lagrangian approach) and exploits the relations that follow from a formal expansion of the solution of these equations around a free quantum field theory. The latter need not be defined by a Lagrangian either but can be constructed directly from irreducible representations of the Poincaré group, as in Weinberg's book (where Lagrangians are introduced much later than free fields).
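For readers unfamiliar with the setup, the parameterized S-matrix of the Epstein-Glaser approach is a formal power series in a test function ##g## switching the interaction on and off (this is the standard form from the literature, paraphrased rather than quoted from the article):
$$S(g) \;=\; \mathbf{1} \;+\; \sum_{n\ge 1}\frac{1}{n!}\int d^4x_1\cdots d^4x_n\; T_n(x_1,\dots,x_n)\,g(x_1)\cdots g(x_n),$$
where the ##T_n## are operator-valued distributions (time-ordered products) determined recursively from ##T_1## by the causal factorization condition ##S(g_1+g_2)=S(g_1)\,S(g_2)## whenever the support of ##g_1## lies to the causal future of the support of ##g_2##. Specifying ##T_1## (for QED, essentially the normally ordered ##e\,{:}\bar\psi\gamma^\mu\psi A_\mu{:}##) selects the theory; no action principle enters.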
 
  • #70
Tendex said:
To define particular interacting local quantum field theories such as QED, one just has to require a particular form for the first order approximation of S(g). In cases where no bound states exist, which includes QED, this form is that of the traditional nonquadratic term in the action, but it has a different meaning.
This statement needs to be corrected. It indicates that the S-matrix for QED includes a term of first order in ##e##, when in fact the first term is of order ##e^2##. There are no one-vertex processes, because it isn't possible for all three particles (two fermions and a photon) to be on-shell.

To see this, assume WLOG that the photon is outgoing, and consider the energy in the rest frame of the incoming fermion.
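Spelling out the kinematics (my elaboration of the argument above): in the rest frame of the incoming fermion of mass ##m##, the initial energy is ##E_{\rm in}=m##. Momentum conservation forces the outgoing fermion and photon to carry opposite momenta of equal magnitude ##k##, so the final energy is
$$E_{\rm out} \;=\; \sqrt{m^2+k^2} \,+\, k \;\ge\; m \;=\; E_{\rm in},$$
with equality only for ##k=0##, i.e., no photon at all. Hence the on-shell process ##e^- \to e^- + \gamma## is kinematically forbidden, and the lowest nonvanishing contribution to the QED S-matrix is of order ##e^2##.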
 
