
Arnold Neumaier Interview: QFT Foundations and the Thermal Interpretation


Please give us a brief background on yourself

Over 40 years ago, I studied mathematics, physics and computer science in Freiburg and Berlin (Germany). My Ph.D. thesis was in pure mathematics, but I was later drawn more and more into applied mathematics. Since 1994 I have been a full professor of mathematics at the University of Vienna (Austria), where my official subject is optimization and numerical analysis. Since these are central subjects wherever mathematics is applied, I am interested in much more, including a number of applications in physics.

My scientific reputation is primarily based on my mathematical work. My research in physics therefore has the advantage that, unlike full-time physicists, I do not feel the pressure to publish incremental work in a competitive environment. Instead, I can spend my free time on unpopular or hard problems where good work does not immediately lead to results but needs long gestation periods of seemingly very little or no progress before things fall into place.

I often feel like working on a puzzle with 5,000 pieces: I assemble larger and larger, seemingly disconnected patches forming islands in my quest to understand the mathematical and philosophical foundations of physics. From time to time I see how two of these patches suddenly want to coalesce. This leads to breathtaking moments where continuous hard work culminates in exciting insights into the deeper workings of physics.

The joy of increased understanding is not diminished by the realization that, usually, I later find I was not the first to have had these insights. One day I hope to see the whole puzzle finished. From the parts I have already seen and understood I can tell that it will surely be a very beautiful whole.

Thus I sympathize very much with sentiments expressed repeatedly by John Archibald Wheeler, such as:

Someday, surely, we will see the principle underlying existence as so simple, so beautiful, so obvious that we will all say to each other, “Oh, how could we all have been so blind, so long.” (from “A Journey Into Gravity and Spacetime”, Scientific American Library, 1990)

You seem to effortlessly balance the theoretical with the practical. How do you see these two as a relationship now and going forward in science? Where does such diversity of interest in both come from?

Two years after my Ph.D., my formerly atheistic worldview changed and I became a Christian. I became convinced that there is a very powerful God who created the Universe, who controls what appears to us as chance, and who is interested in each of us individually. I understood (with Galilei, and later Newton and Maxwell) that God had written the book of nature in the language of mathematics. As a result of these insights, one of my life goals became to understand all the important applications of mathematics in other fields of science, engineering, and ordinary life. It is a challenge that keeps me learning all my life.

Theory and application prosper best together: to do the best in an application one needs to be familiar with (or help create) the best theoretical tools available. There is nothing more practical than a good theory. Though this saying was created in 1951 by the social scientist Kurt Lewin, it applies equally to natural science, engineering, and ordinary life.

Knowing the power of a tool in various applications gives highly discriminating standards for assessing its quality. This is important for separating the wheat from the chaff in theoretical work. It also gives a good feeling for the natural limits of a theoretical approach and signals which of the many questions that one can ask about a subject are important and likely to one day yield a strong answer.

What are some technologies and scientific advancements you are most interested in?

Technologies do not hold much interest for me personally. I do not even own a car or a TV set. However, I possess a laptop and a smartphone—primarily so that my 11-year-old son Felix can play with them.

I notice that humankind is well on the way to surrendering its sovereignty to machines (computers, robots, intelligent networks). It may well turn out to be irreversible, but whether it is in the interest of the human race is somewhat questionable.

Part of the research in my group is about teaching computers how to think like a mathematician. Or, more precisely, like me, since I am the only mathematician I know well enough to understand his precise way of thinking.

Apart from that, I concentrate on advances in optimization and on the mathematical and philosophical foundations of relativistic quantum field theory (QFT).

What do you think are neglected open questions in QFT that deserve more attention?

The best people too often focus on seeking new physics—extending the Standard Model to include quantum gravity, string theory, dark matter, etc.—and neglect the fact that many unresolved fundamental issues remain with the simplest realistic QFTs: quantum electrodynamics (QED) and quantum chromodynamics (QCD).

In particular, the question remains wide open of how to model and quantitatively predict bound states (hadrons, nuclei, molecules) from first principles. We have working simplified models on many levels of detail, but lack a deeper understanding of how each coarser model is derivable from the more detailed ones. The simplified models are often postulated without sufficient justification, using hand-waving arguments only.

How do you see the status of the Yang–Mills mass gap problem?

It is the only one among the Clay Millennium problems where the task is not simply to solve a question already stated precisely in mathematical terms. Instead, the requirement is to find a mathematical setting in which partial results from quantum field theory—obtained heuristically by theoretical physicists—are given a logically sound footing.

The task is to turn the physicist’s results into theorems about well-defined mathematical objects with all the properties demanded by the application. In essence it is a quest for the mathematical modeling of quantum field theory in four space–time dimensions with specified interactions. This must be done for Yang–Mills fields—a class of concrete theories deemed technically the easiest. A solution will consist of a nonperturbative mathematical construction of this class of QFTs. In particular, the construction will produce a representation of the bounded part of the quantum algebra on a Hilbert space of physical states.
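
For orientation, the official problem statement can be phrased compactly: construct a quantum Yang–Mills theory on four-dimensional Minkowski space satisfying accepted axioms (such as the Wightman axioms), and prove a mass gap, i.e., that the Hamiltonian ##H## satisfies

$$\mathrm{spec}(H) \subset \{0\} \cup [\Delta, \infty) \quad \text{for some } \Delta > 0,$$

with ##0## a simple eigenvalue (the vacuum), so that every physical excitation has mass at least ##\Delta##.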

If solved convincingly, the solution will not only significantly advance the mathematical toolkit available to physicists, but may also change the way quantum field theory is taught to the next generation of students.

You amplify the need to formulate conceptual foundations for nonperturbative QFT. Do you have any suggestions or proposals?

All I can safely say is that the correct approach has not yet been found. It is not even known what it means, in logically precise terms, to have a nonperturbative construction of a relativistic QFT with given interaction terms.

In the literature I see several worthwhile approaches that appear to contain complementary parts of what is needed. But none is likely to have, on its own, the strength required for a successful attack on the problem.

Strocchi’s work on the algebraic structure of rigorous nonperturbative gauge theories will surely be relevant in some way.

The theory of unitary representations of infinite-dimensional groups forms the backbone of many exactly solvable models in two space–time dimensions and in nonrelativistic QFT. It faces, in higher dimensions, similar limitations as current four-dimensional relativistic QFT. I expect that insights from that line of work (due to Mickelsson, Rajeev and others) will play a major role on the way towards proper nonperturbative constructions.

Vertex algebras coordinatize nonperturbative two-dimensional conformal field theories. An appropriate four-dimensional generalization of vertex algebras will likely play a role in providing rigorous foundations for operator product expansions. Holland’s perturbative work in this direction might help abstract the properties needed. A study of representations of infinite-dimensional Lie groups in vertex algebras might overcome some problems of group representations in Hilbert spaces.

The nonrigorous manipulations done in the context of the closed time path (CTP) functional integral approach to kinetic equations also appear relevant for a rigorous nonperturbative setting. Perhaps one day there will be a rigorous version of the exact renormalization group techniques (e.g., work by Wetterich) used in this context.

But of course, the real problem is to combine the various approaches and insights in the right way. If I had realistic suggestions, I would not mention them here; I would work them out and win the Millennium Prize!

There are other questions in QFT that get much popular press these days, for example issues related to “holography”. What do you think about these problems and the way they are being approached by the community?

Holography (especially its realization in the AdS/CFT correspondence) has clear value and has begun to impact numerical modeling in QFT. But, unlike many outspoken experts in the field, I believe it is just one of many useful tools and that it will have little impact on the big question of unifying physics within QFT. Real progress in that area will most likely depend on a much better understanding of the nonperturbative aspects of standard QFT—especially in the infrared, where the true complexities and the most severe unsolved problems appear.

You have highlighted that perturbative QFT has a rigorous formulation in terms of causal perturbation theory. How do you explain the sociological effect that most quantum field theorists are unaware their subject has solid foundations?

The main reason is that most people working in QFT care little about the foundations, which usually only consolidate computations known heuristically for a long time. The additional rigor now available seems to them not worth more attention than the rigorous treatment of Dirac's "delta functions." For twenty years after their introduction, these were thought to be mathematically dubious, but they were later given a respectable rigorous foundation through the theory of distributions. That the latter theory, though now over 70 years old, is hardly mentioned in typical quantum mechanics textbooks reflects this lack of care for foundational developments.
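
The distributional resolution is simple to state in modern terms: a distribution is a continuous linear functional on smooth test functions, and Dirac's delta is just the evaluation functional,

$$\delta[f] = f(0), \qquad \int \delta_\epsilon(x) f(x)\, dx \to f(0) \quad \text{for } \delta_\epsilon(x) = \frac{1}{\epsilon\sqrt{\pi}}\, e^{-x^2/\epsilon^2},\ \epsilon \to 0,$$

which turns the physicists' formal manipulations into theorems about such functionals.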

Theoretical physicists working in QFT also tend to dismiss the more rigorous algebraic approach championed by mathematical physicists. Two reasons contribute to this: (1) a severe language barrier separating the communities; and (2) long ago, when people attempted to bridge that barrier, the algebraic results were still too weak to be useful for practitioners. The latter has changed, but the barrier persists.

What are examples of issues in QFT where having the rigorous formulation makes a difference?

The main advantage is that a rigorous formulation shows that the infinities commonly discussed in textbooks are often the result of sloppy argumentation. They disappear when one handles distribution-valued field operators with the care required by their singular nature. This can be done without significantly more formal effort than conventional heuristics require.

All perturbative formulas can be constructed in a clean and unambiguous way, without taking dubious limits. There is no need for cut-offs that must be moved to infinity, possibly through a Landau pole singularity, nor for nonphysical, nonintegral dimensions and then taking a physical limit to four dimensions. Everything can be made manifestly covariant, so that the underlying principles are never compromised.
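
Schematically (in the Epstein–Glaser/Scharf setup), the S-matrix is built as a formal power series in a test function ##g## that smoothly switches the interaction on and off,

$$\mathcal{S}(g) = \mathbf{1} + \sum_{n \ge 1} \frac{1}{n!} \int d^4x_1 \cdots d^4x_n \, T_n(x_1,\dots,x_n)\, g(x_1) \cdots g(x_n),$$

where the time-ordered distributions ##T_n## are determined inductively by covariance and the causal factorization condition ##\mathcal{S}(g_1+g_2) = \mathcal{S}(g_1)\,\mathcal{S}(g_2)## whenever the support of ##g_1## lies outside the causal past of the support of ##g_2##. The only freedom left at each order is a finite-dimensional choice in extending ##T_n## to coinciding points; no cutoff ever appears.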

Finally, the rigorous approach shows that something is deeply amiss in the traditional S-matrix formulation of perturbative QFT, resulting in still unresolved infrared problems. The deeper origin is that the traditional setting has an incorrect view of the asymptotic state space.

In violation of observable physics, perturbation theory treats the electron (which has an intrinsic electromagnetic field) as a chargeless asymptotic particle. Similarly, confined quarks (which do not exist asymptotically) are treated as asymptotic particles, appearing as in- and out-states in the perturbative S-matrix. Conversely, bound states that should be asymptotic states are not directly visible in the perturbative S-matrix; their existence is inferred only indirectly through the poles of renormalized propagators.

Recently there has been a claim of advancement regarding the infrared behaviour of massless fields due to the thesis by Paweł Duch, “Massless fields and adiabatic limit in quantum field theory”. How do you evaluate the results?

Interest in these questions has grown recently after a long period of neglect.

Duch’s thesis embeds the Faddeev–Kulish approach into the causal perturbation theory framework, which is a significant improvement. It remains to be seen whether the limitations of the Faddeev–Kulish approach pointed out in a recent paper by Dybalski are absent in Duch’s approach. On p.57 Duch mentions the potentially most interesting item—the strong adiabatic limit, amounting to a perturbative construction of the S-matrix without infrared problems—but he leaves it unjustified, to be presented elsewhere.

Recent work by Strominger (review) and Herdegen on the asymptotic structure of QED also appears relevant.

None of these address the additional problems encountered when bound states are present. For that, there is related work by Derezinski & Gerard on the nonperturbative construction of the S-matrix in nonrelativistic models that exhibit perturbative infrared problems.

In your “Theoretical physics FAQ” you highlight, among others, the two textbooks by Günter Scharf on causal perturbation theory. Scharf added a new chapter considering isotropic but inhomogeneous solutions to Einstein’s equations and advertises them as a potential solution to the dark matter problem. What do you think?

I leave questions about dark matter and its properties to specialists in that area.

My interest in Scharf’s “true ghost story” lies primarily in how it shows the power of causality. Causality alone forces the gauge-group structure in the presence of massless spin-1 fields (including the ghost features) and it forces the general diffeomorphism covariance structure of massless spin-2 fields.

Scharf’s work also indicates that there seems to be nothing fundamentally wrong with canonically quantized gravity. Indeed, I would bet that the final unified field theory of all four fundamental forces will be a canonically quantized field theory rather than something more elaborate such as string theory.

At first glance, getting a rigorous handle on infrared problems of gauge field theories and finding a consistent nonperturbative formulation of quantum gravity appear far apart. But I believe these problems are intimately related: both are sides of the same technical obstacle that must be overcome to properly understand four-dimensional interacting relativistic quantum field theory.

In your publications and FAQ you consider issues of interpretation of quantum mechanics. Are there open problems in the interpretation of QM that matter and need solutions?

A wide-open foundational problem—almost nobody seems interested in it despite its basic nature—is how quantum mechanics (QM) is embedded into quantum field theory. QFT is the more fundamental subject, but it is seemingly built upon QM only through a somewhat loose connection: QFT provides scattering matrices for use in QM.

Conceptually, the objects with which quantum mechanics works (particles, rays, filters, detectors) and their properties (location, number, interaction) should be described by local patterns in spatially extended quantum fields. Here, virtually nothing is known. It is not even clear how a ray containing a stream of particles generated by a source and observed by a detector should be modeled in QFT. It is unsolved how to do this so that the traditional properties of the ray—assumed in quantum mechanical treatments—could be derived rather than postulated.

Clearly this involves idealization and approximation problems, like going from quantum theory to thermodynamics using statistical mechanics. Thus quantum mechanics should be built on QFT rather than the other way around. Investigating this should provide limits on the accuracy of quantum mechanical assumptions due to approximations involved in the idealizations made.

I believe such investigations would help solve the long-standing issue—unparalleled in any other area of science—of multiple coexisting but mutually incompatible interpretations of quantum mechanics. All these interpretations take idealizations at face value and pretend (unlike experimentalists who cope with intrinsic uncertainties in defining systems) that the customary foundations of quantum mechanics are exact. Treating these approximations as exact is, in my opinion, the core of the apparent paradoxes and the resulting perceived weirdness in QM foundations.

Do you have a suggestion for the solution?

It may be that finding satisfying philosophical foundations for quantum theory and finding rigorous logical foundations for quantum field theory will have to be solved simultaneously. This is suggested by the fact that superselection rules inherent in infrared problems give rise to classical (i.e., everywhere commuting) variables. Another hint is that textbook QFT currently lacks clear information about positivity—the Hilbert space structure of the underlying Fock space is often destroyed by renormalization—which is indispensable for interpreting QFT in terms of measurement.

On the other hand, a satisfying solution to the measurement problem may be close. My thermal interpretation seems to be the only interpretation of quantum mechanics that makes sense of how QM is actually applied to concrete problems without extraneous elements: neither external classical observers nor hidden variables nor unobservable worlds nor subjective probabilities are needed.

The thermal interpretation recognizes two basic but previously overlooked facts. First, we never directly measure something microscopic; we deduce microscopic information indirectly from macroscopic measurements together with a theory relating those measurements to the microscopic system of interest. Second, a macroscopic observation is the deterministic reading of an ensemble expectation value, not (as postulated in conventional interpretations) an intrinsically random event governed by Born’s rule. Both facts together remove the validity of many no-go theorems for a realistic interpretation of quantum mechanics.
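
As a toy numerical illustration of the second point (an editorial sketch, not Neumaier's own formalism): reading a macroscopic "pointer" as the average of ##\sigma_z## over ##N## independently prepared qubits gives a value that clusters around the ensemble expectation ##\langle\sigma_z\rangle## with spread of order ##1/\sqrt{N}##; for macroscopic ##N## the reading is effectively deterministic.

```python
import numpy as np

# Minimal sketch: a "macroscopic reading" as an ensemble average over N qubits,
# each prepared in cos(theta)|0> + sin(theta)|1>. Individual outcomes are +/-1,
# but their average concentrates on <sigma_z> with spread ~ 1/sqrt(N).
rng = np.random.default_rng(0)
theta = 0.6
p_up = np.cos(theta) ** 2      # Born weight of outcome +1
expect = 2 * p_up - 1          # ensemble expectation <sigma_z> = cos(2*theta)

for N in (100, 10_000, 1_000_000):
    outcomes = rng.choice([1, -1], size=N, p=[p_up, 1 - p_up])
    print(f"N={N:>9}: reading={outcomes.mean():+.5f}  "
          f"<sigma_z>={expect:+.5f}  spread~{1 / np.sqrt(N):.1e}")
```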

The thermal interpretation takes into account the approximate nature of quantum objects, but it is not yet sufficiently developed to answer all unsolved questions. This would require explicit QFT models of measurement situations that can be solved with customary approximations (including suitable coarse graining and a thermodynamic limit). Their solution should recover conventional quantum theory, including Born’s rule where it applies.

While I believe such an approach is feasible, it is beyond my current energy. My time is taken by a full-time mathematics professorship and my desire to finish a (delayed) book on quantum mechanics from a Lie algebra point of view. My earlier draft from 2011 has grown oversize and will be split into two volumes. The first volume, concentrating on the mathematics, was expected in Spring 2020.

The second volume, expected to include substantial material related to the thermal interpretation, should appear one year later. With some luck, I might find the time to work out a reasonably elementary QFT model that convincingly shows what is currently missing: an understanding of which approximations are needed to deduce conventional QM starting points from the QFT ontology underlying the thermal interpretation. The ultimate goal is to show that macroscopic measurements of microscopic quantities are random with statistics according to Born’s rule.

But unfortunately, research always takes much longer than anticipated. I could use a 48-hour day if not more to do everything I am interested in…

Special thanks to Urs Schreiber and DrChinese for supplying questions for this interview.

Graphic used in header: M.C. Escher (Dutch, 1898–1972), The Second Day of the Creation (c. 1925).

Read the next interview with physicist Clifford V. Johnson.

39 replies
  1. A. Neumaier says:
    Urs Schreiber
    1. inductively in ##k \in \mathbb{N}## choose splittings/extensions of distributions in Epstein-Glaser renormalization as ##k \to \infty##;
    2. consecutively in ##\Lambda \in [0,\infty)## choose counterterms at UV-cutoff ##\Lambda## for ##\Lambda \to \infty##.

    In both cases we zoom in with a sequence of shrinking neighbourhoods to a specific point in the space of renormalization schemes in causal perturbation theory. Only the nature and parameterization of these neighbourhoods differs. But the Wilsonian intuition, that as we keep going (either way) we see more and more details of the full theory, is the same in both cases.

    It seems to me that 2. involves a double limit, since the cutoff is also applied order by order, and the limit at each order as the cutoff is removed gives the corresponding order in causal perturbation theory. Thus Wilson's approach is just an approximation to the causal approach, and at a fixed order one always sees fewer details in the Wilsonian approach than in the causal approach.

  2. Urs Schreiber says:
    A. Neumaier

    I don't understand how you can phrase the stated theorem in this way. You seem to say (i) below but the theorem seems to assert (ii) below.

    (i) The space of possible Wilsonian effective field theories, viewed perturbatively, is identical with the collection of pQFTs formulated via causal perturbation theory.

    (ii) The space of possible limits ##\Lambda \to \infty## of the Wilsonian flows is identical with the collection of pQFTs formulated via causal perturbation theory.

    A Wilsonian effective theory has a finite ##\Lambda## and hence seems to me not to be one of the theories defined by causal perturbation theory. In any case, the Wilsonian flow is a flow on a collection of field theories, while causal perturbation theory does not say anything about flows on the space of renormalization parameters.

    I am pointing out that the following are two ways to converge to a fully renormalized pQFT according to the axioms of causal perturbation theory:

    1. inductively in ##k \in \mathbb{N}## choose splittings/extensions of distributions in Epstein-Glaser renormalization as ##k \to \infty##;
    2. consecutively in ##\Lambda \in [0,\infty)## choose counterterms at UV-cutoff ##\Lambda## for ##\Lambda \to \infty##.

    In both cases we zoom in with a sequence of shrinking neighbourhoods to a specific point in the space of renormalization schemes in causal perturbation theory. Only the nature and parameterization of these neighbourhoods differs. But the Wilsonian intuition, that as we keep going (either way) we see more and more details of the full theory, is the same in both cases.

    BTW, that proof in DFKR 14, A.1 is really terse. I have spelled it out a little more: here.

  3. A. Neumaier says:
    Urs Schreiber

    Wilsonian effective field theory flow with cutoff-dependent counterterms is an equivalent way to parameterize the ("re"-)normalization freedom in rigorous pQFT formulated via causal perturbation theory.

    I don't understand how you can phrase the stated theorem in this way. You seem to say (i) below but the theorem seems to assert (ii) below.

    (i) The space of possible Wilsonian effective field theories, viewed perturbatively, is identical with the collection of pQFTs formulated via causal perturbation theory.

    (ii) The space of possible limits ##\Lambda \to \infty## of the Wilsonian flows is identical with the collection of pQFTs formulated via causal perturbation theory.

    A Wilsonian effective theory has a finite ##\Lambda## and hence seems to me not to be one of the theories defined by causal perturbation theory. In any case, the Wilsonian flow is a flow on a collection of field theories, while causal perturbation theory does not say anything about flows on the space of renormalization parameters.

  4. Urs Schreiber says:
    A. Neumaier

    Oh, there was a typo…

    Ah, okay. :-)

    A. Neumaier

    I think atyy's point was that Wilson's conceptual view is in principle nonperturbative.

    To make progress in the discussion we should leave the non-perturbative aspect aside for the moment, and first of all find agreement for pQFT, where we know what we are talking about.

    What I keep insisting is that Wilsonian effective field theory flow with cutoff-dependent counterterms is an equivalent way to parameterize the ("re"-)normalization freedom in rigorous pQFT formulated via causal perturbation theory.

    Namely, the theorem by Dütsch-Fredenhagen et al., which appears with a proof as Theorem A.1 in

    • Michael Dütsch, Klaus Fredenhagen, Kai Keller, Katarzyna Rejzner,
      "Dimensional Regularization in Position Space, and a Forest Formula for Epstein-Glaser Renormalization",
      J. Math. Phy. 55(12), 122303 (2014)
      (arXiv:1311.5424)

    says the following:

    Given a gauge-fixed free field vacuum around which to perturb, and choosing any UV-regularization of the Feynman propagator ##\Delta_F## by non-singular distributions ##\{\Delta_{F,\Lambda}\}_{\Lambda \in [0,\infty)}##, in that

    $$\Delta_F = \underset{\Lambda \to \infty}{\lim} \Delta_{F,\Lambda}$$

    and writing

    $$
    \mathcal{S}_\Lambda(O) := 1 + \frac{1}{i \hbar} O + \frac{1}{2} \frac{1}{(i \hbar)^2} O \star_{F,\Lambda} O + \frac{1}{6} \frac{1}{(i \hbar)^3} O \star_{F,\Lambda} O \star_{F,\Lambda} O + \cdots
    $$

    for the corresponding regularized S-matrix at scale ##\Lambda## (built from the star product that is induced by ##\Delta_{F,\Lambda}##) then:

    1. There exists a choice of regularization-scale-dependent vertex redefinitions ##\{\mathcal{Z}_\Lambda\}_{\Lambda \in [0,\infty)}## (sending local interactions to local interactions), hence of "counterterms", such that the limit
      ##\mathcal{S}_\infty := \underset{\Lambda \to \infty}{\lim} \mathcal{S}_\Lambda \circ \mathcal{Z}_\Lambda##
      exists and is an S-matrix scheme in the sense of causal perturbation theory (hence is Epstein-Glaser ("re"-)normalized);
    2. every Epstein-Glaser ("re"-)normalized S-matrix scheme ##\mathcal{S}## arises this way;
    3. the corresponding Wilsonian effective field theory at scale ##\Lambda## is that with effective (inter)action given by
      ##S_{\text{eff},\Lambda} = \mathcal{S}_\Lambda^{-1} \circ \mathcal{S}_\infty(S_{\text{int}})##.

    This exhibits the choice of scale-dependent effective actions of Wilsonian effective field theory as an alternative way to parameterize the ("re"-)normalization choice in causal perturbation theory.

    See also

    • Michael Dütsch,
      "Connection between the renormalization groups of Stückelberg-Petermann and Wilson",
      Confluentes Mathematici, Vol. 4, No. 1 (2012) 12400014
      (arXiv:1012.5604)
  5. A. Neumaier says:
    Urs Schreiber

    They'd better not…

    Oh, there was a typo; I meant they don't contradict each other.

    Urs Schreiber

    …if both are about the same subject, pQFT.

    But they aren't. I think atyy's point was that Wilson's conceptual view is in principle nonperturbative. The noncontradiction stems from the fact that both lead to valid and time-proved approximations of QFT.

  6. Urs Schreiber says:
    A. Neumaier

    The two points of view contradict each other.

    They'd better not, if both are about the same subject, pQFT.

    There are various ways to parameterize the ("re"-)normalization choices in causal perturbation theory, and one is by Wilsonian flow of the cutoff. This is explained in section 5.2 of

    • Romeo Brunetti, Michael Dütsch, Klaus Fredenhagen,
      "Perturbative Algebraic Quantum Field Theory and the Renormalization Groups",
      Adv. Theor. Math. Physics 13 (2009), 1541-1599
      (arXiv:0901.2038)
  7. A. Neumaier says:
    atyy

    That's an interesting comparison. But maybe this aspect of the Wilsonian viewpoint is different. In the Wilsonian viewpoint, we don't need to know the theory at infinitely high energies, whereas I don't think Scharf's work makes sense unless a theory exists at infinitely high energies.

    Urs Schreiber

    I'd think this is only superficially so. In Epstein-Glaser-type causal perturbation theory (which is what Scharf's textbooks work out, but Scharf is not the originator of these ideas) one has in front of oneself the entire (possibly infinite) sequence of choices of renormalization constants, but one also has complete control over the space of choices, and hence one has directly available the concept "all those pQFTs whose first ##n## renormalization constants have the following fixed values, with the rest being arbitrary". This is exactly the concept of knowing the theory up to that order.

    The two points of view contradict each other. Causal perturbation theory has no effective cutoff and is at fixed loop order defined at all energies. In this sense it exists and gives results that compare (in the case of QED) exceedingly well with experiment. The only question is how accurate the fixed-order results are at energies relevant for experiments. Here numerical results indicate that one never needs to go to more than 4 loops.

    atyy

    But in the Epstein-Glaser theory, no theory is constructed, i.e., the power series are formal series, and it is unclear how to sum them. In contrast, if we use a lattice theory as the starting point for Wilson, then that starting point is at least a well-defined quantum theory.

    Formal power series can be approximately summed by many methods, including Padé approximation, Borel summation, and extensions of the latter to resurgent transseries. The result is always Poincaré invariant and hence in agreement with the principles of relativity; unitarity is guaranteed to the order given, which is usually enough. Thus for practical purposes one has a well-defined theory. Only those striving for rigor need more.
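
    A minimal numerical sketch of this point (an editorial illustration using a textbook example rather than a QFT series): the divergent Euler series ##\sum_n (-1)^n n!\, x^n## is the formal expansion of the convergent integral ##\int_0^\infty e^{-t}/(1+xt)\, dt## (its Borel sum), and a diagonal Padé approximant built from a handful of the divergent coefficients already reproduces that value quite well.

    ```python
    import math
    import numpy as np
    from scipy.integrate import quad
    from scipy.interpolate import pade

    # Resum the divergent Euler series sum_n (-1)^n n! x^n at x = 0.2.
    x = 0.2
    coeffs = [(-1) ** n * math.factorial(n) for n in range(9)]        # a_0 .. a_8
    p, q = pade(coeffs, 4)                                            # [4/4] Pade approximant
    partial = sum(c * x ** n for n, c in enumerate(coeffs))           # naive partial sum
    borel, _ = quad(lambda t: math.exp(-t) / (1 + x * t), 0, np.inf)  # Borel sum

    print(f"naive partial sum: {partial:.6f}")  # visibly drifting away
    print(f"Pade [4/4]       : {p(x) / q(x):.6f}")
    print(f"Borel sum        : {borel:.6f}")
    ```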

    On the other hand, lattice methods don't respect the principles of relativity (not even approximately) unless they are extrapolated to the limit of vanishing lattice spacing and infinite volume. In this extrapolation, all problems reappear that were swept under the carpet through the discretization. The extrapolation limit of lattice QFT – which contains the real physics – is, to the extent we know, as ill-defined as the limit of the formal power series in causal perturbation theory.

    Moreover, concerning the quality of the approximation, lattice QED is extremely poor when compared with few-loop QED, and thus cannot compete. The situation is slightly better for lattice QCD, but there both approaches have up to now fairly poor accuracy (5 percent or so), compared with the 12 relative digits of perturbative QED calculations.

  8. atyy says:
    Urs Schreiber

    I'd think this is only superficially so. In Epstein-Glaser-type causal perturbation theory (which is what Scharf's textbooks work out, but Scharf is not the originator of these ideas) one has in front of oneself the entire (possibly infinite) sequence of choices of renormalization constants, but one also has complete control over the space of choices and hence one has directly available the concept "all those pQFTs whose first ##n## renormalization constants have the following fixed values, with the rest being arbitrary". This is exactly the concept of knowing the theory up to that order.

    But in the Epstein-Glaser theory, no theory is constructed, i.e., the power series are formal series, and it is unclear how to sum them. In contrast, if we use a lattice theory as the starting point for Wilson, then that starting point is at least a well-defined quantum theory.

  9. Urs Schreiber says:
    atyy

    But maybe this aspect of the Wilsonian viewpoint is different. In the Wilsonian viewpoint, we don't need to know the theory at infinitely high energies, whereas I don't think Scharf's work makes sense unless a theory exists at infinitely high energies.

    I'd think this is only superficially so. In Epstein-Glaser-type causal perturbation theory (which is what Scharf's textbooks work out, but Scharf is not the originator of these ideas) one has in front of oneself the entire (possibly infinite) sequence of choices of renormalization constants, but one also has complete control over the space of choices and hence one has directly available the concept "all those pQFTs whose first ##n## renormalization constants have the following fixed values, with the rest being arbitrary". This is exactly the concept of knowing the theory up to that order.

  10. atyy says:
    Urs Schreiber

    I feel that this statement deserves more qualification:

    What Scharf argues is that from just the mathematics of perturbative QFT, there is no technical problem with a "non-renormalizable" Lagrangian: The non-renormalizability simply means (by definition) that infinitely many constants need to be chosen when renormalizing. While this may be undesirable, it is not mathematically inconsistent (unless one adopts some non-classical foundation of mathematics without, say, the axiom of choice; but this is not the issue that physicists are commonly concerned with). Simply make this choice, and that's it then.

    I feel that the more popular Wilsonian perspective on this point is really pretty much the same statement, just phrased in different words: The more popular statement is that gravity makes sense as an effective field theory at any given cutoff energy, and that one needs to measure/fix further counterterms as one increases the energy scale.

    So I feel there is actually widespread agreement on this point, just some difference in terminology.

    That's an interesting comparison. But maybe this aspect of the Wilsonian viewpoint is different. In the Wilsonian viewpoint, we don't need to know the theory at infinitely high energies, whereas I don't think Scharf's work makes sense unless a theory exists at infinitely high energies.

  11. A. Neumaier says:
    Urs Schreiber

    The non-renormalizability simply means (by definition) that infinitely many constants need to be chosen when renormalizing. While this may be undesirable, it is not mathematically inconsistent…

    See also this thread from Physics Stack Exchange, where solutions of an "unrenormalizable" QFT obtained by reparameterizing a renormalizable QFT are discussed.

  12. A. Neumaier says:
    Urs Schreiber

    Hm, I guess Arnold will argue that we can construct choices for these auxiliary functions. There won't be a canonical choice but at least constructions exist and we don't need to appeal to non-constructive choice principles. Okay, I suppose I agree then!

    Yes.

    More specifically, there is no significant difference between choosing from a finite number of finite-dimensional affine spaces in the renormalizable case and choosing from a countable number of finite-dimensional affine spaces in the nonrenormalizable case. The same techniques that apply in the first case to pick a finite sequence of physical parameters (a few dozen in the case of the standard model) that determine a single point in each of these spaces can be used in the second case to pick an infinite sequence of physical parameters. Here a parameter is deemed physical if it could in principle be obtained from sufficiently accurate statistics on collision events or other in-principle measurable information.

    Any specific such infinite sequence provides a well-defined nonrenormalizable perturbative quantum field theory. Thus there is no difficulty in making the choices in very specific ways. As in the renormalizable case, experiments just restrict the parameter region in which the theory is compatible with experiment. Typically, this region constrains the first few parameters a lot and the later ones much less.

    This is precisely the same situation as when we have to estimate the coefficients of a power series of a function ##f(x)## from a finite number of inaccurate function values given together with statistical error bounds.
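
    A small editorial sketch of that analogy (all numbers hypothetical): fit the coefficients of a truncated power series to a few noisy function values near ##x = 0## and propagate the errors. The low-order coefficients come out sharply constrained, while the high-order ones remain essentially undetermined, just as experiments pin down the first few renormalization parameters and say little about the rest.

    ```python
    import numpy as np

    # Estimate power-series coefficients of f(x) from noisy samples near x = 0.
    rng = np.random.default_rng(1)
    true = np.array([1.0, -0.5, 0.25, -0.125, 0.0625])  # hypothetical coefficients
    sigma = 1e-3                                        # measurement error
    xs = np.linspace(0.0, 0.3, 12)                      # sample points
    ys = np.polyval(true[::-1], xs) + rng.normal(0, sigma, xs.size)

    V = np.vander(xs, N=true.size, increasing=True)     # columns: 1, x, x^2, ...
    est, *_ = np.linalg.lstsq(V, ys, rcond=None)        # least-squares fit
    cov = sigma ** 2 * np.linalg.inv(V.T @ V)           # parameter covariance
    for k, (c, s) in enumerate(zip(est, np.sqrt(np.diag(cov)))):
        print(f"c_{k}: {c:+.4f} +/- {s:.4f}  (true {true[k]:+.4f})")
    ```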

  13. Urs Schreiber says:

    I wrote:

    Urs Schreiber

    In the original Epstein-Glaser 73 the choice at order ##\omega## happens on p. 27, where it says "we choose a fixed auxiliary function ##w \in \mathcal{S}(\mathbb{R}^n)## such that…".

    Hm, I guess Arnold will argue that we can construct choices for these auxiliary functions. There won't be a canonical choice but at least constructions exist and we don't need to appeal to non-constructive choice principles. Okay, I suppose I agree then!

  14. Urs Schreiber says:
    A. Neumaier

    I gave the simplest constructive choice that leads to a valid solution, namely setting to zero enough higher order parameters in a given renormalization fixing scheme…

    The space of choices of renormalization parameters at each order is not a vector space, but an affine space. There is no invariant meaning of "setting to zero" these parameters, unless one has already chosen an origin in these affine spaces. The latter may be addressed as a choice of renormalization scheme, but this just gives another name to the choice to be made; it still does not give a canonical choice.

    You know this, but here are pointers to the details for those readers who have not seen this:

    In the original Epstein-Glaser 73 the choice at order ##\omega## happens on p. 27, where it says "we choose a fixed auxiliary function ##w \in \mathcal{S}(\mathbb{R}^n)## such that…". With the choice of this function they build one solution to the renormalization problem at this order (for them a splitting of distributions) which they call ##(T^+, T^-)##. With this "origin" chosen, every other solution of the renormalization at that order is labeled by a vector space of renormalization constants ##c_\alpha## (on their p. 28, after "The most general solution"). It might superficially seem as if we could renormalize canonically by declaring "choose all ##c_\alpha## to be zero". But this is an illusion; the choice is now in the "scheme" ##w## relative to which the ##c_\alpha## are given.

    In the modern reformulation of Epstein-Glaser's work in terms of extensions of distributions in Brunetti-Fredenhagen 00 the analogous step happens on p. 24, in or below equation (38), where at order ##\omega## bump functions ##\mathfrak{w}_\alpha## are chosen. Theorem 5.3 below that then states that with this choice, the space of renormalization constants at that order is given by coefficients relative to these choices ##\mathfrak{w}_\alpha##.

    One may succinctly summarize this statement by saying that the space of renormalization parameters at each order, while not having a preferred element (in particular not being a vector space with a zero element singled out), is a torsor over a vector space, meaning that after any one point is chosen, the remaining points form a vector space relative to this point. That more succinct formulation of theorem 5.3 in Brunetti-Fredenhagen 00 is made for instance as corollary 2.6 on p. 5 of Bahns-Wrochna 12.
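
    In symbols (the standard definition being invoked): a torsor over a vector space ##V## is a set ##R## with an action

    $$R \times V \to R, \qquad (r, v) \mapsto r + v,$$

    that is free and transitive, i.e., for any ##r, r' \in R## there is a unique ##v \in V## with ##r' = r + v##. Any chosen origin ##r_0 \in R## gives a bijection ##R \cong V## via ##r \mapsto r - r_0##, but no origin is singled out beforehand.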

    Hence for a general Lagrangian there is no formula for choosing the renormalization parameters at each order. It is in very special situations only that we may give a formula for choosing the infinitely many renormalization parameters. Three prominent such situations are the following:

    1) if the theory is "renormalizable", in that it so happens that after some finite order the space of choices of parameters contains a single point. In that case we may make a finite number of choices and then the remaining choices are fixed.

    2) If we assume the existence of a "UV-critical hypersurface" (e.g. Nink-Reuter 12, p. 2), which comes down to postulating a finite-dimensional submanifold in the infinite-dimensional space of renormalization parameters and postulating/assuming that we make a choice on this submanifold. Major extra assumptions here. If they indeed happen to be met, then the space of choices is drastically shrunk.

    3) We assume a UV-completion by a string perturbation series. This only works for field theories which are not in the "swampland" (Vafa 05). It transforms the space of choices of renormalization parameters into the space of choices of full 2d SCFTs of central charge 15, the latter also known as the "perturbative landscape". Even though this space received a lot of press, it seems that way too little is known about it to say much at the moment. But that's another discussion.

    There might be more, but the above three seem to be the most prominent ones.

  15. Urs Schreiber says:

    I'd eventually enjoy a more fine-grained technical discussion of some of these matters, to clear up the issues. But for the moment I'll leave it at that.

    By the way, not only may we view the string perturbation series as a way to choose these infinitely many renormalization parameters for gravity by way of other data, but the same is also true for "asymptotic safety". Here it's the postulate of being on a finite-dimensional subspace in the space of couplings that amounts to the choice.

  16. A. Neumaier says:
    Urs Schreiber

    Recently there is a suggestion on how to relate these to causal perturbation theory/pAQFT:

    This is nice in that it illustrates the concepts for free scalar fields, so that one can understand them without all the technicalities that come later with the renormalization. I don't yet have a good feeling for factorization algebras, though.

  17. A. Neumaier says:
    Urs Schreiber

    Not sure what this is debating, I suppose we agree on this. What I claimed is that the choice of renormalization constants in causal perturbation theory (in general infinite) corresponds to the choice of couplings/counterterms in the Wilsonian picture. It's two different perspectives on the same subject: perturbative QFT.

    Wilson's perspective is intrinsically approximate; changing the scale changes the theory. This is independent of renormalized perturbation theory (whether causal or not), where the theory tells us that there is a vector space of parameters from which to choose one point that defines the theory. The vector space is finite-dimensional iff the theory is renormalizable.

    In the latter case we pick a parameter vector by calculating its consequences for observable behavior and matching as many key quantities as the vector space of parameters has dimensions. This is deemed sufficient and needs no mathematical axiom of choice of any sort. Instead it involves experiments that restrict the parameter space to sufficiently narrow regions.

    In the infinite-dimensional case the situation is similar, except that we would need to match infinitely many key quantities, which we do not (and never will) have. This just means that we are not able to tell which precise choice Nature is using.

    But we don't know this anyway for any physical theory – even in QED, the best theory we ever had, we know Nature's choice only to 12 digits of accuracy or so. Nevertheless, QED is very predictive.

    Infinite dimensions do not harm predictability elsewhere in physics. In fluid dynamics, the solutions of interest belong to an infinite-dimensional space. But we are always satisfied with finite-dimensional approximations – the industry pays a lot for finite element simulations because their results are very useful in spite of their approximate nature. Thus there is nothing bad in not knowing the infinite-dimensional details as long as we have good enough finite-dimensional approximations.

  18. A. Neumaier says:
    Urs Schreiber

    The space of choices is of finite dimension in each degree, but it is not a finite set in each degree. In general, given a function ##p : S \to \mathbb{Z}##, one needs a choice principle to pick a section unless there is secretly some extra structure around. Maybe that's what you are claiming.

    This sounds like you are claiming that there is a preferred section (zero section), when restricting to large elements. This doesn't seem to be the case to me, in general. But maybe I am missing something.

    I am claiming that each particular choice made by a particular formula gives a valid solution. In particular, I gave the simplest constructive choice that leads to a valid solution, namely setting to zero enough higher-order parameters in a given renormalization fixing scheme (one that relates the choices available to values of higher-order coefficients in the expansion of some observables, just like the standard renormalization scales are fixed by matching physical constants such as masses, charges, and the gravitational constant). This gives one constructive solution for each renormalization fixing scheme, each of them for the foreseeable future consistent with the experiments.

    There are of course many other constructive solutions, but there is no way to experimentally distinguish between them in my lifetime. Thus my simple solution is adequate and, in view of Ockham's razor, best. You may claim that the dependence on the renormalization fixing scheme is ugly, but this doesn't matter – Nature's choices don't have to be beautiful; only the general theory behind them should have this property.

    Maybe there are additional principles (such as string theory) giving extra structure that would lead to other choices, but being experimentally indistinguishable in the foreseeable future, there is no compelling reason to prefer them unless working with them is considerably simpler than working with my simple recipe.

  19. vanhees71 says:
    A. Neumaier

    In causal perturbation theory there is no cutoff, so Wilson's point of view is less relevant. Everything is constructed exactly; there is nothing effective. Gravity is no exception! Of course one can still make approximations to simplify a theory to an approximate effective low energy theory in the Wilson sense, but this is not intrinsic in the causal approach. (Nobody thinks of power series as being only effective approximations of something that fundamentally should be polynomials in a family of fundamental variables.)

    Well, conventional BPHZ also has no UV cutoff but a renormalization scale only.

    Of course, a theory which is not Dyson renormalizable (like, e.g., chiral perturbation theory) necessarily needs a kind of "cut-off scale", since to make sense of the theory you have to read it as an expansion in powers of energy-momentum. Since it's a low-energy theory, you need to specify the "large scale" (in ##\chi##PT it's ##4\pi f_\pi \simeq 1\;\text{GeV}##).

  20. Urs Schreiber says:
    A. Neumaier

    Yes. The Stueckelberg renormalization group is what is left from the Wilson renormalization semigroup when no approximation is permitted. In causal perturbation theory one only has the former. It describes a parameter redundancy of any fixed theory.

    Approximating a theory by a simpler one to get better numerical access is a completely separate thing from constructing the theory in the first place. Causal perturbation theory clearly separates these issues, concentrating on the second.

    Not sure what this is debating, I suppose we agree on this. What I claimed is that the choice of renormalization constants in causal perturbation theory (in general infinite) corresponds to the choice of couplings/counterterms in the Wilsonian picture. It's two different perspectives on the same subject: perturbative QFT.

  21. Urs Schreiber says:
    A. Neumaier

    The parameters come with a definite grading and finitely many parameters for each grade, …

    The space of choices is of finite dimension in each degree, but it is not a finite set in each degree. In general, given a function ##p : S \to \mathbb{Z}##, one needs a choice principle to pick a section unless there is secretly some extra structure around. Maybe that's what you are claiming.

    A. Neumaier

    …hence one can make the choice constructibly (in many ways, e.g., choosing all parameters of large grade as zero).

    This sounds like you are claiming that there is a preferred section (zero section), when restricting to large elements. This doesn't seem to be the case to me, in general. But maybe I am missing something.

  22. A. Neumaier says:
    Urs Schreiber

    It's two different perspectives on the same phenomenon of pQFT. This is discussed in section 5.2 of Brunetti-Dütsch-Fredenhagen 09.

    Yes. The Stueckelberg renormalization group is what is left from the Wilson renormalization semigroup when no approximation is permitted. In causal perturbation theory one only has the former. It describes a parameter redundancy of any fixed theory.

    Approximating a theory by a simpler one to get better numerical access is a completely separate thing from constructing the theory in the first place. Causal perturbation theory clearly separates these issues, concentrating on the second.

  23. A. Neumaier says:
    Urs Schreiber

    Sure, but nevertheless, it needs a choice axiom, here the axiom of countable choice.

    No. The parameters come with a definite grading and finitely many parameters for each grade, hence one can make the choice constructively (in many ways, e.g., by forcing all parameters of large grade to vanish).

    Urs Schreiber

    Don't confuse the way to make a choice with the space of choices. A formula is a way to write down a choice. But there are still many formulas.

    Of course. But making choices does not require a nonconstructive axiom. Each of the possible choices deserves to be called a possible theory of quantum gravity, so there are many testable constructive choices (and possibly some nonconstructive ones if one admits a corresponding axiom).

    Making the right choice is, as with any theory building, a matter of matching Nature through experiment. But of course, given our limited capabilities, there is not much to distinguish different proposed formulas for choosing the parameters; see C.P. Burgess, Quantum Gravity in Everyday Life: General Relativity as an Effective Field Theory. Only one new parameter (the coefficient of curvature^2) appears at one loop, where Newton's constant of gravitation becomes a running coupling constant with
    $$G(r) = G - \frac{167}{30\pi} \frac{G^2}{r^2} + \cdots$$
    in terms of a renormalization length scale ##r##; the correction is already below the current observability limit.

    C.P. Burgess (Section 4.1)

    Numerically, the quantum corrections are so minuscule as to be unobservable within the solar system for the foreseeable future. Clearly the quantum-gravitational correction is numerically extremely small when evaluated for garden-variety gravitational fields in the solar system, and would remain so right down to the event horizon even if the sun were a black hole. At face value it is only for separations comparable to the Planck length that quantum gravity effects become important. To the extent that these estimates carry over to quantum effects right down to the event horizon on curved black hole geometries (more about this below), this makes quantum corrections irrelevant for physics outside of the event horizon, unless the black hole mass is as small as the Planck mass.
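
    A back-of-the-envelope check of these estimates (an editorial sketch in natural units where ##G = \ell_P^2##, so the relative one-loop correction from the running ##G(r)## above is ##\frac{167}{30\pi}(\ell_P/r)^2##):

    ```python
    import math

    # Relative size of the one-loop correction to Newton's constant at distance r,
    # from the running G(r) quoted above (natural units, G = l_Planck^2).
    l_planck = 1.616e-35              # Planck length in metres
    for r in (1.0, 1.496e11):         # 1 metre, and 1 astronomical unit
        rel = 167 / (30 * math.pi) * (l_planck / r) ** 2
        print(f"r = {r:.3e} m : relative correction ~ {rel:.1e}")
    ```

    At one metre the correction is of order ##10^{-70}##, consistent with Burgess's remark.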

  24. Urs Schreiber says:

    I expect that an appropriate 4-dimensional generalization of vertex algebras will play a role in giving rigorous foundations for operator product expansions.

    Yes, maybe a key problem is to understand the relation between the Haag-Kastler axioms (local nets) for the algebras of quantum observables and similar axioms for the operator product expansion. These days there is a growing community trying to phrase QFT in terms of factorization algebras, and since these generalize vertex operator algebras in 2d, they are to be thought of as formalizing OPEs in Euclidean (Wick-rotated) field theory. Recently there is a suggestion on how to relate these to causal perturbation theory/pAQFT:

    but I suppose many things still remain to be understood.

  25. Urs Schreiber says:
    A. Neumaier

    This has nothing to do…

    Careful with sweeping statements.

    A. Neumaier

    …the number of constants to choose is countably infinite only.

    Sure, but nevertheless, it needs a choice axiom, here the axiom of countable choice. Not that it matters for the real point of concern in physics; I just mentioned it for completeness.

    A. Neumaier

    The complaint that power series are not predictive because one needs infinitely many parameters to specify them is completely unfounded. It is well-known how to specify infinitely many parameters by a finite formula!

    Don't confuse the way to make a choice with the space of choices. A formula is a way to write down a choice. But there are still many formulas.

    Incidentally, this is what the string perturbation series gives: a formula for producing a certain choice of the infinitely many counterterms in (some extension of) perturbative gravity. To some extent one may think of perturbative string theory as parameterizing (part of) the space of choices in choosing renormalization parameters for gravity by 2d SCFTs. If these in turn arise as sigma models, this gives a way to parameterize these choices by differential geometry. It seems that the subspace of choices thus parameterized is still pretty large, though; despite much discussion, little is known for sure about this.

    A. Neumaier

    This is not really a problem.

    Not in perturbation theory, but that's clear.

  26. A. Neumaier says:
    Urs Schreiber

    The non-renormalizability simply means (by definition) that infinitely many constants need to be chosen when renormalizing. While this may be undesirable, it is not mathematically inconsistent (unless one adopts some non-classical foundation of mathematics without, say, the axiom of choice; but this is not the issue that physicists are commonly concerned with).

    This has nothing to do with the axiom of choice; the number of constants to choose is countably infinite only. The level of mathematical and physical consistency is precisely the same as when, in a context where people are used to working with polynomials defined by finitely many parameters, someone suggests using power series instead. The complaint that power series are not predictive because one needs infinitely many parameters to specify them is completely unfounded. It is well-known how to specify infinitely many parameters by a finite formula!

    Urs Schreiber

    I feel that the more popular Wilsonian perspective on this point is really pretty much the same statement, just phrased in different words: The more popular statement is that gravity makes sense as an effective field theory at any given cutoff energy, and that one needs to measure/fix further counterterms as one increases the energy scale.

    In causal perturbation theory there is no cutoff, so Wilson's point of view is less relevant. Everything is constructed exactly; there is nothing effective. Gravity is no exception! Of course one can still make approximations to simplify a theory to an approximate effective low energy theory in the Wilson sense, but this is not intrinsic in the causal approach. (Nobody thinks of power series as being only effective approximations of something that fundamentally should be polynomials in a family of fundamental variables.)

    Urs Schreiber

    The true issue of quantizing gravity is of course that the concept of background causality structure as used in Epstein-Glaser or Haag-Kastler does not apply to gravity, since for gravity the causal structure depends on the fields.

    This is not really a problem. It is well-known that, just as massless spin-1 quantization produces gauge invariance, so massless spin-2 quantization produces diffeomorphism invariance. Hence as long as two backgrounds describe the same smooth manifold when the metric is ignored, they are for constructive purposes equivalent. Thus one may simply choose at each point a local coordinate system consisting of orthogonal geodesics, obtaining a Minkowski parameterization in which one can quantize canonically. Locality and diffeomorphism invariance will make the construction behave correctly in every other frame.

  27. Urs Schreiber says:
    A. Neumaier

    Scharf’s work also shows that there seems nothing wrong with canonically quantized gravity.I feel that this statement deserves more qualification:

    What Scharf argues is that from just the mathematics of perturbative QFT, there is no technical problem with a "non-renormalizable" Lagrangian: The non-renormalizability simply means (by definition) that infinitely many constants need to be chosen when renormalizing, and while this may be undesirable, it is not mathematically inconsistent (unless one adopts some non-classical foundation of mathematics without, say, the axiom of choice; but this is not the issue that physicists are commonly concerned with). Simply make this choice, and that's it then.

    I feel that the more popular Wilsonian perspective on this point is really pretty much the same statement, just phrased in different words: The more popular statement is that gravity makes sense as an effective field theory at any given cutoff energy, and that one needs to measure/fix further counterterms as one increases the energy scale.

    So I feel there is actually widespread agreement on this point, just some difference in terminology. But the true issue is elsewhere: namely, the above statements apply to perturbation theory about a fixed gravitational background. The true issue of quantizing gravity is of course that the concept of background causality structure as used in Epstein-Glaser or Haag-Kastler does not apply to gravity, since for gravity the causal structure depends on the fields. For this simple reason it is clear that beyond perturbation theory, gravity definitely does not have a "canonical quantization", if by this one means something like the established causal perturbation theory.

    Instead, if quantum gravity really does exist as a quantum field theory (instead of, say, as a holographic dual of a quantum field theory), then this necessarily needs some slightly more flexible version of the Epstein-Glaser and/or Haag-Kastler-type axioms on causality: One needs to have a concept of causality that depends on the fields itself.

    I am aware of one program aiming to address and solve this:

    I think this deserves much more attention.

  28. A. Neumaier says:
    DrChinese

    Nice definitions. I assume you consider an entangled system (of 2 or more particles) to be an "extended object", correct?

    Yes.

    Even a single particle is a slightly extended object. In quantum field theory, there are no point particles – only "pointlike" particles (i.e., particles obtained from point particles through renormalization, which makes them slightly extended).

    On the other hand, entangled systems of 2 particles can be very extended objects, if the distance between the centers of mass of the particles is large, as in long-distance entanglement experiments. But these very extended objects are very fragile, too (metastable in the thermal interpretation, due to decoherence by the environment), and it takes a lot of experimental expertise to maintain them in an entangled state.

  29. DrChinese says:
    A. Neumaier

    See my post on extended causality.

    … Here are the definitions:

    • Point causality: Properties of a point object depend only on its closed past cones, and can influence only its closed future cones.
    • Extended causality: Joint properties of an extended object depend only on the union of the closed past cones of their constituent parts, and can influence only the union of the closed future cones of their constituent parts.
    • Separable causality: Joint properties of an extended object consist of the combination of properties of their constituent points.

    I believe that only extended causality is realized in Nature. It can probably be derived from relativistic quantum field theory. If this is true, there is nothing acausal in Nature. In any case, causality in this weaker, much more natural form is not ruled out by current experiments.

    Nice definitions. I assume you consider an entangled system (of 2 or more particles) to be an "extended object", correct?

  30. A. Neumaier says:
    ftr

    Arnold, I am interested in your view on EPR.

    See the following extended discussions:

    https://www.physicsforums.com/threads/an-abstract-long-distance-correlation-experiment.852684/

    https://www.physicsforums.com/threa…is-not-weird-unless-presented-as-such.850860/

    https://www.physicsforums.com/threads/collapse-from-unitarity.860627/

    ftr

    The physical alternative to "virtual particles" seems to be a taboo. It seems to me that nonlocality is of the essence.

    The physical alternative is their interpretation as contributions to an infinite series of scattering amplitude approximations. There is nothing nonlocal in relativistic QFT. See my post on extended causality.

  31. bhobba says:

    I can't express my gratitude enough for this extremely insightful and informative interview.

    Arnold has always been one of my favorite posters, from whom I have learnt an enormous amount.

    In particular, his efforts to dispel, and explain in detail, myths like the reality of virtual particles are much appreciated, and IMHO very important because of the huge amount of confusion about the issue, especially amongst beginners.

    Thanks
    Bill
