Highest loop order of experimental relevance?

  • #1
Urs Schreiber
What is the highest loop order in standard model scattering computations that still contributes a measurable effect seen in past and present particle collider experiments?

In other words, to which order are loop corrections necessary for accounting for observed high energy physics?

I expect it is order-1 for most computations and order-2 in some rare cases, and not higher. Or are there notable exceptions? What would be a good reference to check this?
 
  • #2
Some Higgs calculations are done at NNNLO to improve the accuracy. The calculations for the electron g-factor work with up to 5 loops (12672 diagrams), but that is not a collider experiment.

In general NNLO is strongly preferred where available, as often the theoretical uncertainties are larger than the experimental ones.
 
  • #3
Thanks, mfb! Just the kind of reply that I was hoping for.

Here is a related question: Given that the perturbation series is non-convergent and at best asymptotic, is there a general consensus or idea on the order up to which we have the right to expect the result to improve, before it starts diverging?
 
  • #4
You could argue that we reached this point in low-energy QCD already, where perturbative approaches don't work.

Apart from that, we are far away from divergence. Here is an estimate (from a Physics.SE thread): the QED contributions stop getting smaller at around ##n = 430##, where their contribution is of order ##10^{-187}##.
The problem arises earlier in (high-energy) QCD, but still way beyond the reach of calculations.
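(A quick numerical check of this estimate, as a sketch: assume the toy model in which the ##n##-th order QED contribution scales like ##n!\,(\alpha/\pi)^n## with all prefactors set to 1. The model and the constant value are illustrative assumptions, not part of the statement above.)

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant (approximate)

def log10_term(n):
    """log10 of the toy-model term a_n = n! * (ALPHA/pi)**n."""
    # math.lgamma(n + 1) == ln(n!), which avoids overflow at large n
    return (math.lgamma(n + 1) + n * math.log(ALPHA / math.pi)) / math.log(10)

# Terms shrink while (n + 1) * ALPHA / pi < 1 and grow afterwards.
n_min = min(range(1, 1000), key=log10_term)
print(n_min, round(log10_term(n_min)))  # prints: 430 -185
```

This reproduces the quoted turnaround at ##n \approx 430## and a minimal term of roughly ##10^{-185}##, the same ballpark as the ##10^{-187}## above; the residual gap is down to the neglected prefactors.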
 
  • #5
Thanks once more.

Now I remember having seen that Physics.SE thread before (It seems I had even commented on it.)

I'd like to check where that estimate of order 430 comes from. The author Diego Mazón indicates that he thinks of it as the ratio ##\pi/\alpha\,,## so probably he has some estimate of small phases in mind? But is that how one should determine this number? I'd think it can be very subtle to determine the point where an asymptotic series starts going bad. In particular it is in general unrelated to the point where the contributions cease to decrease (if they ever do).

(For instance in ##\sum_{n = 1}^\infty 1/n## the contributions keep decreasing, and yet there is no point at which this can be truncated to give an approximation to anything.)
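(To make that point quantitative, a standard fact not stated in the thread: the partial sums grow without bound,

$$\sum_{n=1}^{N} \frac{1}{n} = \ln N + \gamma + \mathcal{O}(1/N) \;\xrightarrow{\;N \to \infty\;}\; \infty,$$

so even though each added term shrinks, the truncations never settle near any value.)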
 
  • #6
##\displaystyle \left(\frac{\alpha}{\pi}\right)^n## is simply the power in the expansion.
##(2n-1)!! \approx \sqrt{(2n)!}## looks like the number of diagrams.

Both up to some numerical prefactors I don't know about.
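(To spell out how these two factors combine, as a sketch using an ##n!## growth model rather than the exact diagram count: the ratio of successive terms ##a_n \sim n!\,(\alpha/\pi)^n## is

$$\frac{a_{n+1}}{a_n} = (n+1)\,\frac{\alpha}{\pi},$$

so the terms keep shrinking as long as ##n + 1 < \pi/\alpha \approx 430## and grow afterwards. This is where the ##n \approx 430## estimate above comes from, up to the same unknown prefactors.)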
Urs Schreiber said:
I'd think it can be very subtle to determine the point where an asymptotic series starts going bad. In particular it is in general unrelated to the point where the contributions cease to decrease (if they ever do).
I'm not aware of mathematical proofs that the series do indeed go towards the physical value in the range where contributions get smaller, but they do.
 
  • #7
mfb said:
I'm not aware of mathematical proofs that the series do indeed go towards the physical value in the range where contributions get smaller, but they do.

Sorry, how do you know that? What's a reference for the claim that you are thinking of here?
 
  • #8
Well, the calculations agree with experimental results in the range accessible so far, and it is what you would generally expect from perturbation theory.
 
  • #9
mfb said:
Well, the calculations agree with experimental results in the range accessible so far, and it is what you would generally expect from perturbation theory.

But that refers to calculations to some very low order, not calculations in the full range where the contributions get smaller. Or does it? Where is that computation to order 430 done?
 
  • #10
We can't do computations to 430 orders, of course, but all these high orders are tiny (and that we know without explicit calculations). The 5th order in QED for g-2 is ##6\cdot 10^{-13}##, and relevant only for the electron g-2.
 
  • #11
mfb said:
We can't do computations to 430 orders, of course, but all these high orders are tiny (and that we know without explicit calculations). The 5th order in QED for g-2 is ##6\cdot 10^{-13}##, and relevant only for the electron g-2.

Thanks for your patience, sorry for being slow. Please bear with me: I gather I am missing one bit of information, which you are taking for granted.
Could you remind me of the order of magnitude of ##\left|\,\text{result}_{\text{experiment}} - \text{result}_{\text{theory}}\,\right|##
and of its uncertainty for the case at hand? I suppose you are saying that ##430 \cdot 6 \cdot 10^{-13}## is much smaller than both of these?
 
  • #12
Urs Schreiber said:
Could you remind me of the order of magnitude of ##\left|\,\text{result}_{\text{experiment}} - \text{result}_{\text{theory}}\,\right|##
and of its uncertainty for the case at hand?
That depends on the measurement.

The experimental value for (g-2)/2 is 0.001 159 652 180 73 (28), an absolute uncertainty of ##2.8\cdot 10^{-13}##. http://gabrielse.physics.harvard.edu/gabrielse/papers/2011/ElectronMagneticMomentMeasurements.pdf.
I misinterpreted the abstract of the theory paper: the 5-loop contribution number is for g/2, not for g, so ##6\cdot 10^{-13}## means it is twice the experimental uncertainty. It has a 6% relative uncertainty. The uncertainty on the 4-loop contribution is a bit larger than that (##\sim 6\cdot 10^{-14}##), but still much smaller than the experimental uncertainty.
It is important to consider the 5-loop contribution to compare theory and experiment.

Edit: Missed minus signs in exponents
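(Putting the numbers quoted in this post side by side, as a sketch; all values are the ones stated above:)

```python
# Electron g-2 numbers as quoted in this post
exp_unc = 2.8e-13      # experimental uncertainty on (g-2)/2
five_loop = 6e-13      # size of the 5-loop QED contribution (for g/2)
four_loop_unc = 6e-14  # uncertainty of the 4-loop contribution

print(f"5-loop term / exp. uncertainty:  {five_loop / exp_unc:.1f}")      # 2.1
print(f"4-loop error / exp. uncertainty: {four_loop_unc / exp_unc:.2f}")  # 0.21
```

So the 5-loop term itself is resolvable by the measurement, while the uncertainty carried by the 4-loop term is safely below it.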
 
  • #13
mfb said:
That depends on the measurement.

The experimental value for (g-2)/2 is 0.001 159 652 180 73 (28), an absolute uncertainty of ##2.8\cdot 10^{-13}##. http://gabrielse.physics.harvard.edu/gabrielse/papers/2011/ElectronMagneticMomentMeasurements.pdf.
I misinterpreted the abstract of the theory paper: the 5-loop contribution number is for g/2, not for g, so ##6\cdot 10^{-13}## means it is twice the experimental uncertainty. It has a 6% relative uncertainty. The uncertainty on the 4-loop contribution is a bit larger than that (##\sim 6\cdot 10^{-14}##), but still much smaller than the experimental uncertainty.
It is important to consider the 5-loop contribution to compare theory and experiment.

Thanks! But now help me: You seem to be saying that even the 4th order contribution is not much smaller than the experimental precision and uncertainty. This makes me ask again how you know that adding the 5th, 6th, 7th etc. contributions will necessarily further improve the match to experiment?

Or if you feel my questions are not going in the right direction, could you lay out again from scratch the argument by which you conclude that all the first 430 loop orders should keep improving the match between theory and experiment (in the given example). Because that's what I am still missing. Feel free to tell me that I am missing the obvious, but please do state the obvious then. Thanks!
 
  • #14
The 4th order contribution is large (~50 times the experimental uncertainty). Its uncertainty is small (~1/10 times the experimental uncertainty).

Without the 4th order, the theoretical prediction wouldn't match at all; with it but without the 5th order, there would still be a notable tension; only with the 5th order do we get good agreement. The 6th order is again a factor ~500 smaller, so it won't play a role for quite some time.
 
  • #15
mfb said:
The 6th order is again a factor ~500 smaller, so it won't play a role for quite some time.

Thanks for your replies, I'll stop insisting and thank you for your patience. In closing, I'll just point out once more that the smallness alone of the contribution at some loop order does not address the question which I tried to raise in #3: The contributions beyond some order may be tiny, and still push the theoretical result away from the physical value, instead of towards it. But probably it is just not known for available theories at which order this happens.
 
  • #16
We know the value goes away from the physical value at some point, but it is not expected that this happens before the point where the contributions stop getting smaller, and no instance where this would happen early has been observed so far.
 
  • #17
mfb said:
it is not expected that this happens before the point where the contributions stop getting smaller

Thanks. What is the basis of this expectation?
 
  • #18
General experience. See above, I'm not aware of mathematical proofs of it.
 
  • #19
mfb said:
General experience. See above, I'm not aware of mathematical proofs of it.

That's fine, I would be content with general experience. But which experiences are you referring to, could you give me pointers? Or do you mean "general feeling" rather than "general experience"?
 
  • #20
"That's what the theorists working on these calculations say".
I'm not a theorist.
 
  • #21
Urs Schreiber said:
What is the highest loop order in standard model scattering computations that still contributes a measurable effect seen in past and present particle collider experiments?

In other words, to which order are loop corrections necessary for accounting for observed high energy physics?

I expect it is order-1 for most computations and order-2 in some rare cases, and not higher. Or are there notable exceptions? What would be a good reference to check this?

It is also important to note that the relevance is different in different parts of the Standard Model. Electroweak processes converge much more quickly than QCD processes.

As noted in previous posts in this thread, state-of-the-art QED calculations reach about 13 orders of magnitude of precision with five loops, and experimental measurements can probe QED observables to comparable precision. Two-loop calculations in QED are accurate to roughly one part in ##10^{3}## (better than four-loop calculations in QCD).

Leading order results in QCD are accurate to a factor of 2. An NLO result in QCD (i.e. 2-loop) gets you a result with single digit percentage precision, an NNLO result in QCD (i.e. 3-loop) gets you a result with 1% precision or so (or worse, e.g. this NNLO result has 3% precision), and an NNNLO result in QCD (i.e. 4-loop), which is about as good as it gets for most purposes in QCD, gets you to perhaps 0.2% to 0.5% precision (for example here). This may be generous. As noted in a 2015 review paper:

For example, the dilepton invariant mass distribution has been measured for masses between 15 and 2000 GeV, covering more than 8 orders of magnitude in cross section. NNLO QCD predictions, together with modern PDF sets and including higher-order electroweak and QED final-state radiation corrections, describe the data to within 5–10% over this large range, whereas NLO predictions show larger deviations, unless matched to a parton shower.

As of 2016, five loop calculations in QCD were vaporware for anything other than the beta function of the QCD coupling constant.

In QCD, the limiting factor is primarily computational capacity, because QCD calculations converge much, much more slowly, with higher-order terms having more relevance to the outcome. Even if you could do 10-loop calculations in QCD (far beyond any reasonable near-term possibility), extrapolating the gains obtained from additional loops so far, the precision you would get would be comparable to perhaps 3-loop calculations in QED.

We can do measurements of observables in QCD that are thousands or even millions of times as precise as our theoretical computations in some cases. For example, we can compute the proton mass from first principles in QCD to better than 1% precision but not yet to 0.1% precision (it was about 2% in 2008), yet we can measure it experimentally to eleven or twelve significant digits. (The far more obscure bottom lambda baryon is still known experimentally to five significant digits, while Delta and Omega baryon masses are computed from first principles to about 0.2% precision.)

On the other hand, there are other observables (like the strength of the QCD coupling constant at high energies, also reviewed here) where the precision of the experimental measurements can be as poor as low single-digit percentages. Poor precision in the measurement of this experimental constant is another reason that QCD calculations are so imprecise.

Similarly, while in principle you can compute parton distribution functions in QCD from first principles, in practice physicists always use experimental data fitted to hypothetical functional forms that are motivated by, but not rigorously derived from, pure QCD equations and fundamental constants. (The handbook for beginners in the subject from CERN, available online, runs to something like 194 pages of detailed discussion of the sourcing of current PDF data sets and the fine points of the pros and cons of different sources.)

Weak force calculations are in between, mostly because imprecision in experimental measurements of the relevant physical constants limits the accuracy of any weak force calculation to about five significant digits (see the Particle Data Group data on W and Z bosons), no matter how many loops you take the theoretical calculation out to; any additional precision would be spurious. This is the main area of the Standard Model where you wouldn't do calculations to as many loops as your computational capacity allows in a reasonable time, because more precise theoretical calculations wouldn't be relevant. Also, weak force calculations often have some QCD contributions, and the recommended practice for what are primarily weak force calculations is to include QCD corrections to at least NLO (2-loop), even though the QCD contributions are a minor, second-order factor.
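(A toy illustration of the convergence gap described above, as a sketch: compare the naive expansion parameters ##\alpha/\pi## for QED and ##\alpha_s/\pi## for QCD, with all series coefficients set to 1. The ##\alpha_s## value is the usual ballpark figure at the Z mass; neither number is taken from this post.)

```python
import math

alpha_em = 1 / 137.036  # QED fine-structure constant
alpha_s = 0.118         # QCD coupling at the Z mass (ballpark)

# Naive size of the n-th order term in each expansion
for n in range(1, 6):
    qed = (alpha_em / math.pi) ** n
    qcd = (alpha_s / math.pi) ** n
    print(f"{n} loops:  QED ~ {qed:.1e}   QCD ~ {qcd:.1e}")
```

Each QED order buys nearly three decimal digits of precision, each QCD order less than a digit and a half, which is the scaling behind the statement that many QCD loops are needed to rival a few QED loops.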
 
  • #22
mfb said:
"That's what the theorists working on these calculations say".

Thanks. Might you have a pointer to a theorist saying this? Might there be a printed record of this saying?
 
  • #23
ohwilleke said:
It is also important to note that the relevance is different in different parts of the Standard Model. Electroweak processes converge much more quickly than QCD processes. [...]

Thanks for all the detailed additional pointers! That's useful.
 
  • #24
mfb said:
I'm not aware of mathematical proofs of it.

I don't think there are any, because there are counterexamples to these "rules of thumb." I would say the rules of thumb are: a) each higher order makes a contribution smaller than the previous order, and b) an estimate of the uncertainty due to uncalculated higher orders is the variation of the quantity you are calculating with the renormalization scale (since that's unphysical, at infinite order it must be zero).

The counterexample I am thinking of is heavy flavor production. The NLO contributions are about the same size as the LO contributions, and the scale dependence is actually worse at NLO than LO.
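(A sketch of the logic behind rule b), in generic notation not taken from this post: for an observable whose perturbative expansion has been computed through order ##\alpha_s^k##, the renormalization group forces the residual scale dependence to start at the first uncalculated order,

$$\mu^2 \frac{\mathrm{d}}{\mathrm{d}\mu^2}\, O_{\text{computed}}(\mu) = \mathcal{O}\!\left(\alpha_s^{\,k+1}\right),$$

since the exact, all-orders observable is ##\mu##-independent. Varying ##\mu## over a conventional range therefore probes the size of the missing higher orders, which is why the heavy flavor example above, where the scale dependence worsens at NLO, is a genuine warning sign.)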
 
  • #25
Urs Schreiber said:
Thanks. Might you have a pointer to a theorist saying this? Might there be a printed record of this saying?
Theorists wouldn't calculate higher orders if they didn't expect this.
I'm not aware of explicit statements like this, but the whole idea of calculating higher orders is based on that expectation.
 
  • #26
Vanadium 50 said:
since that's unphysical, at infinite order it must be zero

Isn't it a little more subtle than this makes it sound? Because as the order goes to infinity, the perturbation series is not to be expected to approximate the true physical value, but to be infinitely far from it! The perturbation series of realistic QFTs is expected to be divergent (this goes back to Dyson 52.)

Now for a general divergent formal power series, even summing up the first few terms makes no particular sense. But since we may assume that the perturbation series is the Taylor expansion of an actual smooth function (namely the non-perturbative theory) it is plausible to expect that it is, even if not convergent, an "asymptotic series". If so, this sort of guarantees that the first few terms (depending on "how small Planck's constant really is") give a good approximation, but it still means that beyond that the series will diverge arbitrarily far from the desired physical value.
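(For reference, the standard definition being invoked here, in generic notation: a series ##\sum_n a_n g^n## is asymptotic to ##f(g)## as ##g \to 0^+## if for every fixed ##N##

$$\left| f(g) - \sum_{n=0}^{N} a_n g^n \right| = \mathcal{O}\!\left(g^{N+1}\right) \quad \text{as } g \to 0^+,$$

which constrains fixed-order truncations at small coupling but says nothing about the ##N \to \infty## limit at fixed ##g##. For factorially growing ##a_n##, truncating at the smallest term typically leaves an irreducible error of order ##e^{-c/g}##.)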
 
  • #27
mfb said:
Theorists wouldn't calculate higher orders if they didn't expect this.

All right, thanks for finally saying that this is your reasoning!
 
  • #28
Urs Schreiber said:
Isn't it a little more subtle than this makes it sound?

I dunno. It's a counterexample, after all.
 
  • #29
Vanadium 50 said:
I dunno. It's a counterexample, after all.

Ah, sorry about this; now I see that I misread what you wrote. Actually you are saying precisely what I am after. Okay, so your hint is:

Vanadium 50 said:
...heavy flavor production. The NLO contributions are about the same size as the LO contributions, and the scale dependence is actually worse at NLO than LO.

Could you point me to a good reference for this?
 
  • #30
By the way, the discussion here reminds me of the following quotes from the (very nice) review of asymptotic perturbation series theory in Suslov 05 (please take this in a good spirit, I don't mean to bug anyone):

Classical books on diagrammatic techniques describe the construction of diagram series as if they were well defined. However, almost all important perturbation series are hopelessly divergent since they have zero radii of convergence. The first argument to this effect was given by Dyson with regard to quantum electrodynamics.
[...]
Even though Dyson’s argument is unquestionable, it was hushed up or decried for many years: the scientific community was not ready to face the problem of the hopeless divergency of perturbation series.
[...]
The modern status of divergent series suggests that techniques for manipulating them should be included in a minimum syllabus for graduate students in theoretical physics. However, the theory of divergent series is almost unknown to physicists, because the corresponding parts of standard university courses in calculus date back to the mid-nineteenth century, when divergent series were virtually banished from mathematics.
 
  • #31
Urs Schreiber said:
Could you point me to a good reference for this?

The original NLO paper was (Paolo) Nason, (Sally) Dawson, and (R. Keith) Ellis, around 1989. It builds on a paper a few years earlier by (John) Collins, (Dave) Soper and (Jack) Smith where they derive the relevant factorization theorems. Matteo Cacciari was giving talks about LO, NLO and the state of the art about ten years ago; if you find a conference proceedings by him that references one or both of the above papers, that's probably as good as you are going to get in one place.
 
  • #32
Vanadium 50 said:
The original NLO paper was (Paolo) Nason, (Sally) Dawson, and (R. Keith) Ellis, around 1989. It builds on a paper a few years earlier by (John) Collins, (Dave) Soper and (Jack) Smith where they derive the relevant factorization theorems. Matteo Cacciari was giving talks about LO, NLO and the state of the art about ten years ago; if you find a conference proceedings by him that references one or both of the above papers, that's probably as good as you are going to get in one place.

Thanks. Maybe slide 12 in
  • Matteo Cacciari: "(Theoretical) review of heavy quark production" BNL 14/12/2005 (pdf)
has the kind of statement that you are referring to.
 
  • #33
I think the slides as a whole give a reasonable view of the heavy flavour state of the art. Slide 5 is a motivation for NNLO (and why N3LO may play only a minor role).
 
  • #34
Vanadium 50 said:
I think the slides as a whole give a reasonable view of the heavy flavour state of the art. Slide 5 is a motivation for NNLO (and why N3LO may play only a minor role).

Right, sorry, I meant slide 12 (I was pointing somebody else to slide 5 for another reason, and mixed up the numbers when writing here).

I am trying to pinpoint the statement which you were referring to above when you wrote:

Vanadium 50 said:
...heavy flavor production. The NLO contributions are about the same size as the LO contributions, and the scale dependence is actually worse at NLO than LO.
 
  • #35
Urs Schreiber said:
I am trying to pinpoint the statement which you were referring to above

I'm sorry, but that's a little unfair. "Here's an article I found - why can't I find a statement you made in it?"

I think I did a pretty good job of pointing you in the right direction, but it may well be that a single document that has everything you want doesn't exist. But if a literature search needs to be done, I don't think I am the one who needs to do it.
 

1. What is the highest loop order of experimental relevance?

It is the highest order in the perturbative (loop) expansion of a quantum field theory calculation that still produces an effect large enough to matter at the precision of current experiments. Each loop order corresponds to a class of Feynman diagrams with a given number of closed loops, representing successively smaller quantum corrections to particle interactions.

2. Why is the highest loop order of experimental relevance important?

Comparing theory with experiment at a given precision requires computing every loop order whose contribution exceeds the experimental uncertainty. Knowing how many orders are needed tells us how far perturbation theory must be pushed to validate the Standard Model, or to expose deviations from it.

3. How is the highest loop order of experimental relevance determined?

It is determined by comparing the size of successive loop contributions against the uncertainty of the corresponding measurement, as in the electron g-2 discussion above: a loop order is experimentally relevant if dropping it would shift the prediction by more than the experimental error.

4. Has the highest loop order of experimental relevance been reached?

Not in any final sense. For the most precise observable discussed in this thread, the electron g-2, theory must include the 5-loop QED contribution (12672 diagrams) to match experiment, while collider QCD observables are typically computed to NNLO or NNNLO. As experiments improve, the relevant order grows, although the asymptotic nature of the perturbation series sets an ultimate limit (around order 430 in QED).

5. What are some potential implications of reaching the highest loop order of experimental relevance?

Pushing calculations to the highest experimentally relevant order sharpens the comparison between the Standard Model and precision measurements; any persistent discrepancy at that level could point to new particles or phenomena beyond the Standard Model.
