What's Delaying Fermilab's Muon g-2 Results Release?

  • #51
vanhees71 said:
That's the above-mentioned lattice-QCD calculation of the leading hadronic contribution to ##(g-2)## by the Wuppertal (BMW) lattice-QCD collaboration. It's at least a hint that one has to consolidate the prediction on the theory side. If I understand it right, what's compared to the measurement is a theoretical calculation using empirical input for the said hadronic contributions, which uses dispersion-relation analyses of the data, and afaik that fitting is a tricky business of its own.

Of course the lattice calculation also has to be solidified and maybe checked by other lattice collaborations, since lattice-QCD calculations are a tricky business too (I'll only remind you of the long debate about the deconfinement and/or chiral-transition temperature, which finally settled at the lower value of around 155 MeV predicted by the Wuppertal group ;-)).

Whether or not the ##(g-2)## results are really hints for "physics beyond the Standard Model" still seems to stay an exciting question.
Can't wait until I learn enough QFT for all that to not sound like complete gibberish to me!
 
  • Haha
Likes Demystifier
  • #52
How does this relate to the LHCb result? I think I get them mixed up. Are they measuring totally separate things that just have to do with muons? Are both sensitive to the same or similar QCD calculations?
 
  • #53
nolxiii said:
Are they measuring totally separate things that just have to do with muons?
Yes.
 
  • Like
Likes vanhees71 and ohwilleke
  • #55
exponent137 said:
What can this article tell us about g-2 disagreement?
https://www.quantamagazine.org/protons-antimatter-revealed-by-decades-old-experiment-20210224/
At least, it can tell us that the hadrons are not understood well enough?

(Although we talk about muons, the problem of g-2 disagreement is because of hadrons.)

Not much. The article is about proton structure and the proton parton distribution function (PDF).

The Theory Initiative's white paper is basically looking at the propensity of electron-positron collisions to produce pions and the properties of the pions produced, in order to avoid having to calculate it from first principles, and then extrapolating that to the muon g-2 calculation context, while the BMW calculation is straight up from QCD. The BMW paper argues that the transfer of the electron-positron collision data to the muon g-2 calculation by the Theory Initiative has been done wrong (and an ugly mix of experimental results for parts of a calculation and lattice QCD simulations for other parts of it is certainly an unconventional approach).

In the muonic hydrogen proton radius case, it turns out that the measurement of the proton radius in the muonic hydrogen case was spot on and that the old and inaccurate measurement of the proton radius in ordinary electron hydrogen was the source of the discrepancy. We could be seeing something similar here.
 
  • Like
Likes vanhees71 and exponent137
  • #56
But indeed the largest uncertainty in the theoretical calculation of ##(g-2)## of the muon comes from the radiative corrections due to the strong interactions (in low-energy language, "the hadronic contributions" or the "hadronic vacuum polarization" (HVP)). If I understand it right, what's usually compared as "theory" to the data uses a semiempirical approach to determine these hadronic contributions by calculating the needed matrix elements via dispersion relations from measurements of the ##\mathrm{e}^+ + \mathrm{e}^{-} \rightarrow \text{hadrons}## cross section. This is based on very general theoretical input, i.e., the unitarity of the S-matrix, but the devil is in the details, because it's anything but easy to use the dispersion relations to get the HVP contributions from the data. So I wouldn't be too surprised if the systematic uncertainty of this procedure turns out to be underestimated. After all, there are hints from the lattice (by the Wuppertal/BMW lQCD group) that the HVP contributions may well be such that the discrepancy between "theory" and "data" is practically gone (only about a 1 sigma discrepancy). Of course, lQCD calculations are also a tricky business. One must not forget that we are talking here about high-accuracy physics, which is never easy to get (neither experimentally nor theoretically).
 
  • Like
Likes ohwilleke, websterling, PeroK and 1 other person
  • #57
I am not an expert, but I don't believe that any of the theory calculations (of HVP) are pure ab initio calculations. All are ways of relating low energy measurements (done at places like Serpukhov in the 1960's) to g-2.

Had history evolved differently and we had a g-2 measurement first, we would be discussing whether there was an anomaly in the low-energy Serpukhov data.
 
  • Like
Likes Astronuc, vanhees71 and exponent137
  • #58
Vanadium 50 said:
I am not an expert, but I don't believe that any of the theory calculations (of HVP) are pure ab initio calculations. All are ways of relating low energy measurements (done at places like Serpukhov in the 1960's) to g-2.

Had history evolved differently and we had a g-2 measurement first, we would be discussing whether there was an anomaly in the low-energy Serpukhov data.
Are you sure?

I had understood lattice QCD to be akin to N-body simulations in cosmology. You discretize the QCD equations on a lattice of points in space and time and then iterate. The description of what they did in their pre-print sounds like this is what they did.

Quanta Magazine, interviewing the authors, summarizes what was done by the BMW group as follows:
They made four chief innovations. First they reduced random noise. They also devised a way of very precisely determining scale in their lattice. At the same time, they more than doubled their lattice’s size compared to earlier efforts, so that they could study hadrons’ behavior near the center of the lattice without worrying about edge effects. Finally, they included in the calculation a family of complicating details that are often neglected, like mass differences between types of quarks. “All four [changes] needed a lot of computing power,” said Fodor.

The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching, the supercomputers spat out a value for the hadronic vacuum polarization term. Their total, when combined with all other quantum contributions to the muon’s g-factor, yielded 2.00233183908. This is “in fairly good agreement” with the Brookhaven experiment, Fodor said. “We cross-checked it a million times because we were very much surprised.”
 
  • #60
ohwilleke said:
quantamagazine.org said:
The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching...
So I have an off-topic question about this computer time. One-hundred million hours is 11,000 years; split between the seven computers mentioned that would be what, 1600 years each. How does that work?
 
  • #61
Each computer has more than one CPU.
 
  • Like
Likes vanhees71, Demystifier and ohwilleke
  • #62
Vanadium 50 said:
Each computer has more than one CPU.
Thanks. I just looked up the first mentioned, Jülich, and just one of their machines (JUWELS) is said to have 122,768 CPU cores. Amazing.
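
As a rough back-of-the-envelope check, here is a minimal sketch of the arithmetic (it assumes, purely for illustration, that the work ran on a single machine of that size and takes 100 million core hours as a floor for "several hundred million"):

```python
# Rough arithmetic for how "several hundred million core hours" fits into
# a human timescale once parallelism is accounted for. Illustrative only;
# the even spread over one machine is a simplifying assumption.

core_hours = 100e6          # take 100 million core hours as a floor
juwels_cores = 122_768      # core count quoted above for one Juelich machine

wall_clock_hours = core_hours / juwels_cores
print(f"{wall_clock_hours:.0f} hours (~{wall_clock_hours / 24:.0f} days) on one such machine")
# -> roughly 815 hours, i.e. about a month of wall-clock time, and the actual
#    work was spread over seven supercomputing centres.
```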
 
  • Like
Likes vanhees71, Demystifier and ohwilleke
  • #63
gmax137 said:
122,768 CPU cores. Amazing.

Tiny. ANL's Mira, now retired, had 786,432 cores. Each could run four threads.

A lot of DOE supercomputer use goes to Lattice QCD.
 
  • Like
Likes vanhees71, Demystifier and ohwilleke
  • #64
Great article on the Muon g-2 results posted in Forbes yesterday (just to add to the discussion back on page two of this thread)...

Obviously, what was released a couple of weeks ago are just some of the first results from Muon g-2. It will be interesting to see what else comes out of that campus and the engineers at FNAL.

If anyone else is interested, our organization provided some (or all) of the copper thermal straps (flexible thermal links) that are used by the accelerators at FNAL, SLAC, JLAB, ANL, and CERN, in their cryomodules, as well as the cold boxes, cryocoolers, cryostats, and dilution refrigerators in use at these labs, and we are always looking for university collaboration/partners at physics departments across North America, Europe, and Asia (partnering on articles for journals, collaborative research, ways to more efficiently cool cryocoolers, etc.).

If anyone on this thread would like to discuss how we can work together and even provide your university or lab with free thermal hardware, comment here or reach out to me at any time. You can also take a look at some of our other thermal strap products used by physics departments across the globe (for both terrestrial and spaceflight applications).

Arguments over the data and controversy aside--congrats to the Fermi team for their work...
 
  • #65
Information for the next announcement:
https://physicstoday.scitation.org/doi/10.1063/PT.3.4765
The second and third runs, which incorporated additional improvements informed by the first run, are already complete; their results are expected to be published by next summer. According to Chris Polly, a spokesperson for the collaboration and a physicist at Fermilab, there’s about a 50-50 chance that those results will push the muon anomaly beyond 5 standard deviations.
 
  • Like
Likes gwnorth, ohwilleke and vanhees71
  • #66
exponent137 said:
Information for the next announcement:
https://physicstoday.scitation.org/doi/10.1063/PT.3.4765
The second and third runs, which incorporated additional improvements informed by the first run, are already complete; their results are expected to be published by next summer. According to Chris Polly, a spokesperson for the collaboration and a physicist at Fermilab, there’s about a 50-50 chance that those results will push the muon anomaly beyond 5 standard deviations.
Either it will or it won't.
Didn't need to use any fancy equations for this.
:cool:
 
  • #67
exponent137 said:
Information for the next announcement:
https://physicstoday.scitation.org/doi/10.1063/PT.3.4765
The second and third runs, which incorporated additional improvements informed by the first run, are already complete; their results are expected to be published by next summer. According to Chris Polly, a spokesperson for the collaboration and a physicist at Fermilab, there’s about a 50-50 chance that those results will push the muon anomaly beyond 5 standard deviations.
Of course, all the drama in this story is on the theory side and not the experiment side. If someone determines that the SM prediction really is the BMW one, then this becomes a case of boring, ever more precise confirmation of the SM, and all of the BSM theories proposed to explain the muon g-2 anomaly are wrong because there isn't one.
 
  • Like
Likes vanhees71 and exponent137
  • #68
It's still interesting to figure out why the other prediction is off in that case (and I think that's the most likely case).
 
  • Like
Likes vanhees71, ohwilleke and exponent137
  • #69
mfb said:
It's still interesting to figure out why the other prediction is off in that case (and I think that's the most likely case).
I agree. I'm not sure that the muon g-2 experiment itself, as opposed to new rounds of the experiments whose data is incorporated in the Theory Initiative estimate (which BMW didn't use), will resolve that, however.
 
  • #70
The other prediction is based on a semiempirical calculation of certain "hadronic contributions" to ##(g-2)_{\mu}##, obtained from ultraprecise measurements of ##\text{e}^+ + \text{e}^- \rightarrow \text{hadrons}## using dispersion relations. There the devil is in the details of how to apply these dispersion relations to the data. It's numerically non-trivial, given that it's really high-precision physics. It's of course also important to consolidate the lattice calculations further.
 
  • #71
Just as a reference point: the muonic hydrogen proton radius discrepancy was almost entirely due to weaknesses in the old ordinary-hydrogen proton radius measurements, and the data used in the Theory Initiative SM calculation could present similar issues.
 
  • #72
ohwilleke said:
Just as a reference point: the muonic hydrogen proton radius discrepancy was almost entirely due to weaknesses in the old ordinary-hydrogen proton radius measurements, and the data used in the Theory Initiative SM calculation could present similar issues.
Are you thinking of this article:
https://physicsworld.com/a/solving-the-proton-puzzle/

Maybe atom interferometry will also show that those other classical measurements of G had some unknown systematic error.
 
Last edited:
  • #73
exponent137 said:
Are you thinking of this article:
https://physicsworld.com/a/solving-the-proton-puzzle/

Maybe atom interferometry will also show that those other classical measurements of G had some unknown systematic error.
The article is a well done analysis.
 
  • #74
The new release of the g-2 measurement will be on August 10th:
 
  • Like
Likes vanhees71, mfb and ohwilleke
  • #75
The new August 10, 2023 paper and its abstract:

Screenshot 2023-08-10 at 11.12.57 AM.png

The new paper doesn't delve in depth into the theoretical prediction issues even to the level addressed in today's live streamed presentation. It says only:

A comprehensive prediction for the Standard Model value of the muon magnetic anomaly was compiled most recently by the Muon g−2 Theory Initiative in 2020 [20], using results from [21–31]. The leading order hadronic contribution, known as hadronic vacuum polarization (HVP), was taken from e+e− → hadrons cross section measurements performed by multiple experiments. However, a recent lattice calculation of HVP by the BMW collaboration [30] shows significant tension with the e+e− data. Also, a new preliminary measurement of the e+e− → π+π− cross section from the CMD-3 experiment [32] disagrees significantly with all other e+e− data. There are ongoing efforts to clarify the current theoretical situation [33]. While a comparison between the Fermilab result from Run-1/2/3 presented here, aµ(FNAL), and the 2020 prediction yields a discrepancy of 5.0σ, an updated prediction considering all available data will likely yield a smaller and less significant discrepancy.
The CMD-3 paper is:
F.V. Ignatov et al. (CMD-3 Collaboration), Measurement of the e+e− → π+π− cross section from threshold to 1.2 GeV with the CMD-3 detector (2023), arXiv:2302.08834.

This is 5 sigma from the partially data based 2020 White Paper's Standard Model prediction, but much closer to (consistent at the 2 sigma level with) the 2020 BMW Lattice QCD based prediction (which has been corroborated by essentially all other partial Lattice QCD calculations since the last announcement) and to a prediction made using a subset of the data in the partially data based prediction which is closest to the experimental result.

The 2020 White Paper is:

T. Aoyama et al., The anomalous magnetic moment of the muon in the Standard Model, Phys. Rep. 887, 1 (2020).

This is shown in the YouTube screen shot from their presentation this morning (below):

Screenshot 2023-08-10 at 10.03.01 AM.png

As the screenshot makes visually very clear, there is now much more uncertainty in the theoretically calculated Standard Model predicted value of muon g-2 than there is in the experimental measurement itself.

For those of you who aren't visual learners, the values of ##a_\mu## (all in units of ##10^{-11}##) are:

World Experimental Average (2023): 116,592,059(22)
Fermilab Run 1+2+3 data (2023): 116,592,055(24)
Fermilab Run 2+3 data (2023): 116,592,057(25)
Combined measurement (2021): 116,592,061(41)
Fermilab Run 1 data (2021): 116,592,040(54)
Brookhaven's E821 (2006): 116,592,089(63)
Theory Initiative calculation: 116,591,810(43)
BMW calculation: 116,591,954(55)
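
As a quick cross-check of the tensions quoted in this thread, here is a minimal sketch that combines the uncertainties above in quadrature (this treats the uncertainties as independent, which is only approximately true):

```python
from math import hypot

# Values from the list above, in units of 1e-11; uncertainties in parentheses.
world_avg_2023    = (116_592_059, 22)
theory_initiative = (116_591_810, 43)
bmw_2020          = (116_591_954, 55)

def tension(a, b):
    """Difference between two values in units of their combined (quadrature) uncertainty."""
    return abs(a[0] - b[0]) / hypot(a[1], b[1])

print(f"Experiment vs. Theory Initiative: {tension(world_avg_2023, theory_initiative):.1f} sigma")
print(f"Experiment vs. BMW lattice:       {tension(world_avg_2023, bmw_2020):.1f} sigma")
# -> about 5.2 sigma and about 1.8 sigma, matching the ~5 sigma and
#    ~1.77 sigma figures quoted in this thread.
```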

It is likely that the true uncertainty in the 2020 White Paper result is too low, quite possibly because of understated systematic error in some of the underlying electron-positron collision data upon which it relies.

In short, there is no reason to doubt that the Fermilab measurement of muon g-2 is every bit as solid as claimed, but the various calculations of the Standard Model prediction for the QCD part of muon g-2 are in strong tension with each other.

It appears that the correct Standard Model prediction is closer to the experimental result than the 2020 White Paper calculation (which mixed lattice QCD for parts of the calculation with experimental data in lieu of QCD calculations for other parts), although the exact source of the issue is only starting to be pinned down.

Side Point: The Hadronic Light By Light Calculation

The hadronic QCD component is the sum of two parts: the hadronic vacuum polarization (HVP) and the hadronic light-by-light (HLbL) contributions. In the Theory Initiative analysis the total QCD amount is 6937(44) (in units of ##10^{-11}##), broken out as HVP = 6845(40), a 0.6% relative error, and HLbL = 98(18), a roughly 20% relative error.

In turn, the ##e^+e^- \rightarrow \pi^+\pi^-## cross section portion of the HVP contribution to muon g-2, which is the main piece for which the Theory Initiative relied upon experimental data rather than first-principles calculations, accounts for ##(5060 \pm 34) \times 10^{-11}## out of the total ##a_\mu^{\text{HVP}} = (6931 \pm 40) \times 10^{-11}##, and is the source of most of the uncertainty in the Theory Initiative prediction.

The presentation doesn't note it, but there was also an adjustment in the hadronic light-by-light calculation (the smaller of the two QCD contributions to the total value of muon g-2, and not part of the BMW HVP calculation), announced on the same day as the previous data announcement, that brings the prediction closer to the experimental result. The new calculation increases that contribution from ##92(18) \times 10^{-11}## to ##106.8(14.7) \times 10^{-11}##.

As the precision of the measurements and of the Standard Model prediction improves, a ##14.8 \times 10^{-11}## shift in the hadronic light-by-light portion of the calculation becomes more material.
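
A minimal sketch of the arithmetic behind those relative errors and the size of the HLbL shift (values taken from the numbers quoted above, all in units of ##10^{-11}##):

```python
# Relative-error breakdown of the hadronic pieces quoted above
# (a sketch, not a reproduction of the White Paper's full error budget).
hvp, hvp_err   = 6845, 40     # hadronic vacuum polarization
hlbl, hlbl_err = 98, 18       # hadronic light-by-light

print(f"HVP relative error:  {hvp_err / hvp:.1%}")    # ~0.6%
print(f"HLbL relative error: {hlbl_err / hlbl:.0%}")  # ~18-20%

# The updated HLbL value shifts the SM prediction toward experiment by
shift = 106.8 - 92.0
print(f"HLbL shift: +{shift:.1f} x 1e-11")
# roughly 15 x 1e-11, the same order as the ~22 x 1e-11 experimental uncertainty
```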

Why Care?

Muon g-2 is an experimental observable that implicates all three Standard Model forces and serves as a global test of the consistency of the Standard Model with experiment.

If there really were a five sigma discrepancy between the Standard Model prediction and the experimental result, this would imply new physics at fairly modest energies that could probably be reached at next generation colliders (since muon g-2 is an observable that is more sensitive to low energy new physics than high energy new physics).

On the other hand, if the Standard Model prediction and the experimental result are actually consistent with each other, then new low-energy, non-gravitational physics is strongly disfavored at foreseeable high energy physics experiments, except in very specific scenarios that cancel out in a muon g-2 calculation.
 
Last edited:
  • Like
Likes vanhees71 and exponent137
  • #76
For a moment, let us forget about the measurements of g-2. Can we say that the BMW assumptions are more logical and correct than those of the Standard Model? Or is this not clear?
 
  • #77
BMW is the SM. As is the Theory Initiative.
 
  • Like
Likes exponent137, ohwilleke and vanhees71
  • #78
exponent137 said:
For a moment, let us forget about the measurements of g-2. Can we say that the BMW assumptions are more logical and correct than those of the Standard Model? Or is this not clear?
Everybody is trying to make a Standard Model calculation.

BMW does a first principles lattice QCD calculation relying only on measurements of general physical constants (like the strong force coupling constant) as experimental inputs.

The Theory Initiative took a different approach. It concluded that a big part of the lattice QCD calculation (which is profoundly difficult to do; BMW is the only group that has ever done the entire thing, and that took multiple supercomputers running for a long period of time) is equivalent to experiments that had already been done, although those experiments were somewhat stale.

The QCD calculations are so involved that it is hard to error check your work, and we haven't had a full replication of these calculations by an independent group yet, which is the only surefire way to confirm that BMW didn't make errors. But key parts of the BMW calculation have been replicated repeatedly, and the Theory Initiative values for those key parts of the calculation (called the "window") are very different from the BMW calculation. So, there is no good reason to doubt the BMW calculations at this point, and there is good reason to doubt the Theory Initiative result.

It is possible that the Theory Initiative merged the experimental results with different-in-kind lattice QCD calculations in a manner that was not correct.

But the early CMD-3 experimental data, which redoes the stale experiments that the Theory Initiative relied upon and gets results very close to the lattice QCD calculation done by BMW and very different from the stale experiments, makes it seem more likely that while the Theory Initiative's method for integrating experiment and lattice QCD calculations may have been sound, the experimental results it was relying upon were flawed and had understated systematic error. (Very much like the proton radius puzzle discussed above in this thread.)

I think that the underlying electron-positron data that the Theory Initiative was relying upon was from the Large Electron-Positron Collider that operated from 1989 to 2000 at CERN, although I haven't definitively pinned this down by going paper by paper back to the original sources. But I may be wrong about that. The introduction to the CMD-3 paper cited above notes that:

The ##\pi^+\pi^-## channel gives the major part of the hadronic contribution to the muon anomaly, ##506.0 \pm 3.4 \times 10^{-10}## out of the total ##a_\mu^{\rm HVP} = 693.1 \pm 4.0 \times 10^{-10}## value. It also determines (together with the light-by-light contribution) the overall uncertainty ##\Delta a_\mu = \pm 4.3 \times 10^{-10}## of the standard model prediction of muon g−2 [5]. To conform to the ultimate target precision of the ongoing Fermilab experiment [16,17], ##\Delta a_\mu^{\rm exp}[\text{E989}] \approx \pm 1.6 \times 10^{-10}##, and the future J-PARC muon g-2/EDM experiment [18], the ##\pi^+\pi^-## production cross section needs to be known with a relative overall systematic uncertainty of about 0.2%. Several sub-percent precision measurements of the ##e^+e^- \to \pi^+\pi^-## cross section exist. The energy scan measurements were performed at the VEPP-2M collider by the CMD-2 experiment (with a systematic precision of 0.6–0.8%) [19,20,21,22] and by the SND experiment (1.3%) [23]. These results have somewhat limited statistical precision. There are also measurements based on the initial-state radiation (ISR) technique by KLOE (0.8%) [24,25,26,27], BABAR (0.5%) [28] and BES-III (0.9%) [29]. Due to the high luminosities of these ##e^+e^-## factories, the accuracy of the results from these experiments is less limited by statistics; meanwhile they are not fully consistent with each other within the quoted systematic uncertainties. One of the main goals of the CMD-3 and SND experiments at the new VEPP-2000 ##e^+e^-## collider at BINP, Novosibirsk, is to perform a new high precision, high statistics measurement of the ##e^+e^- \to \pi^+\pi^-## cross section. Recently, the first SND result based on about 10% of the collected statistics was presented with a systematic uncertainty of about 0.8% [30]. Here we present the first CMD-3 result.
 
Last edited:
  • Like
Likes AndreasC, vanhees71 and exponent137
  • #79
AFAIK the method used by the theory initiative is to use experimental data to extract spectral functions and then use dispersion relations to get the radiative corrections for g-2. That's also a numerically challenging method.
 
  • Like
Likes Vanadium 50, exponent137 and ohwilleke
  • #80
vanhees71 said:
AFAIK the method used by the theory initiative
I believe this is correct.

Some history - the "old way" was for people to take the calculations and experimental inputs and combine them out of the box. This had some consistency problems, including a rather embarrassing sign error. The Theory Initiative was a community response to this: instead of a patchwork, let's all Do The Right Thing.

There is no consensus on what the "right thing" is (more on that later), so this evolved into something closer to "Do The Same Thing". The procedure is at least consistent. The problem - or at least a problem - is with the data inputs. Term X might depend on experiments A, B, and C. Term Y might depend on B, C and D, and Term Z on A, E, F and G. How do you get from the errors on A-G to the errors on X, Y and Z? If the errors were Gaussian, and you fully understood the correlations, you'd have a chance, but the errors aren't Gaussian, nor exact, nor are the correlations 100% understood.

And some data is just wrong. You can get two measurements that feed in, but can't both be right. Do you pick one? How? Do you take the average and inflate the error, thus ensuring that the central value is wrong, but hopefully covered by the errors? Something else?
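
The "average and inflate the error" option is essentially the PDG prescription for mutually inconsistent measurements. Here is a minimal sketch of that recipe (inverse-variance weighting with a scale factor ##S = \sqrt{\chi^2/(N-1)}##; the two input measurements are made up purely for illustration):

```python
from math import sqrt

def weighted_average_with_scale_factor(values, errors):
    """Inverse-variance weighted mean; inflate the error by S = sqrt(chi2/dof)
    if the inputs scatter more than their quoted errors allow (PDG-style)."""
    weights = [1 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = sqrt(1 / sum(weights))
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    scale = max(1.0, sqrt(chi2 / (len(values) - 1)))
    return mean, err * scale, scale

# Two hypothetical measurements that "can't both be right":
mean, err, s = weighted_average_with_scale_factor([503.0, 519.0], [3.0, 4.0])
print(f"average = {mean:.1f} +/- {err:.1f} (scale factor {s:.1f})")
```

The scale factor only widens the quoted error when the inputs scatter more than their uncertainties allow; it does nothing about a central value that may still be biased by the wrong input.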

A similar issue cropped up with parton densities in the proton. It was twelve years between when they started down a Theory Initiative-like path and when the PDF sets had serious predictive power (i.e. could tell you what you didn't already know). This is not easy, and the fact that two groups get different answers does not mean one is right and one is wrong. Both are wrong to a degree, and will become less wrong as the calculations and input data improve.
 
  • Like
  • Informative
Likes ohwilleke, nsaspook, vanhees71 and 2 others
  • #81
I've also got a general quibble with this approach. After all, the main motivation for this high-precision measurement of the muon's g-2 is to test the Standard Model of elementary particle physics (SM), with some hope to finally find deviations pointing in the direction of how a better theory might look, which again is motivated by the belief that the SM is incomplete. For me the most convincing argument for this conjecture is that the SM seems not to have "enough CP violation" in it to explain the matter-antimatter asymmetry in the universe, which also rests on the belief that the "initial state" an ##\epsilon## after the big bang was symmetric. Anyway, a test of the SM is always interesting.

Now if you extract some QCD radiative corrections from corresponding experimental data on ##\mathrm{e}^+ + \mathrm{e}^- \rightarrow \text{hadrons}##, you don't compare the g-2 measurement with the prediction of the SM but with parts of the SM prediction that can be calculated perturbatively (mostly the electroweak corrections) plus parts that are extracted from measurements. The latter are not SM predictions but what's really going on in Nature for the processes under consideration, like strong-interaction corrections to photon-photon scattering, etc. So maybe there's some beyond-the-SM physics involved, i.e., it's indeed not the result you'd get from a calculation of these processes/radiative corrections within the SM.

That's why lattice calculations are so important, because they provide the corresponding radiative corrections from QCD in some approximation, and obviously getting these contributions is computationally very challenging. So there's only one complete calculation by the BMW group, and interestingly that lowers the discrepancy between the SM prediction and g-2 tremendously (I think it's only around ##2 \sigma##), i.e., it seems as if the SM after all might survive also this test. This is all the more likely since parts of the BMW calculation have been checked and confirmed by other, independent lattice groups.

From history in my own field, it's clear that such independent checks of highly complicated lattice calculations are very important, as the determination of the pseudo-critical chiral as well as confinement-deconfinement transition temperature (even at ##\mu_{\text{B}}=0##!) demonstrates, but that's another story.
 
  • #82
J-PARC works on its own muon g-2 experiment. At the time of the proposal we didn't have the lattice calculations (at least not with competitive uncertainties) and the experimental uncertainty was larger as well. Now the motivation for this experiment has gotten significantly weaker. It will still be useful as independent measurement with a different method to cross-check the Fermilab result, and it will improve the world average - but it has become clear that the main issue is on the theory side.
 
  • Informative
  • Like
Likes ohwilleke and berkeman
  • #83
vanhees71 said:
So there's only one complete calculation by the BMW group, and interestingly that lowers the discrepancy between the SM prediction and g-2 tremendously (I think it's only around ##2 \sigma##), i.e., it seems as if the SM after all might survive also this test.
The BMW calculation is consistent with the new muon g-2 world average at 1.77 sigma. The fit is even a little better than that (despite a slightly lower combined uncertainty in the theoretical calculations of the SM prediction) when the improvement in the hadronic light-by-light calculation that was not included in the BMW calculation is taken into account.
 
Last edited:
  • #84
mfb said:
J-PARC works on its own muon g-2 experiment. At the time of the proposal we didn't have the lattice calculations (at least not with competitive uncertainties) and the experimental uncertainty was larger as well. Now the motivation for this experiment has gotten significantly weaker. It will still be useful as independent measurement with a different method to cross-check the Fermilab result, and it will improve the world average - but it has become clear that the main issue is on the theory side.
FWIW, an independent cross check from J-PARC is desirable because both the Brookhaven and Fermilab experiments are using some of the same experimental equipment that was shipped (in part by barge) from one lab to the other:

Transporting the g-2 ring 900 miles from Brookhaven to Fermilab was a feat of a different sort. While the iron that makes up the magnet yoke comes apart, the three 50-foot-diameter superconducting coils that energize the magnet do not, and therefore had to travel as a single unit. In order to maintain the superb accuracy of the electromagnet, the 50-foot-diameter circular coil shape had to be maintained to within a quarter-inch, and its flatness to within a tenth of an inch, during transportation.
In the summer of 2013, the Muon g-2 team successfully transported a 50-foot-wide electromagnet from Long Island to the Chicago suburbs in one piece. The move took 35 days and traversed 3,200 miles over land and sea. Thousands of people followed the move of the ring, and thousands were on hand to greet it upon its arrival at Fermilab.

The move began on June 22, 2013, as the ring was transported across the Brookhaven National Laboratory site, using a specially adapted flatbed truck and a 45-ton metal apparatus keeping the electromagnet as flat as possible. On the morning of June 24, the ring was driven down the William Floyd Parkway on Long Island, and then a massive crane was used to move it from the truck onto a waiting barge.

The barge set to sea on June 25, and spent nearly a month traveling down the east coast, around the tip of Florida, into the Gulf of Mexico and then up the Tennessee-Tombigbee Waterway to the Mississippi, Illinois and Des Plaines rivers. The barge arrived in Lemont, Illinois on July 20, and the ring was moved to the truck again on July 21. And then over three consecutive nights — July 23, 24 and 25 — that truck was used to drive the ring to Fermilab in Batavia, Illinois.

The Muon g-2 electromagnet crossed the threshold into Fermilab property at 4:07 a.m. on July 26. That afternoon, Fermilab held a party to welcome it, and about 3,000 of our neighbors attended. The collaboration is grateful for the support, and for the assistance of all the local, county and state agencies who made this move possible.

So any systematic error due to flaws in the design or construction of that equipment wouldn't be caught by the Fermilab replication. But J-PARC would address that issue.
 
  • Like
Likes exponent137 and Astronuc
  • #85
I don't think I buy that. The biggest recycled part is the magnet yoke, but the coils were all redone and the field remeasured (and remeasured better), so unless you want to argue that the iron is somehow cursed, it's not an equipment problem.

It could be a problem with the "magic momentum" technique, and that would possibly be exposed by a different technique. That still would not solve the problem of the theoretical uncertainties, of course.
 
  • #86
Vanadium 50 said:
That still would not solve the problem of the theoretical uncertainties, of course.
Of course.

And, nobody has any good reason to think that Fermilab's measurements are not spot on. It is an expensive and cumbersome measurement to do at that level of precision, but it is a much more straightforward and cleaner measurement than, for example, most of the quantities measured at the Large Hadron Collider.
 
  • #87
The YouTube presentation on August 10 also discussed how much improvement in the precision of the measurement is expected as new data is collected (something that wasn't discussed in the paper that was submitted).

The experimental value is already twice as precise as the best available theoretical prediction of muon g-2 in the Standard Model. The experimental value is expected to ultimately be about four times more precise than the current best available theoretical predictions, as illustrated below:

Screenshot%202023-08-10%20at%2010.08.27%20AM.png


The completed Runs 4 and 5 and the in-progress Run 6 are anticipated to reduce the uncertainty in the experimental measurement by about 50% over the next two or three years.

But the improvement will be mostly from Run 4 which should release its results sometime around October of 2024. The additional experimental precision after that which is anticipated from Run 5 and Run 6 is expected to be pretty modest.

The chart only shows the reduction in uncertainty due to a larger sample size, but so far, reductions in systematic uncertainty and reductions in statistical uncertainty in each new run have been almost exactly proportionate, and there is good reason to think that this trend will continue.
 
  • Like
Likes exponent137 and vanhees71
  • #88
Muon g-2 announcement:
 
  • #89
Next week (as noted in #88), on June 2, 2025, the final round of experimental results for muon g-2 will be announced. Ahead of that, there is an update of the Muon g-2 Theory Initiative White Paper associated with the Fermilab experiment, whose original Standard Model prediction for muon g-2 was badly off. The revised version acknowledges this and notes that the updated prediction is consistent with the experimental value of muon g-2.

The revised state of the art Standard Model prediction will still be about four times less precise than the experimentally measured value after June 2, 2025, however.

The predicted value's uncertainty is greater than the experimental uncertainty almost entirely because of the uncertainties in the QCD (quantum chromodynamics, a.k.a. strong force) calculation of the leading order hadronic vacuum polarization contribution to muon g-2.

These uncertainties are hard to reduce, since the fundamental physical constants relevant to the calculation, like the strong force coupling constant and the light quark masses, have uncertainties of the same magnitude as the uncertainty of the total HVP calculation.

The consistency of the experimental value of muon g-2 with the value predicted for it in the Standard Model is a broad, global, high precision test of the consistency of all parts of the low to medium energy scale Standard Model of particle physics with the real world.

The consistency that exists strongly disfavors the discovery of any beyond-the-Standard-Model physics at a next generation particle collider (although one could cherry-pick potential modifications of the Standard Model, not already ruled out by other high energy physics data, that have no impact on muon g-2 or an impact too negligible to discern).

This summary chart appears in the introduction to the paper:

Screenshot%202025-05-28%20at%201.02.14%E2%80%AFPM.webp

A chart from the conclusion shows how the old White Paper Standard Model prediction for muon g-2 and the new one differ.
Screenshot%202025-05-28%20at%201.26.02%E2%80%AFPM.webp

We present the current Standard Model (SM) prediction for the muon anomalous magnetic moment, aμ, updating the first White Paper (WP20) [1].
The pure QED and electroweak contributions have been further consolidated, while hadronic contributions continue to be responsible for the bulk of the uncertainty of the SM prediction. Significant progress has been achieved in the hadronic light-by-light scattering contribution using both the data-driven dispersive approach as well as lattice-QCD calculations, leading to a reduction of the uncertainty by almost a factor of two.
The most important development since WP20 is the change in the estimate of the leading-order hadronic-vacuum-polarization (LO HVP) contribution. A new measurement of the e+e−→π+π− cross section by CMD-3 has increased the tensions among data-driven dispersive evaluations of the LO HVP contribution to a level that makes it impossible to combine the results in a meaningful way. At the same time, the attainable precision of lattice-QCD calculations has increased substantially and allows for a consolidated lattice-QCD average of the LO HVP contribution with a precision of about 0.9%.
Adopting the latter in this update has resulted in a major upward shift of the total SM prediction, which now reads ##a_\mu^{\rm SM} = 116592033(62) \times 10^{-11}## (530 ppb). When compared against the current experimental average based on the E821 experiment and runs 1-3 of E989 at Fermilab, one finds ##a_\mu^{\rm exp} - a_\mu^{\rm SM} = 26(66) \times 10^{-11}##, which implies that there is no tension between the SM and experiment at the current level of precision. The final precision of E989 is expected to be around 140 ppb, which is the target of future efforts by the Theory Initiative. The resolution of the tensions among data-driven dispersive evaluations of the LO HVP contribution will be a key element in this endeavor.
R. Aliberti, et al., "The anomalous magnetic moment of the muon in the Standard Model: an update" arXiv:2505.21476 (May 27, 2025) (188 pages).

The conclusion explains that:
By comparing the uncertainties of Eq. (9.5) and Eq. (9.4) it is apparent that the precision of the SM prediction must be improved by at least a factor of two to match the precision of the current experimental average, which will soon be augmented by the imminent release of the result based on the final statistics of the E989 experiment at Fermilab. We expect progress on both data-driven and lattice methods applied to the hadronic contributions in the next few years. Resolving the tensions in the data-driven estimations of the HVP contribution is particularly important, and additional experimental results combined with further scrutiny of theory input such as from event generators should provide a path towards this goal. Further progress in the calculation of isospin-breaking corrections, from both data-driven and lattice-QCD methods, should enable a robust SM prediction from τ data as well. For lattice-QCD calculations of HVP continuing efforts by the world-wide lattice community are expected to yield further significant improvements in precision and, hopefully, even better consolidation thanks to a diversity of methods. The future focus will be, in particular, on more precise evaluations of isospin-breaking effects and the noisy contributions at long distances.
The role of aµ as a sensitive probe of the SM continues to evolve. We stress that, even though a consistent picture has emerged regarding lattice calculations of HVP, the case for a continued assessment of the situation remains very strong in view of the observed tensions among data-driven evaluations. New and existing data on e+e− hadronic cross sections from the main collaborations in the field, as well as new measurements of hadronic τ decays that will be performed at Belle II, will be crucial not only for resolving the situation but also for pushing the precision of the SM prediction for aµ to that of the direct measurement. This must be complemented by new experimental efforts with completely different systematics, such as the MUonE experiment, aimed at measuring the LO HVP contribution, as well as an independent direct measurement of aµ, which is the goal of the E34 experiment at J-PARC. The interplay of all these approaches, various experimental techniques and theoretical methods, may yield profound insights in the future, both regarding improved precision in the SM prediction and the potential role of physics beyond the SM. Finally, the subtleties in the evaluation of the SM prediction for aµ will also become relevant for the anomalous magnetic moment of the electron, once the experimental tensions in the determination of the fine-structure constant are resolved.
Basically, the conclusion calls for scientists to get to the bottom of why the experiments that were used as a basis for the first White Paper prediction were wrong, and hopes against all reasonable expectations that the process of doing that will reveal new physics.

The paper's claim that the uncertainty in the Standard Model prediction can be cut dramatically "in the next few years" is pretty much wishful thinking.

This paper doesn't address in detail how completely this result rules out new physics, but further papers by unaffiliated scientists will no doubt do just that not long after the new experimental results are released next week.
 
Last edited:
  • Informative
Likes PeroK and exponent137
  • #90
I'm shocked! Wait, I'm not.
mfb said:
If there are two SM predictions and only one agrees with measurements...
The new prediction is right between the experimental result and BMW and perfectly compatible with both. Very approximate drawing:

muong2.webp
 
  • Like
Likes exponent137 and ohwilleke
  • #91
The presentation is tomorrow (June 3, 2025) at 10 a.m. CT (11 a.m. ET, 9 a.m. MT, 8 a.m. PT) on YouTube. The link should appear here.
 
Last edited:
  • Like
Likes exponent137
  • #92
From this morning's seminar: The new experimental result from Fermilab Runs 1-6 combined is:

##a_\mu = 1165920705(148) \times 10^{-12}##, which is a precision of 125-127 parts per billion (at one place the seminar said 125 ppb and at another it said 127 ppb). This beats their goal of 140 ppb.

1748966317733.webp

Including the Brookhaven result in the global experimental average only slightly tweaks this result, because the average is inverse-error weighted and the Brookhaven result has a much greater uncertainty. Runs 4-6, whose results were announced today, slightly pulled up the total value. Their result was ##5 \times 10^{-12}## higher than the overall average.

1748966610466.webp

A crude breakdown of the sources of the uncertainty in the final results was as follows:
1748966669898.webp

The 125-127 ppb uncertainty in the experimental result (i.e. ##a_\mu = 1165920705(148) \times 10^{-12}##) compares to a 530 ppb uncertainty in the 2025 White Paper predicted value of muon g-2, which is ##a_\mu^{\rm SM} = 116592033(62) \times 10^{-11}##.

The Fermilab experimental value minus the SM prediction is ##(375 \pm 637) \times 10^{-12}##, a difference of about 0.6 sigma.

The world average value minus the SM prediction is ##(385 \pm 637) \times 10^{-12}##, also a difference of about 0.6 sigma. This is a very strong global confirmation of the Standard Model of particle physics at low to moderate energies.

Most likely, the discrepancy is mostly due to the leading order hadronic vacuum polarization (LO HVP) calculation in the Standard Model prediction being about 0.5% low in a calculation with a ± 0.9% uncertainty.

This means that no new physics is expected at energies that could influence muon g-2 by amounts significantly greater than the uncertainty in this result. The experimental precision is about four times greater than that of the Standard Model prediction. Realistically, this means that no new physics is expected at a next generation particle collider (in any way that could influence muon g-2, which is almost, but not quite, any possible way).
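
A minimal sketch of where the ppb and sigma figures in this post come from (numbers as quoted above; uncertainties combined in quadrature, assuming they are independent):

```python
from math import hypot

# Numbers quoted in this post (all in units of 1e-12).
a_mu_exp, exp_err = 1_165_920_705, 148   # FNAL combined, Runs 1-6
a_mu_sm,  sm_err  = 1_165_920_330, 620   # 2025 White Paper SM prediction, 116592033(62) x 1e-11

print(f"experimental precision: {exp_err / a_mu_exp * 1e9:.0f} ppb")   # ~127 ppb
print(f"theory precision:       {sm_err / a_mu_sm * 1e9:.0f} ppb")     # ~530 ppb

diff = a_mu_exp - a_mu_sm
comb = hypot(exp_err, sm_err)
print(f"difference: ({diff} +/- {comb:.0f}) x 1e-12  ->  {diff / comb:.1f} sigma")
# -> about 0.6 sigma, the figure quoted above.
```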

The paper is available here. Its abstract states:

A new measurement of the magnetic anomaly aµ of the positive muon is presented based on data taken from 2020 to 2023 by the Muon g−2 Experiment at Fermi National Accelerator Laboratory (FNAL). This dataset contains over 2.5 times the total statistics of our previous results. From the ratio of the precession frequencies for muons and protons in our storage ring magnetic field, together with precisely known ratios of fundamental constants, we determine aµ = 1165920710(162) × 10−12 (139 ppb) for the new datasets, and aµ = 1165920705(148) × 10−12 (127 ppb) when combined with our previous results. The new experimental world average, dominated by the measurements at FNAL, is aµ(exp) = 1165920715(145) × 10−12 (124 ppb). The measurements at FNAL have improved the precision on the world average by over a factor of four.

1748968527355.webp
 
Last edited:
  • #93
125 ppb on the anomalous part is about a 0.15 ppb uncertainty on the magnetic moment.
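
For anyone who wants the conversion spelled out, a quick sketch of the arithmetic (the ~127 ppb relative uncertainty on ##a_\mu## is taken from the combined FNAL result quoted above; the conversion uses ##g = 2(1 + a_\mu)##):

```python
# Converting a relative uncertainty on the anomaly a_mu into a relative
# uncertainty on the full g-factor. Purely illustrative arithmetic.
a_mu = 1.165920705e-3            # measured anomaly
da_rel_on_a = 127e-9             # ~127 ppb relative uncertainty on a_mu
da_abs = da_rel_on_a * a_mu      # absolute uncertainty on a_mu
# g = 2 * (1 + a_mu), so the absolute uncertainty on g is 2 * da_abs
dg_rel = 2 * da_abs / (2 * (1 + a_mu))
print(f"~{dg_rel * 1e9:.2f} ppb on the magnetic moment (g-factor)")   # ~0.15 ppb
```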

From the experimental side, this looks done for now. JPARC will check it with a different approach for confirmation, but the comparison really depends on theory uncertainties now.
 
  • #94
mfb said:
125 ppb on the anomalous part is about a 0.15 ppb uncertainty on the magnetic moment.
I was thinking the same thing. It is interesting that they don't present it that way as it sounds more impressive (because it is more impressive and really a more accurate description of the precision of the work that they are doing, since they're measuring the magnetic moment and not just the anomalous part).
mfb said:
From the experimental side, this looks done for now. JPARC will check it with a different approach for confirmation, but the comparison really depends on theory uncertainties now.
I agree. JPARC is expected to be less precise than this result, so it is really a check on the robustness and replicability of the result, rather than an effort to get more precision.

It is also interesting that the theory uncertainties are so well understood. We don't just know that the uncertainty mostly comes from the QCD contribution, or even that the uncertainty mostly comes from the HVP contribution, we know that the uncertainty mostly comes from the leading order HVP contribution, and not from NLO-HVP or NNLO-HVP contributions. So, the brute force approach of doing the calculation out to more loops doesn't help.

Looking at the error budget of the LO-HVP calculations would be the next step (the 2025 White Paper just averaged the recent high quality LO-HVP calculations without looking at them one by one, so it doesn't discuss that).

The 2025 White Paper suggests that their strategy is to do more on the data-driven side to try to get better data and to use it to pin down the HVP contribution with greater precision. I'm skeptical that this can be done in "a few years" as claimed, although it might be possible eventually. I'd give it a decade or two minimum, however (and without diving too deep into partisan politics, which the acting Fermilab director alluded to in her YouTube presentation, delivered remotely from D.C. since she was making presentations to Congressional committees on the issue, pure science research funding prospects in the U.S. in the next four years don't look good, which will slow down scientific research on all fronts and may send a lot of U.S. researchers abroad, disrupting existing U.S.-based research programs).

But the Lattice QCD approach is kind of up against a wall. BMW and the others doing the calculations really pulled out all the stops to achieve it, but they can't do better than the uncertainty in their experimentally measured SM constant inputs (especially the strong force coupling constant) permits. And the rate of improvement in the precision of the strong force coupling constant has been painfully slow.

It almost makes more sense to assume the experimental muon g-2 result is a pure SM result and to use that as a way to make a high precision strong force coupling constant determination, although, of course, that defeats the goal of using muon g-2 to identify BSM physics. You wouldn't even have to solve it analytically. You could just try a few strong force coupling constant values in the approximately right direction and magnitude and see which one hits the LO-HVP inferred from the muon g-2 experimental result most closely, in otherwise unchanged lattice QCD setups used to calculate LO-HVP.

The Particle Data Group value for the strong force coupling constant at Z boson energies is 0.1180(9). The energy scale of the muon g-2 experimental results is very tightly controlled at 3.1 GeV since that is a "magic" momentum that causes a lot of noise terms in the calculation to cancel out, so the conversion from the 91.188 GeV energy scale to the 3.1 GeV energy scale using the strong force coupling constant beta function would insert virtually no uncertainty of its own. (FWIW, Google AI thinks that the strong force coupling constant at 3.1 GeV is about 0.236, although one should take that with a huge grain of salt.)

My intuition is that if you did that, you'd end up with a strong force coupling constant at Z boson energy of something like 0.1185(2). Honestly, an improvement like that in the measurement of this particular physical constant would be huge for all QCD calculations and hadronic physics (and for making determinations of the quark masses from existing data), maybe more valuable scientifically than ruling out new physics.
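
For what it's worth, the running itself is indeed cheap compared to everything else. Here is a crude one-loop sketch of evolving the PDG value ##\alpha_s(M_Z) = 0.1180## down to 3.1 GeV (one loop only, with a single b-quark threshold near 4.18 GeV and no higher-order matching, so treat the output as a ballpark figure rather than a serious determination):

```python
from math import log, pi

def alpha_s_one_loop(alpha_mu, mu, q, nf):
    """One-loop QCD running of the strong coupling from scale mu to scale q
    with nf active quark flavours (no higher loops, no refined threshold matching)."""
    beta0 = (33 - 2 * nf) / (12 * pi)
    return alpha_mu / (1 + beta0 * alpha_mu * log(q**2 / mu**2))

alpha_mz = 0.1180               # PDG value at the Z mass, quoted above
m_z, m_b, q = 91.19, 4.18, 3.1  # GeV; m_b is the approximate b-quark threshold

alpha_mb = alpha_s_one_loop(alpha_mz, m_z, m_b, nf=5)   # run down with 5 flavours
alpha_q  = alpha_s_one_loop(alpha_mb, m_b, q, nf=4)     # then with 4 flavours
print(f"alpha_s({q} GeV) ~ {alpha_q:.2f}")              # ~0.23, same ballpark as the 0.236 quoted above
```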
 

Last edited:
  • #95
ohwilleke said:
Looking at the error budget of the LO-HVP calculations would be the next step (the 2025 White Paper just averaged the recent high quality LO-HVP calculations without looking at them one by one, so it doesn't discuss that).
What do you mean? I think there is a whole section in the White Paper discussing different procedures to average those results. It seems challenging, because the calculation is split into three different parts (or "windows"), and only BMW and Mainz have computed all of them.

ohwilleke said:
The 2025 White Paper suggests that their strategy is to do more on the data-driven side to try to get better data and to use it to pin down the HVP contribution with greater precision. I'm skeptical that this can be done in "a few years" as claimed, although it might be possible eventually. I'd give it a decade or two minimum, however (and without diving too deep into partisan politics, which the acting Fermilab director alluded to in her YouTube presentation, delivered remotely from D.C. since she was making presentations to Congressional committees on the issue, pure science research funding prospects in the U.S. in the next four years don't look good, which will slow down scientific research on all fronts and may send a lot of U.S. researchers abroad, disrupting existing U.S.-based research programs).
Well, I think that with "better data" they don't necessarily mean "new" data. Different collaborations intend to re-analyze their raw data and, hopefully, reduce uncertainties or resolve systematic effects. This is also discussed in the White Paper.

ohwilleke said:
But the Lattice QCD approach is kind of up against a wall. BMW and the others doing the calculations really pulled out all the stops to achieve it, but they can't do better than the uncertainty in their experimentally measured SM constant inputs (especially the strong force coupling constant) permits. And the rate of improvement in the precision of the strong force coupling constant has been painfully slow
How does the strong coupling constant impact the uncertainty of the calculations? Where do they use it? I have never read anything concerning that. As far as I can see, the uncertainty is dominated by the long-distance window. Some hybrid approaches have been employed to tackle that: the R-ratio has been used to compute that contribution, which corresponds to the low energy spectrum of e+e- data, below the rho-resonance region. That is nice because there are no tensions between the different datasets for the 2pi in that energy range.

ohwilleke said:
It almost makes more sense to assume the experimental muon g-2 result is a pure SM result and to use that as a way to make a high precision strong force coupling constant determination, although, of course, that defeats the goal of using muon g-2 to identify BSM physics. You wouldn't even have to solve it analytically. You could just try a few strong force coupling constant values in the approximately right direction and magnitude and see which one hits the LO-HVP inferred from the muon g-2 experimental result most closely, in otherwise unchanged lattice QCD setups used to calculate LO-HVP.
After the precise LQCD calculations, I don't think anyone takes the g-2 as a signal of new physics. Concerning using LQCD results to compute the strong coupling: this has already been done. But I don't think it is as easy as you suggest. It requires computing the hadronic vacuum polarization function in LQCD and then matching it to the pQCD computation plus non-perturbative contributions. That function has been computed in LQCD by some groups because it can be used to calculate the LO-HVP, but the part corresponding to the low-energy region is not very precise. I think that is discussed in the 2020 White Paper. As far as I understand, the main reason to introduce the "window" approach was to avoid using that part and therefore increase the precision of the results. Anyway, the MUonE experiment intends to determine that part of the HVP function, so that might be interesting.
 
  • #96
GlitchedGluon said:
What do you mean? I think there is a whole section in the White Paper discussing different procedures to average those results. It seems challenging, because the calculation is split into three different parts (or "windows"), and only BMW and Mainz have computed all of them.
It is averaging the results and the uncertainty in those results, but the White Paper doesn't discuss where the uncertainty in each of the calculations contributing to that average comes from, which is what you really need to know to improve your results.
GlitchedGluon said:
Well, I think that with "better data" they don't necessarily mean "new" data. Different collaborations intend to re-analyze their raw data and, hopefully, reduce uncertainties or resolve systematic effects. This is also discussed in the White Paper.
Re-analysis of the raw data is unlikely to help much. It is very rare for such a re-analysis to make much of a difference in my experience. The CDF W boson mass data re-analysis is an example of how badly that approach can go. Re-analysis might identify additional sources of systematic error that were omitted or underestimated the first time around, but it is unlikely to shift the bottom line result meaningfully.

Put another way, HEP physicists from 25 years ago were just as smart and good at analysis as HEP physicists are today. Most of the progress in the last 25 years has been in improved instrumentation (and from the brute force of collecting more data) rather than being a result of improved analysis.
GlitchedGluon said:
How does the strong coupling constant impact the uncertainty of the calculations? Where do they use it? I have never read anything concerning that.
Every term (except possibly one integration constant per calculation) in a QCD calculation has powers of the strong coupling constant in it. There are hundreds of thousands, if not millions, of such terms going into a lattice QCD calculation like the leading order hadronic vacuum polarization number (each of which corresponds to a possible Feynman diagram). This is too granular to put in a summary document like the White Paper.
GlitchedGluon said:
As far as I can see, the uncertainty is dominated by the long-distance window. Some hybrid approaches have been employed to tackle that: the R-ratio has been used to compute that contribution, which corresponds to the low energy spectrum of e+e- data, below the rho-resonance region. That is nice because there are no tensions between the different datasets for the 2pi in that energy range.
A lot of the uncertainty is irreducible due to the uncertainty in the physical constants that go into the underlying calculations. Unless you can find a calculation in which the physical constants cancel out, and this isn't one of them, you are fundamentally limited in calculation precision to the precision of the physical constants you are putting into the calculation. It's a little bit more complicated than that, but not much.
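As a minimal illustration of that point, with invented inputs and an invented "observable", a simple Monte Carlo propagation shows how the result's precision is capped by the precision of whatever goes into it:

```python
# Minimal sketch: if a computed quantity depends on input constants that
# carry their own uncertainties, those uncertainties propagate into the
# result no matter how precise the calculation itself is. The function
# and the numbers below are invented for illustration only.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical inputs: central value and 1-sigma uncertainty
alpha_s = rng.normal(0.1180, 0.0009, N)  # ~0.8% relative uncertainty
m_q = rng.normal(93.4, 0.8, N)           # ~0.9% relative uncertainty

# Invented "observable" depending on both inputs
observable = alpha_s**2 * np.log(m_q)

print(f"relative uncertainty of the result: "
      f"{observable.std() / observable.mean():.2%}")
```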
GlitchedGluon said:
After the precise LQCD calculations, I don't think anyone takes the g-2 as a signal of new physics. Concerning using LQCD results to compute the strong coupling: this has already been done.
The strong coupling constant is only known to a precision of about one part per hundred or so, so while it has been done, the efforts haven't been terribly fruitful so far. Yet, lots of directly measured quantities that you use the strong force coupling constant to calculate are known to far greater precision (e.g. the masses of the proton, neutron, and pion).

One of the virtues of using muon g-2 to make the calculation is that it is a very "clean" calculation that isn't confounded by issues like imperfect detection rates of particle decays, and uncertainties related to jet energies, which are pervasive and major sources of error in most attempts to extract the strong force coupling constant from experimental data generated at colliders.

A closely related problem, however, which isn't solved by a clean calculation, is that the gluon propagator involves an infinite series that isn't truly convergent and gets worse rather than better after a smaller number of terms than the EW propagators do. So there are intrinsic limits to how precisely the strong force coupling constant can be reverse-engineered with current calculation methods.
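A toy example of that kind of behavior (this is a generic asymptotic series, not a QCD calculation): the partial sums of ##\sum_n n!\,(-x)^n## approximate ##\int_0^\infty e^{-t}/(1+xt)\,dt## only up to an optimal order of roughly ##1/x##, after which they get worse, and a larger ##x## means fewer useful terms:

```python
# Toy illustration of an asymptotic (non-convergent) series. The partial
# sums approach the true value of the integral only up to an optimal
# truncation order ~ 1/x and then blow up; a larger "coupling" x means
# fewer useful terms. Generic math, not a QCD computation.

import math
from scipy.integrate import quad

def true_value(x):
    return quad(lambda t: math.exp(-t) / (1.0 + x * t), 0, math.inf)[0]

def partial_sum(x, order):
    return sum(math.factorial(n) * (-x) ** n for n in range(order + 1))

for x in (0.1, 0.3):
    exact = true_value(x)
    errors = [abs(partial_sum(x, n) - exact) for n in range(20)]
    best_order = min(range(20), key=lambda n: errors[n])
    print(f"x = {x}: partial sums are closest to the exact value at order "
          f"{best_order}, and get worse beyond that")
```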
 
  • #97
ohwilleke said:
It is averaging the results and the uncertainty in those results, but the White Paper doesn't discuss where the uncertainty in each of the calculations contributing to that average comes from, which is what you really need to know to improve your results.
The whole of Section 3 is dedicated to discussing the lattice results. There are several tables showing the different contributions along with their uncertainties, for the individual calculations and for the averages. I think the different sources of uncertainty and their impact on the results are very well known.

ohwilleke said:
Re-analysis of the raw data is unlikely to help much. It is very rare for such a re-analysis to make much of a difference, in my experience. The CDF W boson mass data re-analysis is an example of how badly that approach can go. Re-analysis might identify additional sources of systematic error that were omitted or underestimated the first time around, but it is unlikely to shift the bottom-line result meaningfully.
This does not make sense. There is a huge discrepancy between experimental results for e+e- → 2pi. If those tensions are resolved, it is of course going to help a lot, because when the data sets are averaged, the tensions are accounted for by inflating the uncertainties, and you end up with, well, bigger uncertainties. It is well known that there are issues with the radiative corrections that KLOE used in their analysis, for instance. To give another example, BaBar is performing a re-analysis using a method based on the angular distribution of the decay products to improve the precision, because most of its systematic uncertainty comes from particle identification.
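For anyone unfamiliar with the inflation being described, here is a minimal sketch of a PDG-style combination, with invented measurement values: when the ##\chi^2## of the weighted average signals tension between the inputs, the combined uncertainty is scaled up by ##S=\sqrt{\chi^2/(N-1)}##.

```python
# Sketch of uncertainty inflation in a weighted average of measurements
# that are in tension with each other. The input values are invented.

import numpy as np

values = np.array([506.0, 510.5, 503.8])  # hypothetical 2-pi contributions
errors = np.array([2.0, 1.5, 1.8])

weights = 1.0 / errors**2
mean = np.sum(weights * values) / np.sum(weights)
err = 1.0 / np.sqrt(np.sum(weights))

chi2 = np.sum(weights * (values - mean) ** 2)
scale = max(1.0, np.sqrt(chi2 / (len(values) - 1)))

print(f"average = {mean:.1f} +/- {err:.1f}, inflated to +/- {err * scale:.1f} "
      f"(scale factor S = {scale:.2f})")
```

With these made-up numbers the scale factor comes out around 2, which is exactly the kind of penalty that resolving the tensions would remove.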

With respect to the CDF W-boson mass, I guess you are talking about the result that came out a few years ago. I do not get your reasoning behind that statement. Should re-analyses not be performed, then? I mean, the LHCb flavor anomalies went away with a re-analysis, as did other results like the infamous di-photon resonance.

ohwilleke said:
Put another way, HEP physicists from 25 years ago were just as smart and good at analysis as HEP physicists are today. Most of the progress in the last 25 years has been in improved instrumentation (and from the brute force of collecting more data) rather than being a result of improved analysis.
I do not think this is accurate. There have been a lot of developments in statistical analysis methods, as well as in software and computational power. Take, for instance, machine learning or Bayesian methods.

ohwilleke said:
Every term (except possibly one integration constant per calculation) in a QCD calculation carries powers of the strong coupling constant. There are hundreds of thousands, if not millions, of such terms going into a Lattice QCD calculation like the leading-order hadronic vacuum polarization number (each of which corresponds to a possible Feynman diagram). This is too granular to put in a summary document like the White Paper.
I think you might be mistaking Perturbative QCD for Lattice QCD. In Perturbative QCD you have a perturbative expansion in the strong coupling constant, and then, of course, whatever you compute in perturbation theory is going to depend on it. But this perturbative expansion only works at high energies, because of asymptotic freedom. This is not true for Lattice QCD. The whole point of Lattice QCD is to compute observables in a non-perturbative way, which is useful because you can make computations at low energies, where pQCD does not work. In Lattice QCD, you discretize spacetime, which allows you to compute correlators (their path integrals) numerically. At the end, you take the limit of the lattice spacing going to zero in order to recover QCD. You do not plug any value of the strong coupling into that process.
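A schematic version of that last step, with invented numbers (real analyses involve many ensembles, correlated fits, and finite-volume and discretization systematics; this only shows the ##a \to 0## extrapolation):

```python
# Compute (here, fake) values of an observable at several lattice
# spacings a, fit the leading a^2 dependence, and read off the a -> 0
# (continuum) value. Illustrative only.

import numpy as np

a = np.array([0.12, 0.09, 0.06, 0.045])        # lattice spacings in fm (hypothetical)
obs = np.array([101.8, 100.9, 100.4, 100.2])   # invented observable values

# Fit obs = c0 + c1 * a^2 and take the continuum value c0
c1, c0 = np.polyfit(a**2, obs, 1)
print(f"continuum-limit estimate (a -> 0): {c0:.2f}")
```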

I do not understand the statement "This is too granular to put in a summary document like the White Paper". The White Paper is not a summary; it is a technical report on the status of the computations of the g-2 of the muon and their comparison with the experimental values. A lot of detail is provided. Why would they not mention something if it has an impact on the results?

ohwilleke said:
A lot of the uncertainty is irreducible due to the uncertainty in the physical constants that go into the underlying calculations. Unless you can find a calculation in which the physical constants cancel out, and this isn't one of them, you are fundamentally limited in calculation precision to the precision of the physical constants you are putting into the calculation. It's a little bit more complicated than that, but not much.
What does this have to do with the hybrid approach? They use experimental data to compute the part of the lattice contribution that is the biggest source of uncertainty.

ohwilleke said:
The strong coupling constant is only known to a precision of about one part per hundred or so, so while it has been done, the efforts haven't been terribly fruitful so far. Yet, lots of directly measured quantities that you use the strong force coupling constant to calculate are known to far greater precision (e.g. the masses of the proton, neutron, and pion).
I think those masses are computed in Lattice QCD, which, again, does not use the strong coupling to get the results. The strong coupling is already determined using Lattice QCD: they compute some observable and then match it to a perturbative QCD calculation. I do not think you can do that with masses.
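A toy version of that matching step, with an invented truncated series and an invented "lattice" value, just to show the mechanics (real determinations use specific short-distance observables and higher-order series):

```python
# Solve for the coupling that makes a (made-up) truncated perturbative
# series for some observable agree with a (made-up) non-perturbative
# value of the same observable.

from scipy.optimize import brentq

def R_pert(alpha_s):
    # hypothetical truncated series R = 1 + c1*alpha_s + c2*alpha_s^2
    c1, c2 = 0.32, 0.15
    return 1.0 + c1 * alpha_s + c2 * alpha_s**2

R_lattice = 1.040  # invented non-perturbative value of the same observable

alpha_s_matched = brentq(lambda a: R_pert(a) - R_lattice, 0.01, 0.5)
print(f"matched coupling: {alpha_s_matched:.4f}")
```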

ohwilleke said:
One of the virtues of using muon g-2 to make the calculation is that it is a very "clean" calculation that isn't confounded by issues like imperfect detection rates of particle decays, and uncertainties related to jet energies, which are pervasive and major sources of error in most attempts to extract the strong force coupling constant from experimental data generated at colliders.
What would you propose to get the strong coupling from the experimental g-2 value?

ohwilleke said:
A closely related problem, however, which isn't solved by a clean calculation, is that the gluon propagator involves an infinite series that isn't truly convergent and gets worse rather than better after a smaller number of terms than the EW propagators do. So there are intrinsic limits to how precisely the strong force coupling constant can be reverse-engineered with current calculation methods.
I do not know what you are talking about. Are you referring to the Hadronic Vacuum Polarization?
 
  • Like
  • Informative
Likes weirdoguy and PeroK
