What's Delaying Fermilab's Muon g-2 Results Release?

Thread summary:
Fermilab's E989 experiment is conducting a precision measurement of the muon’s anomalous magnetic moment, with preliminary results initially expected in late 2020. The delay in releasing these results has led to speculation about the reasons, including the possibility of significant findings requiring further verification. Participants in the discussion emphasize the importance of ensuring accuracy before publication, arguing that delays are common in scientific research. The collaboration is expected to announce results in early 2021, with recent updates indicating a new measurement is set for April 7. The anticipation around these results highlights their potential implications for the Standard Model of Particle Physics.
  • #91
The presentation is tomorrow (June 3, 2025) at 10 a.m. CT (11 a.m. ET, 9 a.m. MT, 8 a.m. PT) on YouTube. The link should appear here.
 
  • #92
From this morning's seminar: The new experimental result from Fermilab Runs 1-6 combined is:

0.001165920705(148), which is 125 parts per billion (in one place the seminar said 125 ppb and in another it said 127 ppb). This beats their design goal of 140 ppb.


Including the Brookhaven result in the global experimental average only slightly tweaks this result, because the average is inverse-error weighted and the Brookhaven result has a much greater uncertainty. Runs 4-6, whose results were announced today, slightly pulled up the total value; their result was 5 × 10⁻¹² higher than the overall average.
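As a rough illustration of that inverse-error weighting, the sketch below naively combines the new FNAL number with the Brookhaven E821 value (taken here from memory as roughly 1165920890(630) × 10⁻¹²; consult the paper for the exact input used) and, ignoring correlations, lands close to the quoted world average:

```python
# Crude inverse-variance weighted average of two measurements, ignoring any
# correlations. Values are in units of 1e-12.
fnal_val, fnal_err = 1165920705.0, 148.0  # FNAL Runs 1-6 combined
bnl_val,  bnl_err  = 1165920890.0, 630.0  # BNL E821, approximate, from memory

weights = [1.0 / fnal_err**2, 1.0 / bnl_err**2]
values  = [fnal_val, bnl_val]

avg = sum(w * v for w, v in zip(weights, values)) / sum(weights)
err = sum(weights) ** -0.5

print(f"world average ~ {avg:.0f}({err:.0f}) x 1e-12")
# -> roughly 1165920715(144) x 1e-12, close to the paper's 1165920715(145)
```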


A crude breakdown of the sources of the uncertainty in the final result was shown in a seminar slide.

The 125-127 ppb uncertainty in the experimental result (i.e., 0.001165920705(148)) compares to a 530 ppb uncertainty in the 2025 White Paper predicted value of muon g-2, which is aµ(SM) = 0.00116592033(62).

The experimental value minus the SM prediction is (375 ± 637) × 10⁻¹², a difference of about 0.6 sigma, which is a very strong global confirmation of the Standard Model of Particle Physics at low to moderate energies.

The world average value minus the SM prediction is (385 ± 637) × 10⁻¹², also a difference of about 0.6 sigma, which again strongly confirms the Standard Model at low to moderate energies.
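Here is where the 0.6 sigma figures come from, using only the numbers quoted above and adding the uncertainties in quadrature (a sketch; it assumes no correlation between experiment and theory):

```python
# Tension between experiment and the 2025 White Paper SM prediction, with
# uncertainties combined in quadrature. Values are in units of 1e-12.
a_sm,  s_sm  = 1165920330.0, 620.0  # 2025 White Paper prediction
a_exp, s_exp = 1165920705.0, 148.0  # FNAL Runs 1-6 combined
a_wa,  s_wa  = 1165920715.0, 145.0  # experimental world average

def tension(a, s):
    diff  = a - a_sm
    sigma = (s**2 + s_sm**2) ** 0.5
    return diff, sigma, diff / sigma

print(tension(a_exp, s_exp))  # ~ (375, 637, 0.59)
print(tension(a_wa,  s_wa))   # ~ (385, 637, 0.61)
```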

Most likely, the discrepancy is mostly due to the leading order hadronic vacuum polarization (LO HVP) calculation in the Standard Model prediction being about 0.5% low, which is well within that calculation's roughly ± 0.9% uncertainty.

This means that no new physics is expected that could shift muon g-2 by significantly more than the uncertainty in this result. The experimental precision is now about four times better than that of the Standard Model prediction. Realistically, this means that no new physics is expected at a next-generation particle collider (at least none of the kinds that could influence muon g-2, which covers almost, but not quite, every possibility).

The paper is available here. Its abstract states:

A new measurement of the magnetic anomaly aµ of the positive muon is presented based on data taken from 2020 to 2023 by the Muon g−2 Experiment at Fermi National Accelerator Laboratory (FNAL). This dataset contains over 2.5 times the total statistics of our previous results. From the ratio of the precession frequencies for muons and protons in our storage ring magnetic field, together with precisely known ratios of fundamental constants, we determine aµ = 1165920710(162) × 10⁻¹² (139 ppb) for the new datasets, and aµ = 1165920705(148) × 10⁻¹² (127 ppb) when combined with our previous results. The new experimental world average, dominated by the measurements at FNAL, is aµ(exp) = 1165920715(145) × 10⁻¹² (124 ppb). The measurements at FNAL have improved the precision on the world average by over a factor of four.

 
  • #93
125 ppb on the anomalous part is a 0.15 ppb uncertainty on the magnetic moment.

From the experimental side, this looks done for now. J-PARC will check it with a different approach for confirmation, but the comparison really depends on theory uncertainties now.
 
  • #94
mfb said:
125 ppb on the anomalous part is a 0.15 ppb uncertainty on the magnetic moment.
I was thinking the same thing. It is interesting that they don't present it that way, since it sounds more impressive (because it is more impressive, and it is really a more accurate description of the precision of their work, given that they are measuring the magnetic moment and not just the anomalous part).
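As a quick back-of-the-envelope check of that conversion (a sketch only), the relative uncertainty on g = 2(1 + aµ) is smaller than the relative uncertainty on aµ by roughly a factor of aµ itself:

```python
# Propagate the ~127 ppb relative uncertainty on the anomaly a_mu into a
# relative uncertainty on the full g-factor, g = 2 * (1 + a_mu).
a_mu     = 1165920705e-12      # measured anomaly (combined FNAL result)
rel_on_a = 127e-9              # ~127 ppb relative uncertainty on a_mu

abs_on_a = rel_on_a * a_mu     # absolute uncertainty on a_mu
g        = 2.0 * (1.0 + a_mu)
rel_on_g = 2.0 * abs_on_a / g  # same absolute error, carried into g

print(f"{rel_on_g * 1e9:.2f} ppb on g")  # ~0.15 ppb
```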
mfb said:
From the experimental side, this looks done for now. J-PARC will check it with a different approach for confirmation, but the comparison really depends on theory uncertainties now.
I agree. J-PARC is expected to be less precise than this result, so it is really a check on the robustness and replicability of the result, rather than an effort to get more precision.

It is also interesting that the theory uncertainties are so well understood. We don't just know that the uncertainty mostly comes from the QCD contribution, or even that it mostly comes from the HVP contribution; we know that it mostly comes from the leading order HVP contribution, and not from the NLO-HVP or NNLO-HVP contributions. So, the brute force approach of doing the calculation out to more loops doesn't help.

Looking at the error budget of the LO-HVP calculations would be the next step (the 2025 White Paper just averaged the recent high quality LO-HVP calculations without looking at them one by one, so it doesn't discuss that).

The 2025 White Paper suggests that their strategy is to do more on the data-driven side: to try to get better data and to use it to determine the HVP contribution with greater precision. I'm skeptical that this can be done in "a few years" as claimed, although it might be possible eventually; I'd give it a decade or two minimum. And, without diving too deep into partisan politics (which the acting Fermilab director alluded to in her YouTube presentation, delivered remotely from D.C. since she was making presentations to Congressional committees on the issue), pure science research funding prospects in the U.S. over the next four years don't look good, which will slow down scientific research on all fronts and may send a lot of U.S. researchers abroad, disrupting existing U.S.-based research programs.

But the Lattice QCD approach is kind of up against a wall. BMW and the other groups doing the calculations really pulled out all the stops to achieve it, but they can't do better than the uncertainty in their experimentally measured SM inputs (especially the strong force coupling constant) permits. And the rate of improvement in the precision of the strong force coupling constant has been painfully slow.

It almost makes more sense to assume the experimental muon g-2 result is a pure SM result and to use it to make a high precision determination of the strong force coupling constant, although, of course, that defeats the goal of using muon g-2 to identify BSM physics. You wouldn't even have to solve it analytically. You could just try a few strong force coupling constant values, in approximately the right direction and of the right magnitude, in otherwise unchanged lattice QCD setups for calculating LO-HVP, and see which one most closely reproduces the LO-HVP inferred from the experimental muon g-2 result.
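A minimal sketch of that scan idea, purely schematic: lo_hvp_from_alpha_s, its linear "sensitivity", and the target value below are all invented placeholders standing in for a full lattice computation (which in reality costs enormous compute per evaluation):

```python
# Schematic parameter scan: try a few candidate values of alpha_s(M_Z), re-run
# the (hypothetical) lattice LO-HVP computation with each, and keep the one
# landing closest to the LO-HVP inferred from the experimental muon g-2 under
# the assumption that the Standard Model is exact.

def lo_hvp_from_alpha_s(alpha_s_mz):
    """Toy stand-in for an otherwise unchanged lattice-QCD LO-HVP calculation
    re-run with a different alpha_s(M_Z) input. The linear sensitivity used
    here is invented purely so the example runs; it is not a real result."""
    ref_alpha, ref_hvp, toy_slope = 0.1180, 7070e-11, 2.0  # placeholders
    return ref_hvp * (1.0 + toy_slope * (alpha_s_mz - ref_alpha) / ref_alpha)

target_lo_hvp = 7105e-11  # placeholder for LO-HVP inferred from experiment

candidates = [0.1175, 0.1180, 0.1185, 0.1190]
best = min(candidates, key=lambda a: abs(lo_hvp_from_alpha_s(a) - target_lo_hvp))
print("best-fitting alpha_s(M_Z):", best)  # prints 0.1185 with these toy numbers
```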

The Particle Data Group value for the strong force coupling constant at Z boson energies is 0.1180(9). The muons in the g-2 experiment are kept very tightly at the "magic" momentum of about 3.1 GeV/c, chosen so that the electric-field term in the spin-precession formula cancels out, so converting from the 91.188 GeV scale down to roughly 3.1 GeV using the strong force coupling constant beta function would insert virtually no uncertainty of its own. (FWIW, Google AI thinks that the strong force coupling constant at 3.1 GeV is about 0.236, although one should take that with a huge grain of salt.)
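For what it's worth, even a crude one-loop running sketch (my own simplification: a single b-quark threshold taken at about 4.18 GeV, no higher-loop terms, no charm threshold) lands in the same ballpark at 3.1 GeV:

```python
import math

def alpha_s_one_loop(alpha_start, mu_start, mu_end, nf):
    """One-loop running of the strong coupling between two scales (GeV) with a
    fixed number of active quark flavors nf."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    inv_alpha = 1.0 / alpha_start + b0 * math.log(mu_end**2 / mu_start**2)
    return 1.0 / inv_alpha

# Start from the PDG-style value at the Z mass, run down to the b-quark
# threshold with nf = 5, then continue to 3.1 GeV with nf = 4.
alpha_mz = 0.1180
alpha_mb = alpha_s_one_loop(alpha_mz, 91.188, 4.18, nf=5)
alpha_31 = alpha_s_one_loop(alpha_mb, 4.18, 3.1, nf=4)

print(f"alpha_s(3.1 GeV) ~ {alpha_31:.3f}")  # roughly 0.23 at one loop
```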

My intuition is that if you did that, you'd end up with a strong force coupling constant at Z boson energy of something like 0.1185(2). Honestly, an improvement like that in the measurement of this particular physical constant would be huge for all QCD calculations and hadronic physics (and for making determinations of the quark masses from existing data), maybe more valuable scientifically than ruling out new physics.
 

  • #95
ohwilleke said:
Looking at the error budget of the LO-HVP calculations would be the next step (the 2025 White Paper just averaged the recent high quality LO-HVP calculations without looking at them one by one, so it doesn't discuss that).
What do you mean? I think there is a whole section in the White Paper discussing different procedures to average those results. It seems challenging, because the calculation is split into three different parts (or "windows"), and only BMW and Mainz have computed all of them.

ohwilleke said:
The 2025 White Paper suggests that their strategy is to do more on the data driven side to try to get better data and to use it to increase the HVP with greater precision. I'm skeptical that this can be done in "a few years" as claimed, although it might be possible eventually. I'd give it a decade or two minimum, however (and without diving too deep into partisan politics, which the acting Fermilab director alluded to her in YouTube presentation delivered remotely from D.C. since she was making presentations of Congressional committees on the issue, pure science research funding prospects in the U.S. in the next four years don't look good which will slow down scientific research on all fronts and may send a lot of U.S. researchers abroad disrupting existing U.S. based research programs).
Well, I think that by "better data" they don't necessarily mean "new" data. Different collaborations intend to re-analyze their raw data and, hopefully, reduce uncertainties or resolve systematic effects. This is also discussed in the White Paper.

ohwilleke said:
But the Lattice QCD approach is kind of up against a wall. BMW and other others doing the calculations really pulled out all stops to achieve it, but can't get better than the uncertainty in their experimentally measured SM constants inputs (especially the strong force coupling constant) permits. And, the rate of improvement in the precision of the strong force coupling constant has been painfully slow
How does the strong coupling constant impact the uncertainty of the calculations? Where do they use it? I have never read anything concerning that. As far as I can see, the uncertainty is dominated by the long-distance window. Some hybrid approaches have been employed to tackle that: the R-ratio has been used to compute that contribution, which corresponds to the low energy spectrum of e+e- data, below the rho-resonance region. That is nice because there are no tensions between the different datasets for the 2π channel in that energy range.
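For context, the data-driven evaluation being described uses, schematically, the standard dispersion integral over the measured R-ratio (kernel normalization conventions vary between references, so treat this as a sketch):

$$a_\mu^{\mathrm{HVP,\,LO}} \;=\; \frac{\alpha^{2}}{3\pi^{2}} \int_{s_{\mathrm{thr}}}^{\infty} \frac{ds}{s}\, K(s)\, R(s), \qquad R(s) \;=\; \frac{\sigma\!\left(e^{+}e^{-} \to \mathrm{hadrons}\right)}{\sigma\!\left(e^{+}e^{-} \to \mu^{+}\mu^{-}\right),}$$

where K(s) is a known, slowly varying QED kernel and the steep weighting toward low s is what makes the 2π (rho-resonance) region dominate both the integral and its uncertainty.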

ohwilleke said:
It almost makes more sense to assume the experimental muon g-2 result is a pure SM result and to use that as a way to make a high precision strong force coupling constant determination, although, of course, that defeats the goal of using muon g-2 to identify BSM physics. You wouldn't even have to solve it analytically. You could just try a few strong force coupling constant values in the approximately right direction and magnitude and see which one hit the LO-HVP inferred from the muon g-2 experimental result most closely, in otherwise unchanged lattice QCD setups to calculate LO-HVP.
After the precise LQCD calculations, I don't think anyone takes the g-2 as a signal of new physics. Concerning using LQCD results to compute the strong coupling: this has already been done. But I don't think it is as easy as you suggest. It requires computing the Hadronic Vacuum Polarization function in LQCD and then matching it to the pQCD computation plus non-perturbative contributions. That function has been computed in LQCD by some groups because it can be used to calculate the LO-HVP, but the part corresponding to the low-energy region is not very precise. I think that is discussed in the 2020 White Paper. As far as I understand, the main reason to introduce the "window" approach was to avoid using that and thereby increase the precision of the results. Anyway, the MUonE experiment intends to determine that part of the HVP function, so that might be interesting.
 
  • #96
GlitchedGluon said:
What do you mean? I think there is a whole section in the White Paper discussing different procedures to average those results. It seems challenging, because the calculation is split into three different parts (or "windows"), and only BMW and Mainz have computed all of them.
It is averaging the results and the uncertainty in those results, but the White Paper doesn't discuss where the uncertainty in each of the calculations contributing to that average comes from, which is what you really need to know to improve your results.
GlitchedGluon said:
Well, I think that with "better data" they don't necessarily mean "new" data. Different collaborations intend to re-analyze their raw data and, hopefully, reduce uncertainties or resolve sistematic effects. This is also discussed in the White Paper.
Re-analysis of the raw data is unlikely to help much. It is very rare for such a re-analysis to make much of a difference, in my experience. The CDF W boson mass data re-analysis is an example of how badly that approach can go. Re-analysis might identify additional sources of systematic error that were omitted or underestimated the first time around, but it is unlikely to shift the bottom line result meaningfully.

Put another way, HEP physicists from 25 years ago were just as smart and good at analysis as HEP physicists are today. Most of the progress in the last 25 years has been in improved instrumentation (and from the brute force of collecting more data) rather than being a result of improved analysis.
GlitchedGluon said:
How does the strong coupling constant impact the uncertainty of the calculations? Where do they use it? I have never read anything concerning that.
Every term (except possibly one integration constant per calculation) in a QCD calculation has powers of the strong coupling constant in it. There are hundreds of thousands, if not millions, of such terms that go into a Lattice QCD calculation like the leading order hadronic vacuum polarization number (each of which corresponds to a possible Feynman diagram). This is too granular to put in a summary document like the White Paper.
GlitchedGluon said:
As far as I can see, the uncertainty is dominated by the long-distance window. Some hybrid approaches have been employed to tackle that: the R-ratio has been used to compute that contribution, which corresponds to the low energy spectrum of e+e- data, below the rho-resonance region. That is nice because there are no tensions between the different datasets for the 2pi in that energy range.
A lot of the uncertainty is irreducible due to the uncertainty in the physical constants that go into the underlying calculations. Unless you can find a calculation in which the physical constants cancel out, and this isn't one of them, you are fundamentally limited in calculation precision to the precision of the physical constants you are putting into the calculation. It's a little bit more complicated than that, but not much.
GlitchedGluon said:
After the precise LQCD calculations, I don't think anyone takes the g-2 as a signal of new physics. Concerning using LQCD results to compute the strong coupling: this has already been done.
The strong coupling constant is only known to a precision of about one part per hundred or so, so while it has been done, the efforts haven't been terribly fruitful so far. Yet, lots of directly measured quantities that you use the strong force coupling constant to calculate are known to far greater precision (e.g. the masses of the proton, neutron, and pion).

One of the virtues of using muon g-2 to make the calculation is that it is a very "clean" measurement that isn't confounded by issues like imperfect detection rates of particle decays, or uncertainties related to jet energies, which are pervasive and major sources of error in most attempts to extract the strong force coupling constant from experimental data generated at colliders.

A closely related problem, however, which isn't solved by a clean measurement, is that the gluon propagator involves an infinite series that isn't truly convergent and gets worse rather than better after a smaller number of terms than the EW propagators do. So, there are intrinsic limits to how precise a reverse engineering of the strong force coupling constant is possible with current calculation methods.
 
  • #97
ohwilleke said:
It is averaging the results and the uncertainty in those results, but the White Paper doesn't discuss where the uncertainty in each of the calculations contributing to that average comes from, which is what you really need to know to improve your results.
The whole of Section 3 is dedicated to discussing the lattice results. There are several tables showing the different contributions along with their uncertainties, for the individual calculations and the averages. I think the different sources of uncertainty and their impact on the results are very well known.

ohwilleke said:
Re-analysis is the raw data is unlikely to help much. It is very rare for such a re-analysis to make much of a difference in my experience. The CDF W boson mass data re-analysis is an example of how badly that approach can go. Re-analysis might identify additional sources of systemic error that were omitted or underestimated the first time around, but it is unlikely to shift the bottom line result meaningfully.
This does not make sense. There is a huge discrepancy between experimental results for e+e- to 2π. If those tensions are resolved, of course it is going to help a lot, because when the data sets are averaged taking those tensions into account by inflating the uncertainties, you get, well, bigger uncertainties. It is well known that there are issues with the radiative corrections that KLOE used in their analysis, for instance. To give another example, BaBar is performing a re-analysis using a method based on the angular distribution of the decay products to improve the precision, because most of the systematic uncertainty comes from particle identification.

With respect to the CDF W-boson mass, I guess you are talking about the result that came out a few years ago. I do not get your reasoning behind that statement. Should re-analyses not be performed, then? I mean, the LHCb flavor anomalies went away with a re-analysis, as did other results like the infamous di-photon resonance.

ohwilleke said:
Put another way, HEP physicists from 25 years ago were just as smart and good at analysis as HEP physicists are today. Most of the progress in the last 25 years has been in improved instrumentation (and from the brute force of collecting more data) rather than being a result of improved analysis.
I do not think this is accurate. There have been a lot of developments in statistical analysis methods, as well as in software and computational power. Take, for instance, Machine Learning or Bayesian methods.

ohwilleke said:
Every term (except for possibly one integration constant per calculation) in all QCD calculations have terms that have powers of the strong coupling constant in them. There are hundreds of thousands, if not millions, of such terms that go into a Lattice QCD calculation like the leading order hadronic vacuum polarization number (each of which corresponds to possible Feynman diagrams). This is too granular to put in a summary document like the White Paper.
I think you might be mistaking Perturbative QCD for Lattice QCD. In Perturbative QCD you have a perturbative expansion in the strong coupling constant, and then, of course, whatever you compute in perturbation theory is going to depend on that. But this perturbative expansion only works at high energies because of Asymptotic Freedom. That is not true for Lattice QCD. The whole point of Lattice QCD is to compute observables in a non-perturbative way, which is useful because you can make computations at low energies, where pQCD does not work. In Lattice QCD, you discretize spacetime, and this allows you to compute correlators (their path integrals) numerically. At the end, you take the limit of lattice spacing going to zero in order to recover QCD. You do not plug any value of the strong coupling into that process.
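As a toy illustration of that last step (the continuum limit), consider the sketch below; the data are invented, and in a real analysis the fit ansatz, correlations, and finite-volume effects all matter. Leading discretization errors typically scale like a², so one fits the observable against a² and reads off the a → 0 intercept:

```python
import numpy as np

# Toy continuum extrapolation: fit O(a) = O_cont + c * a**2 to results obtained
# at several lattice spacings, then read off the a -> 0 intercept.
a_values    = np.array([0.12, 0.09, 0.06, 0.04])      # lattice spacings (fm)
observables = np.array([0.712, 0.706, 0.702, 0.700])  # invented lattice results

# Linear least-squares fit in the variable a^2; intercept is the continuum value.
slope, o_continuum = np.polyfit(a_values**2, observables, deg=1)

print(f"continuum-limit estimate: {o_continuum:.4f}")
```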

I do not understand "This is too granular to put in a summary document like the White Paper". The White Paper is not a summary; it is a technical report on the status of the computations of the g-2 of the muon and their comparison with the experimental values. A lot of detail is provided. Why would they not mention something if it has an impact on the results?

ohwilleke said:
A lot of the uncertainty is irreducible due to the uncertainty in the physical constants that go into the underlying calculations. Unless you can find a calculation in which the physical constants cancel out, and this isn't one of them, you are fundamentally limited in calculation precision to the precision of the physical constants you are putting into the calculation. It's a little bit more complicated than that, but not much.
What does this have to do with the hybrid approach? They use experimental data to compute the part of the lattice contribution that yields the biggest source of uncertainty.

ohwilleke said:
The strong coupling constant is only known to a precision of about one part per hundred or so, so while it has been done, the efforts haven't been terribly fruitful so far. Yet, lots of directly measured quantities that you use the strong force coupling constant to calculate are known to far greater precision (e.g. the masses of the proton, neutron, and pion).
I think those masses are computed in Lattice QCD which, again, does not use the strong coupling to get the results. The strong coupling is already determined using Lattice QCD. The way they do it is that they compute some observable and then they match it to a perturbative QCD calculation. I do not think you can do that with masses.

ohwilleke said:
One of the virtues of using muon g-2 to make the calculation is that is a very "clean" calculation that isn't confounded by issues like imperfect detection rates of particle decays, and uncertainties related to jet energies, that are pervasive and major sources of error in most attempts to extra the strong force coupling constant from experimental data generated at colliders.
What would you propose to get the strong coupling from the experimental g-2 value?

ohwilleke said:
A closely related problem, however, which isn't solved by a clean calculation, is that the gluon propagator involves an infinite series that isn't truly convergent and gets worse rather than better after a smaller number of terms the the EW propagators. So, there are intrinsic limits to how precise a reverse engineering of a strong force coupling constant is possible with current calculation methods.
I do not know what you are talking about. Are you referring to the Hadronic Vacuum Polarization?
 