What's Delaying Fermilab's Muon g-2 Results Release?


Discussion Overview

The discussion revolves around the delays in releasing the results of Fermilab's E989 experiment, which is measuring the anomalous magnetic moment of the muon (muon g-2). Participants explore potential reasons for the delay, including the implications of the results and the ethical considerations surrounding publication timing.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation

Main Points Raised

  • Some participants express concern over the lack of communication regarding the delay in results, speculating whether it could be due to significant findings requiring additional verification.
  • Others argue that the collaboration is under no obligation to rush the publication and that ensuring accuracy is paramount, suggesting that delays are common in precision measurements.
  • There is a discussion about the ethical implications of delaying publication, with some asserting that it is acceptable to take time to ensure confidence in results.
  • One participant raises the point that once the data is unblinded, the analysis should not be altered based on the results, although they acknowledge the need to investigate any anomalies that may arise.
  • Concerns are voiced about the potential for speculation when a collaboration fails to meet a previously announced publication date, with suggestions that transparency about delays could mitigate misunderstandings.
  • Some participants question whether the anomaly in results would definitively indicate new physics or if it could stem from calculation errors, reflecting uncertainty in interpreting the implications of the findings.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the reasons for the delay or the appropriateness of the collaboration's communication practices. There are competing views on the ethical considerations of delaying publication and the implications of the results.

Contextual Notes

Participants acknowledge that without insider knowledge of the collaboration, speculation about the reasons for the delay may not be grounded in fact. The discussion highlights the complexities of scientific communication and the challenges faced by large collaborations.

  • #31
This is generating a huge amount of noise in the mainstream press in the UK: national news last night and a feature on breakfast television this morning!

Batten down the hatches Mentors, I see a storm on its way...
 
Likes: PeroK
  • #32
AndreasC said:
there's another issue, which is the curvature of the Earth
No, it's a circle. Think about it.
 
Likes: AndreasC
  • #33
pbuk said:
No, it's a circle. Think about it.
Oh fair.
 
  • #34
Well, according to a lattice calculation of the leading hadronic contributions by the Wuppertal group, the SM prediction may again be closer to the new measurement:

https://www.nature.com/articles/s41586-021-03418-1

It seems as if ##(g_{\mu}-2)## stays exciting on the theory side as well!
 
Likes: Demystifier
  • #35
Love: gwnorth
  • #36
First, this thread started because one member felt that the experiment had the results in January but was withholding them because they were hiding a problem. We know now that was totally untrue. Somebody made it up and then it was used to cast aspersions on the scientific team's competence, integrity, or both.

Second, it is also not the case that all new physics must affect g-2. It's actually quite easy: 2HDM with a light h and H and a heavy A, H+ and H-. One might even say "trivial". I'm not even a theorist and it took me less time to think of one than to type it. It may be relevant that the electroweak contribution is in the seventh significant digit, so a W' and Z' that were a factor of ~3 heavier (long excluded by direct searches) would be invisible here.
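To make that scaling explicit (a rough, generic decoupling estimate, not anything taken from the PRL or the quoted posts): the contribution of a heavy particle of mass ##M## to ##a_\mu## typically falls off as ##m_\mu^2/M^2##, so

$$\Delta a_\mu(M') \;\sim\; \Delta a_\mu(M)\left(\frac{M}{M'}\right)^{2}, \qquad \frac{M'}{M}\approx 3 \;\Rightarrow\; \Delta a_\mu(M') \approx \frac{1}{9}\,\Delta a_\mu(M),$$

an order-of-magnitude suppression that pushes an already tiny electroweak-sized contribution well below the current sensitivity.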

Third, there seems to be the feeling that 4.2 sigma means "new physics". If you go to the theory paper (Ref. [13] in the PRL) you can see in Figure 1 that the calculation is well within the "no new physics" band. Also, the BMW collaboration has a calculation they say is right on the money.

Fourth, as Chris Polly said, this assumes there is no problem with the technique. Such a problem does not need to be large - this is a 460 ppb measurement. There is a long history of different techniques giving different results - two recent ones are the neutron lifetime and the proton radius. This is why the JPARC experiment is so important. It would be important even if it were less sensitive than the FNAL experiment (as it stands, the two have comparable targets).
 
Likes and Informative: PeroK, weirdoguy, ohwilleke and 2 others
  • #37
ohwilleke said:
In China, it comes from coal fired power plants with very few emissions controls.
BTW, this post of mine reminded me of the great and catchy tune by Level 42:
 
  • #40
gmax137 said:
Apparently it depends on who is asked.

Just like the SM prediction for g-2. 😈
 
Haha and Likes: ohwilleke, vanhees71, Demystifier and 2 others
  • #42
mfb said:
Reaching 750 times the energy with the LHC technology would need a ring 750 times as large, ~20,000 km circumference. Even if you double the field strength it's still 10000 km. Europe doesn't have space for that, but in North America it could just fit between Hudson Bay, Mexico, Washington and Washington DC. At least if we ignore all financial and technical problems.
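As a quick back-of-the-envelope check of those numbers (a sketch assuming the reachable beam energy scales linearly with both dipole field and ring radius, and taking the LHC's roughly 26.7 km circumference as the baseline):

[CODE=python]
# Back-of-the-envelope ring-size scaling: for a fixed magnet technology the reachable
# beam energy grows roughly linearly with the ring radius (E ~ B * R), so 750x the
# energy at LHC-like field needs ~750x the circumference.
LHC_CIRCUMFERENCE_KM = 26.7   # approximate LHC circumference
ENERGY_FACTOR = 750           # target energy relative to the LHC, from the quoted post

for field_factor in (1, 2):   # LHC-like dipole field, or double the field strength
    circumference_km = LHC_CIRCUMFERENCE_KM * ENERGY_FACTOR / field_factor
    print(f"field x{field_factor}: ~{circumference_km:,.0f} km circumference")

# Prints roughly 20,000 km and 10,000 km, matching the estimate quoted above.
[/CODE]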

They should build it in outer space :smile:
 
Likes: ohwilleke
  • #43
gmax137 said:
Not that it is important to this conversation, but no.
Also recall NAFTA, the "North American Free Trade Agreement" between Canada, the US, and Mexico.
I am always right even when I am not!
 
  • #44
mfb said:
in North America it could just fit between Hudson Bay, Mexico, Washington and Washington DC. At least if we ignore all financial and technical problems.

**** it, let's do it.
 
  • #45
JLowe said:
**** it, let's do it.
It's called "dang", your curse word... :oldbiggrin:
 
  • #46
I expect JPARC to end up close to the Fermilab value, and eventually most theory predictions to end up at the same value. The BMW prediction is matching the experimental result.

At least the accelerators for g-2 experiments are nice and compact. Here are some 3000 km diameter circles. Note the map distortions.
 
Likes: ohwilleke and vanhees71
  • #47
But it would be so much more fun if E34 gets the g-2 Theory Initiative value, and FNAL continues to match the BMW value.
 
Haha: ohwilleke and vanhees71
  • #48
mfb said:
If there are two SM predictions and only one agrees with measurements...
At least we will learn more about SM predictions of hadronic effects.
Do you have any opinion about this second model? Is it real?
https://www.nature.com/articles/s41586-021-03418-1
 
  • #49
That's the above-mentioned lattice-QCD calculation of the leading hadronic contribution to ##(g-2)## by the Wuppertal (BMW) lattice-QCD collaboration. It's at least a hint that the prediction on the theory side still has to be consolidated. If I understand it right, what's compared to the measurement is a theoretical calculation using empirical input for the said hadronic contributions, obtained from dispersion-relation analyses of the data, and afaik that fitting is a tricky business of its own.

Of course, the lattice calculation also has to be solidified and checked by other lattice collaborations, since lattice-QCD calculations are a tricky business too (I'll only mention the long debate about the deconfinement and/or chiral-transition temperature, which finally settled at the lower value of around 155 MeV predicted by the Wuppertal group ;-)).

Whether or not the ##(g-2)## results are really hints of "physics beyond the Standard Model" still seems to be an exciting question.
 
Likes: PeroK and exponent137
  • #50
gwnorth said:
Central America is part of North America.
I hate this.

EDIT: To end the argument once and for all, Mexico is part of central southern north America. Either that or southern central north America. Perhaps both.
 
  • #51
vanhees71 said:
That's the above-mentioned lattice-QCD calculation of the leading hadronic contribution to ##(g-2)## by the Wuppertal (BMW) lattice-QCD collaboration. It's at least a hint that the prediction on the theory side still has to be consolidated. If I understand it right, what's compared to the measurement is a theoretical calculation using empirical input for the said hadronic contributions, obtained from dispersion-relation analyses of the data, and afaik that fitting is a tricky business of its own.

Of course, the lattice calculation also has to be solidified and checked by other lattice collaborations, since lattice-QCD calculations are a tricky business too (I'll only mention the long debate about the deconfinement and/or chiral-transition temperature, which finally settled at the lower value of around 155 MeV predicted by the Wuppertal group ;-)).

Whether or not the ##(g-2)## results are really hints of "physics beyond the Standard Model" still seems to be an exciting question.
Can't wait until I learn enough QFT for all that to not sound like complete gibberish to me!
 
Haha: Demystifier
  • #52
How does this relate to the LHCb result? I think I get them mixed up. Are they measuring totally separate things that just happen to involve muons? Are both sensitive to the same or similar QCD calculations?
 
  • #53
nolxiii said:
Are they measuring totally separate things that just have to do with muons?
Yes.
 
Likes: vanhees71 and ohwilleke
  • #55
exponent137 said:
What can this article tell us about the g-2 disagreement?
https://www.quantamagazine.org/protons-antimatter-revealed-by-decades-old-experiment-20210224/
At least, it can tell us that the hadrons are not yet well enough understood?

(Although we are talking about muons, the g-2 disagreement comes from the hadronic contributions.)

Not much. The article is about proton structure and the proton parton distribution function (PDF).

The Theory Initiative's white paper basically looks at the propensity of electron-positron collisions to produce pions, and at the properties of the pions produced, in order to avoid having to calculate that contribution from first principles, and then carries that over to the muon g-2 calculation, while the BMW calculation is straight from QCD. The BMW paper argues that the transfer of the electron-positron collision data to the muon g-2 calculation by the Theory Initiative has been done wrong (and an ugly mix of experimental results for parts of a calculation and lattice QCD simulations for other parts of it is certainly an unconventional approach).

In the proton radius case, it turned out that the measurement of the proton radius in muonic hydrogen was spot on, and that the older, less accurate measurement of the proton radius in ordinary (electronic) hydrogen was the source of the discrepancy. We could be seeing something similar here.
 
Likes: vanhees71 and exponent137
  • #56
But indeed the largest uncertainty in the theoretical calculation of ##(g-2)## of the muon comes from the radiative corrections due to the strong interaction (in low-energy language, "the hadronic contributions" or the "hadronic vacuum polarization" (HVP)). If I understand it right, what's usually compared as "theory" to the data uses a semi-empirical approach to determine these hadronic contributions, calculating the needed matrix elements via dispersion relations from measurements of the ##\mathrm{e}^+ + \mathrm{e}^{-} \rightarrow \text{hadrons}## cross section. This is based on very general theoretical input, i.e., the unitarity of the S-matrix, but the devil is in the details, because using the dispersion relations to get the HVP contributions from the data is anything but easy. So I would not be too surprised if the systematic uncertainty of this procedure turns out to be underestimated. After all, there are hints from the lattice (by the Wuppertal/BMW lQCD group) that the HVP contribution may well be such that the discrepancy between "theory" and "data" practically disappears (only about a 1-sigma discrepancy).

Of course, lQCD calculations are a tricky business too. One must not forget that we are talking here about high-accuracy physics, which is never easy to get (neither experimentally nor theoretically).
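For reference, the leading-order dispersion integral behind this data-driven approach can be written schematically as

$$a_\mu^{\mathrm{HVP,\,LO}} \;=\; \frac{\alpha^{2}}{3\pi^{2}}\int_{s_{\mathrm{thr}}}^{\infty}\frac{\mathrm{d}s}{s}\,K(s)\,R(s), \qquad R(s)=\frac{\sigma(e^{+}e^{-}\to\mathrm{hadrons})}{\sigma(e^{+}e^{-}\to\mu^{+}\mu^{-})},$$

where ##K(s)## is a known, smoothly varying QED kernel and ##s_{\mathrm{thr}}## the hadronic threshold. Since the weight ##K(s)/s## strongly emphasizes low ##s##, most of the result comes from the low-energy region (mainly ##e^{+}e^{-}\to\pi^{+}\pi^{-}##), which is exactly where combining and fitting the data is delicate.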
 
Likes: ohwilleke, websterling, PeroK and 1 other person
  • #57
I am not an expert, but I don't believe that any of the theory calculations (of HVP) are pure ab initio calculations. All are ways of relating low energy measurements (done at places like Serpukhov in the 1960's) to g-2.

Had history evolved differently and we had a g-2 measurement first, we would be discussing whether there was an anomaly in the low-energy Serpukhov data.
 
Likes: Astronuc, vanhees71 and exponent137
  • #58
Vanadium 50 said:
I am not an expert, but I don't believe that any of the theory calculations (of HVP) are pure ab initio calculations. All are ways of relating low energy measurements (done at places like Serpukhov in the 1960's) to g-2.

Had history evolved differently and we had a g-2 measurement first, we would be discussing whether there was an anomaly in the low-energy Serpukhov data.
Are you sure?

I had understood lattice QCD to be akin to N-body simulations in cosmology: you discretize the QCD equations, putting the fields on a grid of points in space and time, and then iterate. The description of what they did in their preprint sounds like that is what they did.
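To make the "discretize and iterate" picture concrete: the sketch below is emphatically not lattice QCD (no quarks, no gauge links, two dimensions instead of four, arbitrary toy parameters). It is just a minimal scalar-field example of the same structure: a field living on lattice sites, an action built from local terms, and Monte Carlo iteration over field configurations.

[CODE=python]
# Toy "discretize spacetime and iterate" sketch -- NOT lattice QCD, just a 2D scalar
# field with a Metropolis update, to illustrate the general structure of a lattice
# Monte Carlo calculation: sites on a grid, a local action, repeated stochastic updates.
import numpy as np

rng = np.random.default_rng(1)
L = 16                          # lattice sites per dimension (real QCD: 4D, much larger)
m2, lam = 0.5, 0.1              # toy bare mass^2 and quartic coupling
phi = rng.normal(size=(L, L))   # initial field configuration

def site_action(field, x, y, value):
    """Pieces of the lattice action that depend on the field value at site (x, y)."""
    nn = (field[(x + 1) % L, y] + field[(x - 1) % L, y] +
          field[x, (y + 1) % L] + field[x, (y - 1) % L])        # periodic boundaries
    kinetic = 2.0 * value**2 - value * nn                       # site-dependent part of (1/2) sum_links (dphi)^2
    return kinetic + 0.5 * m2 * value**2 + lam * value**4

for sweep in range(200):                         # "iterate": repeated sweeps over the grid
    for x in range(L):
        for y in range(L):
            old = phi[x, y]
            new = old + rng.normal(scale=0.5)
            dS = site_action(phi, x, y, new) - site_action(phi, x, y, old)
            if dS < 0 or rng.random() < np.exp(-dS):   # Metropolis accept/reject
                phi[x, y] = new

print("toy observable <phi^2> on the final configuration:", np.mean(phi**2))
[/CODE]

Real lattice-QCD calculations replace the scalar field with quark and gluon degrees of freedom on a large 4D lattice and use far more sophisticated algorithms, which is where the hundreds of millions of core hours mentioned below go.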

Quanta Magazine, interviewing the authors, summarizes what was done by the BMW group as follows:
They made four chief innovations. First they reduced random noise. They also devised a way of very precisely determining scale in their lattice. At the same time, they more than doubled their lattice’s size compared to earlier efforts, so that they could study hadrons’ behavior near the center of the lattice without worrying about edge effects. Finally, they included in the calculation a family of complicating details that are often neglected, like mass differences between types of quarks. “All four [changes] needed a lot of computing power,” said Fodor.

The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching, the supercomputers spat out a value for the hadronic vacuum polarization term. Their total, when combined with all other quantum contributions to the muon’s g-factor, yielded 2.00233183908. This is “in fairly good agreement” with the Brookhaven experiment, Fodor said. “We cross-checked it a million times because we were very much surprised.”
 
  • #60
ohwilleke said:
quantamagazine.org said:
The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching...
So I have an off-topic question about this computer time. One hundred million hours is about 11,000 years; split between the seven computers mentioned, that would be what, 1,600 years each? How does that work?
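The unit doing the heavy lifting there is the core-hour (one CPU core running for one hour), so a machine with tens of thousands of cores burns through them very quickly. A rough illustration, with a made-up core count purely for the arithmetic:

[CODE=python]
# Core-hours vs. wall-clock time: one core-hour is one CPU core running for one hour,
# so a machine with ~100,000 cores delivers ~100,000 core-hours per hour of real time.
TOTAL_CORE_HOURS = 300e6       # "several hundred million core hours" (rough figure)
MACHINES = 7                   # supercomputer centres named in the Quanta article
CORES_PER_MACHINE = 100_000    # made-up round number, purely for illustration

wall_clock_hours = TOTAL_CORE_HOURS / (MACHINES * CORES_PER_MACHINE)
print(f"~{wall_clock_hours:.0f} wall-clock hours per machine "
      f"(~{wall_clock_hours / 24:.0f} days)")
# -> roughly 430 hours, i.e. a few weeks per machine under these assumptions,
#    not the 1,600 years you get if you read "core hours" as plain hours.
[/CODE]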
 
