# Scientists may have discovered a new force of nature?

There are two theory predictions, one agrees with the experiment. It's very likely that the other prediction is just incorrect. We already know that at least one of them must be off, and it's probably not the one that agrees with experiment.
Ya but the other one is still 1.6 sigma off right? Still new physics until it isn't.

weirdoguy
ohwilleke
Gold Member
Ya but the other one is still 1.6 sigma off right? Still new physics until it isn't.

The universally accepted practice in physics experiments (and in almost all other disciplines as well) is to consider any result within two sigma of the prediction to be "consistent" with the prediction and statistically insignificant. This means that, if the prediction were exactly right, a random fluctuation at least as large as the observed discrepancy would still occur more than 5% of the time.
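The sigma-to-probability conversion behind that convention is easy to check. Here is a minimal sketch (standard library only) that converts a discrepancy of n sigma into the two-sided Gaussian probability of a fluctuation at least that large; 2 sigma comes out near the familiar 5% cutoff:

```python
# Convert a "sigma" discrepancy into a two-sided p-value for a Gaussian:
# the chance of a fluctuation of at least n_sigma in either direction,
# assuming the prediction is exactly right.
import math

def two_sided_p(n_sigma):
    """Two-sided Gaussian tail probability for a deviation of n_sigma."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in (1, 1.6, 2, 3, 5):
    print(f"{n} sigma -> p = {two_sided_p(n):.2e}")
```

At 2 sigma the probability is about 4.6% (just under the 5% convention), while at 5 sigma it drops below one in a million, which is why 5 sigma is such a demanding discovery threshold.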

If your estimated uncertainty is perfectly determined and your predicted value is exactly right, the root-mean-square difference between experiment and prediction, simply due to random chance, is 1 sigma.

If your results consistently show less than a 1 sigma difference between prediction and experiment, it means that you have overestimated the uncertainty in your measurement.
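A quick Monte Carlo illustrates the point (a sketch; the sample size and seed are arbitrary choices): if the quoted uncertainties are correct, the root-mean-square "pull" (measurement minus prediction, in units of sigma) comes out close to 1, not 0.

```python
# Monte Carlo check: with correctly estimated uncertainties, the
# root-mean-square pull of many measurements is close to 1 sigma.
# Sample size and seed are arbitrary.
import math
import random

random.seed(42)
n = 100_000
# Simulated measurements of a quantity whose true value is 0, with unit uncertainty.
pulls = [random.gauss(0.0, 1.0) for _ in range(n)]
rms = math.sqrt(sum(p * p for p in pulls) / n)
print(f"RMS pull over {n} simulated measurements: {rms:.3f}")  # close to 1
```

A persistent RMS pull well below 1 would be the signature of overestimated uncertainties described above.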

Physicists consider "new physics" to be discovered when there is a 5 sigma difference, the result has been replicated, and there is some plausible scientifically motivated theory to explain why there is a difference.

When there is a discrepancy that is more than 2 sigma and less than 5 sigma, physicists call that a "tension" between theory and experiment, which is considered a "weak tension" if it is just a little more than 2 sigma, and a "strong tension" if it is close to 5 sigma.

In real life, because margins of error are routinely underestimated, and because systematic errors (as opposed to sample-size-based errors) usually have "fat tails" that make big deviations more likely than pure sampling statistics would suggest, discrepancies of 3 sigma or less end up going away over time about half of the time.

The requirements of replication and of a scientifically motivated theory before "new physics" is considered discovered are there to guard against systematic errors, in either the experiment or the theoretical calculation, that the original people who found the discrepancy had no idea were present, like the faulty cable in the OPERA experiment that seemed to show neutrinos traveling faster than the speed of light.

The first scientists to reach 5 sigma are considered to have discovered "new physics," but only retroactively, once their results are replicated and confirmed. Experiments like the Large Hadron Collider and the Tevatron were set up with two independent collaborations of scientists sharing the same accelerator, in order to allow results to be replicated, even though it means that each independent group's results are less statistically significant than a single combined analysis would be.

vanhees71
mfb
Mentor
Ya but the other one is still 1.6 sigma off right?
1.3 sigma including a more recent hadronic light-by-light scattering calculation, but it doesn't matter.
That's at the level of getting 3 "heads" when flipping a coin 10 times. Sure, it's not the most likely result, but would you assume the coin is biased after seeing that?
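The coin analogy can be worked out exactly with the binomial distribution (a small sketch, standard library only):

```python
# Exact probability of getting 3 or fewer heads in 10 fair coin flips,
# via the binomial distribution.
from math import comb

def prob_at_most(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

p = prob_at_most(3, 10)
print(f"P(<= 3 heads in 10 flips) = {p:.4f}")  # 0.1719
```

A roughly one-in-six outcome is hardly evidence of a biased coin, which is the point being made about a 1.3 sigma discrepancy.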
Still new physics until it isn't.
You can never reach infinite precision with measurements or theory. There is always some uncertainty, and we don't expect the values to match exactly. Being compatible within the uncertainties is the best (or worst?) outcome, and 1.3–1.6 sigma is certainly within what we expect from statistical fluctuations.

vanhees71 and ohwilleke
collinsmark
Homework Helper
Gold Member
Here's Lawrence Krauss's take on the subject, complete with at least a little bit of mathematics*. I found it rather insightful.

Part 1:

Part 2:

*I assume that since you're reading this on PF, you're probably not one to run away at the first sign of a math equation.

vanhees71, Tom.G and ohwilleke
@ohwilleke in your post #25 you mention the masses of up and down quarks and how they are used to calculate the sigma of the muon experiment.

To give a more "real" example, one of the big differences between the prediction that says there is a 4.2 sigma distinction between experiment and prediction, and the one that says that there is only a 1.6 sigma distinction, is that the second prediction treats up and down quarks as having different masses, while the first one uses only the average mass of the up and down quarks. This slight tweak in the assumed masses of two Standard Model quarks makes a quite significant impact on the predicted discrepancy between theory and experiment, even though both the up quark and down quark masses are tiny (about 2.5% and 5% respectively, of the muon mass).
Pardon if this comes across as lazy, but don't we know the masses of those quarks with enough certainty that we shouldn't be using different numbers in different approaches?

mfb
Mentor
Up and down quarks never appear in isolation, which makes it difficult to define what their mass is, and you get different answers (and large uncertainties) with different methods. Many calculations get much simpler if you neglect the small masses of up and down quarks, or at least neglect their difference. But that's not what led to the deviating theory predictions here. They used completely different approaches.
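For a sense of the scale involved, here is a trivial back-of-the-envelope calculation. The light-quark masses below are assumed, commonly quoted values; as noted above, they are scheme-dependent and carry large uncertainties:

```python
# Rough scale of the light-quark masses relative to the muon mass.
# The quark masses below are assumed, commonly quoted values; light-quark
# masses are scheme-dependent and carry large uncertainties.
M_UP = 2.16      # MeV, assumed up-quark mass
M_DOWN = 4.67    # MeV, assumed down-quark mass
M_MUON = 105.66  # MeV, muon mass

for name, m in (("up", M_UP), ("down", M_DOWN)):
    print(f"{name}: {m} MeV = {100 * m / M_MUON:.1f}% of the muon mass")
```

With these assumed inputs the fractions come out around 2% and 4.4%, broadly in line with the "about 2.5% and 5%" figures quoted earlier in the thread.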

artis, vanhees71 and ohwilleke
DennisN
2020 Award
A rather more technical take on the experiment itself is this 12-minute, fast-moving video:

New Sixty Symbols video. Professors Ed Copeland and Tony Padilla discuss latest results in particle physics from Fermilab and the Large Hadron Collider.
Very interesting. Thanks for posting!

artis and collinsmark
ohwilleke
Gold Member
But that's not what lead to the deviating theory predictions here. They used completely different approaches.
The approaches aren't completely different. Almost all of the difference comes from HVP and there are about five main differences between the two approaches. Using a 1+1+1+1 Lattice approach rather than a 2+1+1 Lattice approach is one of the significant differences.

vanhees71
mfb
Mentor
I thought you were comparing it to the reference value that leads to a large discrepancy, not another lattice calculation. The sigma values suggested that.

vanhees71
collinsmark
Homework Helper
Gold Member
A few words of caution about potential, new physics discoveries. Some historical examples are given. (The video leads up to the new anomalies discussed in this thread.)

Tom.G, DennisN, ohwilleke and 1 other person
phinds
Gold Member
Nice, informative discussion. Thanks for posting.

collinsmark
If one wants to read quality scientific news with technical detail (assuming the reader has a BSc in physics), what kind of media can you recommend?

ohwilleke
ohwilleke
Gold Member
Blogs by credentialed physicists (Twitter feeds too).

ChrisVer
Gold Member
A few words of caution about potential, new physics discoveries. Some historical examples are given. (The video leads up to the new anomalies discussed in this thread.)

I really liked seeing the historical examples in her videos. I find it interesting that these are not talked about much, while most of the weight in public discussion goes to the "holy grail" of 5σ... But these examples are proof that all that glitters is not necessarily gold... So, it got me thinking:
What is the approach one would take after observing a 5 or even 6σ deviation from the SM expectation? Wouldn't it be called a discovery right away? I suppose these thresholds are set by the collaborations or the analysts beforehand (the 5σ standard of HEP would make most other sciences' null hypotheses practically indestructible), but the point is that their claims would directly affect the theory community worldwide (they would have an indirect verification in hand that there is BSM physics).

For example, FNAL said during their conferences that after unblinding, whatever their result turned out to be, that's what would be published. So, supposing they had observed a 5σ deviation in Δa_μ, would that be called an "observation of LFV" right away?
The latest example from particle physics was the Higgs discovery (OK, and the various other bound states that LHCb observes every now and then). For the Higgs, it looks like it was called a discovery right away, but that could be because it was a special case (it appeared in different decay channels and in 2 "independent" experiments at the same time).

ohwilleke
mfb
Mentor
A deviation in g-2 doesn't have to be LFV. It can be anything.
Keep in mind that systematic uncertainties and theory uncertainties are not Gaussian - they have long tails. A 5 sigma systematics/theory error is far more likely than a 5 sigma statistical fluctuation. I would have expected a careful phrasing of the result. It's not like the discovery of the Higgs boson, which was dominated by statistical uncertainties that are well understood, and of course seen by two experiments that both had ~5 sigma.
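The long-tail point can be illustrated with a toy Monte Carlo. This is only a sketch with arbitrary choices: a Student-t distribution with 3 degrees of freedom stands in for a fat-tailed systematic error, and the comparison is against an exact Gaussian tail:

```python
# Toy illustration of "fat tails": compare the chance of a > 5 sigma
# deviation for a Gaussian vs. a Student-t distribution with 3 degrees
# of freedom (an arbitrary stand-in for a fat-tailed systematic error).
import math
import random

random.seed(1)

def student_t3():
    """Sample Student-t with 3 d.o.f.: z / sqrt(chi2_3 / 3)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return z / math.sqrt(chi2 / 3)

n = 200_000
t_tail = sum(abs(student_t3()) > 5 for _ in range(n)) / n
gauss_tail = math.erfc(5 / math.sqrt(2))  # exact two-sided Gaussian tail
print(f"P(|x| > 5 sigma): Gaussian {gauss_tail:.1e}, t(3) ~ {t_tail:.1e}")
```

In this toy model the fat-tailed distribution throws 5 sigma outliers tens of thousands of times more often than the Gaussian, which is why a 5 sigma systematics- or theory-driven discrepancy is much less alarming than a 5 sigma statistical one.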

ohwilleke