FactChecker said:
I don't follow you here but I know nothing about particle physics. How do you go from ##5\sigma## to 20%? The
##5\sigma## probability for a normal distribution would be 0.00006% if it was two-tailed. This must be 20% of something else.
The existence of the decay is confirmed at the five sigma level. The frequency of the decay was determined with a precision of ± 25%.
To give a stylized example (not the actual facts, just to illustrate the concept), suppose that 40 events were observed, and that the probability that those 40 events were just a statistical fluke in the background from other decays, rather than the decay that was discovered, was 0.00006%. So, it was a 5 sigma observation that this decay was really happening.
But suppose that due to statistical and systematic uncertainties, an observation of 40 events would be consistent with a long term expected number of events per 100 billion collisions anywhere from 30 to 50. So, the frequency of this decay could only be pinned down to ± 25%, which isn't all that precise (although, again, the expected decay frequency is a certain number of events per 100 billion collisions, so getting in the right ballpark is still a big deal).
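To show where those two numbers come from in this stylized example, here is a minimal sketch; the event counts are the made-up ones above, and scipy is only used to evaluate the normal tail probability:

```python
from scipy.stats import norm

# Tail probability of a 5 sigma fluctuation for a normal distribution.
p_one_tailed = norm.sf(5)       # ~2.9e-7, the one-tailed convention used in particle physics
p_two_tailed = 2 * norm.sf(5)   # ~5.7e-7, i.e. roughly 0.00006%
print(f"two-tailed 5 sigma probability: {100 * p_two_tailed:.5f}%")

# The stylized rate measurement: 40 events observed, but statistical and
# systematic uncertainties make anything from 30 to 50 events consistent.
observed, low, high = 40, 30, 50
relative_uncertainty = (high - low) / 2 / observed
print(f"relative uncertainty: +/- {100 * relative_uncertainty:.0f}%")  # +/- 25%
```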
What Vanadium50 is saying is that even if the Standard Model prediction is 40 events, there might be a tweak to the Standard Model in which the expected number of decays of this kind is 50 rather than 40. For example, it could be a variation in which some rule makes semi-leptonic decays like the one discovered a little more common than in the vanilla Standard Model, while making fully hadronic decays a little less common (perhaps some sort of TeV scale supersymmetry theory). And, this experiment wouldn't be precise enough to distinguish between the null hypothesis of the Standard Model and the alternative hypothesis in which more decays are expected than in the Standard Model.
The more precise your experimental measurement is, the more strongly your experimental results can rule out subtle alternatives to the Standard Model, based on how common a decay that has definitely been discovered turns out to be.
At that point there is a cost-benefit analysis. How much do you want to spend to get a more precise measurement in order to rule out subtle alternatives to the Standard Model?
Maybe you can do four times as many collisions at a modest additional cost, with no upgrades to your collider, to reduce the uncertainty in the long term frequency of this decay from 30-50 events per 100 billion collisions down to 35-45. That would favor the Standard Model over the subtle alternative that predicts 50 events, but not strongly enough to rule out that alternative in a definitive way. But maybe it would take 100 times as many collisions, and expensive upgrades to the detectors at your collider, to pin the frequency down to 39-41 events per 100 billion collisions, which would strongly rule out the subtle modification of the Standard Model that predicts 50 decay events per 100 billion collisions.
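To make the scaling behind these stylized numbers explicit: for a pure counting measurement, the statistical uncertainty on roughly 40 expected events shrinks like one over the square root of the amount of data collected, so (ignoring systematics, which don't shrink this way) the windows above follow directly:

```python
import math

expected = 40          # stylized Standard Model expectation per 100 billion collisions
baseline_window = 10   # the +/- 10 events (+/- 25%) of the stylized current result

for extra_data in (1, 4, 100):  # 1x = current data set, 4x more data, 100x more data
    # Purely statistical scaling: the window shrinks like 1/sqrt(amount of data).
    window = baseline_window / math.sqrt(extra_data)
    print(f"{extra_data:>3}x data: {expected - window:.0f}-{expected + window:.0f} events"
          f" (+/- {100 * window / expected:.1f}%)")
# 1x: 30-50 (+/- 25.0%), 4x: 35-45 (+/- 12.5%), 100x: 39-41 (+/- 2.5%)
```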
The main point Vanadium50 is making is that while I tend to see this result from the current experiment as a vindication of the Standard Model (because I am thinking about this particular experimental result in the context of the hundreds of experimental results that have confirmed the Standard Model in many different ways, and just because I'm more of an optimist in this particular matter), he's dropping the footnote (legitimately and correctly) that really, we can never 100% vindicate the Standard Model or any other high energy physics theory with experiments. Instead, we can only rule out alternatives by doing experiments that are sufficiently precise to distinguish between the Standard Model and subtle alternatives to it.
Figuring out where to draw that line is hard. It is harder still because there aren't just two alternatives. There are infinitely many theoretically possible tweaks to the Standard Model that could be imagined, some of which are only slightly different in expected outcomes from the Standard Model (e.g. a PeV scale supersymmetry theory).
Vanadium50 then makes a rough guesstimate of where we ought to draw the line between greater precision and greater cost, based upon his experience with what it takes to improve the precision of an experimental HEP result and his evaluation of the scientific value of greater precision in this particular measurement. He thinks that the cost to improve the precision with which we can measure the frequency of this decay from ± 25% to ± 10% would probably be small and worth it, but the cost to get that precision down to ± 1% probably isn't worth it.
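To put rough numbers on that trade-off (a purely statistical, back-of-the-envelope estimate that ignores systematics): for a counting measurement the relative uncertainty shrinks like ##1/\sqrt{N}##, so going from ##\pm 25\%## to ##\pm 10\%## takes roughly ##(25/10)^2 \approx 6## times as much data, while getting to ##\pm 1\%## takes on the order of ##(25/1)^2 = 625## times as much, which is part of why that last step is so much more expensive.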
Part of his analysis in drawing this line is that this particular rare kaon decay doesn't have any great intrinsic importance in and of itself.
We are looking for it and trying to pin it down basically as part of a long term, ongoing high energy physics effort to confirm (or identify flaws in) the Standard Model of Particle Physics generally, and not because there is something special or important about this particular decay (except that, at the moment, it happens to be right on the edge of what we are and aren't able to do experimentally).
It isn't, for example, comparable to the measurement of muon g-2, which is a strong global test of all parts of the Standard Model of Particle Physics at once (at least at relatively low energies) and is particularly susceptible to ultra-precise measurement.
So, maybe our limited money for high energy physics experiments would be better spent on something else with more potential to show us something new, rather than on this measurement, which is already decent enough and isn't particularly better than other experiments at teasing out plausible flaws in the Standard Model.
One of the links in #1 in this thread explains the number of events actually expected in the Standard Model (80 at this point) and what the people doing the experiment see its purpose as being:
In two years of data taking the experiment is expected to detect about 80 decay candidates if the Standard Model prediction for the rate of charged kaon decays is correct. This data will enable the NA62 team to determine the value of a quantity called |Vtd|, which defines the likelihood that top quarks decay to down quarks.
Understanding with precision the relations between quarks is one constructive way to check the consistency of the Standard Model.
So, in addition to the other points discussed, this experiment is, in part, a measurement of one of the experimentally determined physical constants of the Standard Model of Particle Physics, |Vtd|, which does tilt the balance a little in favor of making more of an effort to measure the frequency of this decay more precisely.
|Vtd|, together with the eight other elements of what is called the CKM matrix, is used to pin down the four degrees of freedom that fully describe all nine elements of this 3 x 3 matrix, which encode the probability that, when a quark emits a W boson, it turns into a particular different kind of quark via the weak force.
The current global average of the experimentally measured values of |Vtd| is 0.0086 ± 0.0002 (a relative uncertainty of between two and three percent). This latest rare kaon decay measurement, because it isn't very precise, probably won't tweak that value very much yet; the other measurements that go into the average are, at this point, much more precise.
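For concreteness, here is a minimal sketch of how those four degrees of freedom (three mixing angles and one CP-violating phase, in the standard parameterization) generate all nine CKM elements, including |Vtd|. The numerical inputs are rough, illustrative values in the ballpark of current fits, not authoritative ones:

```python
import numpy as np

# Four degrees of freedom: three mixing angles (given here as their sines) and
# one CP-violating phase. Rough, illustrative values only.
s12, s23, s13 = 0.2250, 0.0420, 0.0037
delta = 1.2  # CP-violating phase, in radians
c12, c23, c13 = (np.sqrt(1 - s**2) for s in (s12, s23, s13))
e = np.exp(1j * delta)

# Standard parameterization of the 3 x 3 CKM matrix.
V = np.array([
    [c12 * c13,                         s12 * c13,                         s13 * np.conj(e)],
    [-s12 * c23 - c12 * s23 * s13 * e,   c12 * c23 - s12 * s23 * s13 * e,   s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * e,   -c12 * s23 - s12 * c23 * s13 * e,   c23 * c13],
])

print(np.allclose(V @ V.conj().T, np.eye(3)))  # unitarity check: True
print(f"|Vtd| = {abs(V[2, 0]):.4f}")           # ~0.0088 with these inputs, near the measured 0.0086
```

The decay rate constrains the magnitude of the bottom-left element, |Vtd|, and unitarity then ties that element to the rest of the matrix.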