Latest Tau Lepton Measurement Consistent With Koide's Rule

  • #1
ohwilleke
TL;DR Summary
Koide predicted the tau lepton mass as a function of the electron and muon masses in 1981 when all three masses were known less accurately. In 2020, the rule still holds true within 1 sigma as measurement accuracy increases.
The latest tau lepton mass measurement, from Belle II, is 1777.28 ± 0.75 (stat.) ± 0.33 (sys.) MeV/c^2. The combined error is ± 0.82 MeV/c^2 (which puts the central value 0.38 sigma above the Koide's rule prediction). This is consistent at the one sigma level with the current Particle Data Group world average measurement of the tau lepton mass.

The current Particle Data Group value for the tau lepton mass is 1776.86 ± 0.12 MeV/c^2 (which is 0.91 sigma below the Koide's rule prediction). This is a relative error of one part per 14,807. In 2014, immediately before the most recent update of the PDG value, it was 1776.82 ± 0.16 MeV/c^2.

If error is Gaussian (i.e. has a normal distribution) and systematic error estimates are "conservative", then the difference between the true value and the measured value should average a little less than 1 sigma. In real life, statistical error is almost always close to Gaussian, but systematic error usually has a distribution best fit by a Student's t-distribution, which has somewhat fatter tails than a Gaussian distribution.

Since the new Belle II measurement is higher than the PDG value, it will nudge the global PDG value towards the Koide's rule value, although not by much, since the significant margin of error means it is weighted only lightly in the world average.

Koide's rule was proposed in 1981 by Yoshio Koide, six years after the tau lepton was discovered, when its mass was known much less accurately. It is a hypothesis about the relationship between the masses of the charged leptons. It predicts that the sum of the three charged lepton masses, divided by the square of the sum of the square roots of the charged lepton masses, is equal to exactly 2/3rds. Since the electron and muon masses are known much more precisely than the tau lepton mass, it is possible to use the original Koide's rule to very precisely predict the tau lepton mass. This prediction using current electron and muon mass measurements is:

1776.96894 ± 0.00007 MeV/c^2.
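
For anyone who wants to check the arithmetic, here is a minimal sketch of that calculation. The electron and muon masses below are assumed to be roughly the current CODATA/PDG values; rewriting Koide's rule as a quadratic in the square root of the tau mass gives a closed-form solution.

[code=python]
import math

# assumed electron and muon masses in MeV/c^2 (roughly the current CODATA/PDG values)
m_e = 0.51099895
m_mu = 105.6583755

# Koide's rule: (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2 = 2/3.
# With x = sqrt(m_tau), a = sqrt(m_e), b = sqrt(m_mu), this becomes the quadratic
#   x**2 - 4*(a + b)*x + (a**2 + b**2 - 4*a*b) = 0, whose physical root is:
a, b = math.sqrt(m_e), math.sqrt(m_mu)
x = 2 * (a + b) + math.sqrt(3 * (a**2 + b**2 + 4 * a * b))
m_tau = x**2

print(f"Koide-predicted tau mass: {m_tau:.3f} MeV/c^2")                   # about 1776.969
print(f"Koide ratio check: {(m_e + m_mu + m_tau) / (a + b + x)**2:.6f}")  # 0.666667
[/code]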

In 1983, using the then best available measurements of the electron mass and muon mass, the original Koide's rule predicted a tau lepton mass of 1786.66 MeV/c^2. But by 1994 (and probably somewhat sooner than that), the prediction of the original Koide's rule had shifted to 1776.97 MeV/c^2. Thus, the prediction of the original Koide's rule has been essentially unchanged for more than twenty-six years.
 
  • #2
The Belle II measurement has such a large uncertainty that I don't expect PDG to ever include it. It will likely be superseded by a measurement with a larger dataset soon. The systematic uncertainty will likely stay, however - they use a different method, which provides an independent cross check, but it can't reach the precision of measurements at the threshold.

A naive weighted average gives 1776.87, i.e. the last digit goes up by 1.
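
For illustration, a minimal sketch of one such naive inverse-variance average, combining the current PDG world average with the Belle II number (this is my guess at what "naive weighted average" means here; correlations and PDG scale factors are ignored):

[code=python]
# naive inverse-variance weighted average of the PDG world average and Belle II (MeV)
pdg, pdg_err = 1776.86, 0.12        # current PDG world average
belle2, belle2_err = 1777.28, 0.82  # Belle II 2020, combined error

w1, w2 = 1 / pdg_err**2, 1 / belle2_err**2
mean = (w1 * pdg + w2 * belle2) / (w1 + w2)
err = (w1 + w2) ** -0.5

print(f"{mean:.2f} +/- {err:.2f} MeV")  # about 1776.87 +/- 0.12
[/code]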
 
  • #3
mfb said:
A naive weighted average gives 1776.87, i.e. the last digit goes up by 1.

It is worth observing, in the same vein, that the combined eight measurements made prior to 2014 pull the global average down by 5 in the last digit from the value measured in 2014 at BESIII.

mfb said:
The Belle II measurement has such a large uncertainty that I don't expect PDG to ever include it. It will likely be superseded by a measurement with a larger dataset soon. The systematic uncertainty will likely stay, however - they use a different method, which provides an independent cross check, but it can't reach the precision of measurements at the threshold.

Certainly, if Belle II were to produce a new value this year with a larger dataset and hence less statistical uncertainty, I agree that it wouldn't make sense to include this result. And, the systematic uncertainty at Belle II is high enough that it will never be the world's most accurate measurement in isolation. But it is also the first and only published tau lepton mass measurement in the last six years.

The Belle II measurement does have a fairly large uncertainty, but four of the nine measurements in the current PDG average for the tau lepton mass also have large uncertainties, and the current average includes no data point from Belle II, so adding it would make the average more robust in a difficult-to-quantify way, since the average would then draw on more independent, replicating experiments. Those older high-uncertainty results, like this one, make only a modest tweak to the overall value (and they are all at least 20 years old).

It is also worth observing that the 2014 measurement, which is the most precise and therefore makes the largest contribution to the overall world average, is also the result closest in value to the Koide's rule prediction, apart from the 1996 BES result, which has larger error bars (but still very small ones for 1996). BESIII took more data in April 2018 to update its 2014 measurement, according to a December 2018 preprint, but for some reason hasn't published a new result from that two year old data. The new BESIII result should have an uncertainty of less than ± 0.1 MeV.

The data for the July 2014 paper from BESIII was collected in December 2011, so it took 31 months to publish it. Thirty-one months from April 2018, when the data for its second round of tau lepton mass measurements was collected, would be November 2020, so we could get a new record high precision tau lepton mass measurement from BESIII as soon as sometime this fall. A new result from BESIII would replace the 2014 measurement and would dominate the world average even more strongly than its best-in-the-world 2014 measurement already does.

The results that contribute to the current tau lepton mass world average of the Particle Data Group are:

Value (MeV/c^2) | Events | Document ID | Experiment | Comment
1776.91 ± 0.12 +0.10−0.13 | 1171 | ABLIKIM 2014 | BESIII | 23.3 pb−1, Ecm(e+e−) = 3.54−3.60 GeV
1776.68 ± 0.12 ± 0.41 | 682k | AUBERT 2009 | BaBar | 423 fb−1, Ecm(e+e−) = 10.6 GeV
1776.81 +0.25−0.23 ± 0.15 | 81 | ANASHIN 2007 | KEDR | 6.7 pb−1, Ecm(e+e−) = 3.54−3.78 GeV
1776.61 ± 0.13 ± 0.35 | — | BELOUS 2007 | Belle | 414 fb−1, Ecm(e+e−) = 10.6 GeV
1775.1 ± 1.6 ± 1.0 | 13.3k | ABBIENDI 2000 | OPAL | 1990−1995 LEP runs
1778.2 ± 0.8 ± 1.2 | — | ANASTASSOV 1997 | CLEO | Ecm(e+e−) = 10.6 GeV
1776.96 +0.18−0.21 +0.25−0.17 | 65 | BAI 1996 | BES | Ecm(e+e−) = 3.54−3.57 GeV
1776.3 ± 2.4 ± 1.4 | 11k | ALBRECHT 1992 | ARGUS | Ecm(e+e−) = 9.4−10.6 GeV
1783 +3−4 | 692 | BACINO 1978 | DELCO | Ecm(e+e−) = 3.1−7.4 GeV
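
As a sanity check on the listing above, a naive inverse-variance combination of these nine entries (symmetrizing the asymmetric errors and ignoring the correlations and scale factors the PDG actually applies) lands very close to the published world average; a minimal sketch:

[code=python]
import math

# (value, (stat+, stat-), (syst+, syst-)) in MeV, transcribed from the listing above
measurements = [
    (1776.91, (0.12, 0.12), (0.10, 0.13)),  # BESIII 2014
    (1776.68, (0.12, 0.12), (0.41, 0.41)),  # BaBar 2009
    (1776.81, (0.25, 0.23), (0.15, 0.15)),  # KEDR 2007
    (1776.61, (0.13, 0.13), (0.35, 0.35)),  # Belle 2007
    (1775.1,  (1.6, 1.6),   (1.0, 1.0)),    # OPAL 2000
    (1778.2,  (0.8, 0.8),   (1.2, 1.2)),    # CLEO 1997
    (1776.96, (0.18, 0.21), (0.25, 0.17)),  # BES 1996
    (1776.3,  (2.4, 2.4),   (1.4, 1.4)),    # ARGUS 1992
    (1783.0,  (3.0, 4.0),   (0.0, 0.0)),    # DELCO 1978
]

num = den = 0.0
for value, stat, syst in measurements:
    sigma = math.hypot(sum(stat) / 2, sum(syst) / 2)  # symmetrize, add in quadrature
    weight = 1 / sigma**2
    num += weight * value
    den += weight

print(f"naive average: {num / den:.2f} +/- {den ** -0.5:.2f} MeV")  # roughly 1776.86 +/- 0.12
[/code]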

One resource that I wish were more easily available would be a compilation of the PDG values for the various data points in its reports over time. So, for example, you could watch the evolution of the world average measurements of the tau lepton mass, or the strong force coupling constant, over the past few decades.
 
  • #4
I have heard rumors that the BES III update isn't that far into the future.

Belle II now has about 10 times as much data recorded as was used for the tau mass measurement, but as it's not competitive I don't expect an update with that alone. Maybe with the fall dataset; by then the systematic uncertainty will be larger than the statistical one, and future work would need to focus on systematics instead of statistics.
 
  • #5
In 2006 I came up with a different relationship among the masses of the electron, muon, and tau, which I updated in 2009. Please see arXiv:physics/0602118 for details, but the bottom line is that my formula predicts a tau mass of 1776.81234 (33) MeV, which differs from the Koide prediction by about 0.157 MeV, but is also within 1 standard deviation of the measurements. As a side note, I was one of the co-authors of the BES measurement of the tau mass in 1995.
Alan Breakstone
 
  • #6
The fact that two different formulas both "explain" masses makes it unlikely either is telling us anything.
 
  • #7
The previous post misses the point of the mass formulas. Neither attempts to explain why the masses are what they are; they merely attempt to find a relationship that might be explained by a future theoretical model. An example of this in the past is the Balmer series of emission lines from hydrogen atoms. In 1885, Johann Balmer came up with an empirical relationship among some of the spectral lines of hydrogen. This was first explained theoretically by the Bohr model, published in 1913 (28 years later). The Bohr model was not correct; however, when Schroedinger came up with his famous equation in 1925 (40 years after Balmer's publication), which was subsequently solved for a Coulomb potential, the Balmer series was finally understood in a satisfactory way. Both Koide's formula and my formula are similar in this way to the Balmer formula. They may be harbingers of some new physics that we currently don't understand.
 
  • #8
It is you who misses the point.

If there is a single formula that explains the data, maybe there's something to it. If there are two, at least one is totally wrong. Quite probably there are N, with no fewer than N-1 being bogus. And if most are bogus, there is little value in having found one.
 
  • #9
Vanadium 50 said:
If there is a single formula that explains the data, maybe there's something to it. If there are two, at least one is totally wrong. Quite probably there are N, with no fewer than N-1 being bogus. And if most are bogus, there is little value in having found one.

Actually, even if there are multiple formulas, finding them still articulates what kind of relationships could give rise to this hierarchy. They may not be "totally correct." But, the only "totally wrong" formulas are the ones that don't come within error bars.

There are also additional criteria beyond the current match that one can use to handicap the likelihood that different relationships are closer to being correct.

One is the extent to which a formula is a prediction rather than a post-diction fit to recent experimental data, and another is the extent to which the formula has gotten closer or further from the experimentally measured value as the measurement has gotten more precise.

The original Koide's rule was proposed by Yoshio Koide in 1981 just six years after the tau lepton was first discovered experimentally. In 1983, using the then best available measurements of the electron mass and muon mass, the original Koide's rule predicted a tau lepton mass of 1786.66 MeV/c^2.

At that time this prediction wasn't a particularly close fit to the best fit experimentally measured value of the tau lepton mass. The state of the art tau lepton mass measurement in 1981 was 1783 +3−4 MeV/c^2.

To some extent, dumb luck pointed him in the right direction: the errors in the electron and muon mass measurements at the time were in the same direction as the error in the tau lepton mass measurement, which otherwise would have made the fit look much further off, and the rule might have been rejected as disproven at the outset.

But, as the tau lepton mass measurements have gotten more precise, the match has improved, although with detours along the way, like the 1997 CLEO tau lepton mass measurement of 1778.2 ±0.8 ±1.2 MeV/c^2.

Increased precision in the measurement of the electron and muon masses soon tweaked that prediction to something close to the current 1776.96894(7) MeV/c^2 predicted value, which is, to the same level of precision as current experimental measurements, 1776.97 MeV/c^2. By 1994 (and probably somewhat sooner than that), the prediction of the original Koide's rule had shifted to 1776.97 MeV/c^2. Thus, the prediction of the original Koide's rule has been essentially unchanged for more than twenty-six years. The new most precise measurement ever of the tau lepton mass is almost three times closer to the original Koide's rule prediction than the old world average was.

To recap, the 2020 tau lepton mass measurement, from Belle II, is 1777.28 ± 0.75 (stat.) ± 0.33 (sys.) MeV/c^2. The combined error is ± 0.82 MeV/c^2 (which puts it 0.38 sigma above the Koide's rule prediction). The current Particle Data Group value for the tau lepton mass is 1776.86 ± 0.12 MeV/c^2 (which is 0.91 sigma below the Koide's rule prediction). A roughly one part per 5,733 match to a 2020 experimental measurement from a 1981 formula isn't shabby.
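
Those tensions are straightforward to reproduce; a quick sketch, using only the prediction and measurement values quoted in this thread:

[code=python]
koide = 1776.96894  # MeV, Koide's-rule prediction from the electron and muon masses

belle2, belle2_err = 1777.28, 0.82  # Belle II 2020, combined error
pdg, pdg_err = 1776.86, 0.12        # current PDG world average

print(f"Belle II vs. Koide: {(belle2 - koide) / belle2_err:+.2f} sigma")  # about +0.38
print(f"PDG vs. Koide:      {(pdg - koide) / pdg_err:+.2f} sigma")        # about -0.91
print(f"relative offset: 1 part in {koide / abs(belle2 - koide):.0f}")    # roughly the 'one part per 5,733' figure, up to rounding
[/code]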

The fact that you can still get a very close fit to the current value with the original Koide's rule from 1981, and that extensions of it fit the mass hierarchy of all of the Standard Model fermions better than a lot of theoretically proposed explanations (even if those extensions are only close first order approximations), suggests that Koide's rule is "on the right track" and is providing decent intuition about the relationship, even if it is not "totally correct", at least in its extended versions.

There are lots of approximate formulas in physics that aren't exactly right that still have value by providing insight into the nature of the biggest part of the relationship between two quantities and by providing a first order approximation that is often "good enough."

As another example, even the proponents of MOND agree that the standard MOND formula is just a toy model that is definitely wrong. It isn't relativistic, for example, and it doesn't describe galactic clusters well. But it is still useful, even if you are convinced that LambdaCDM is the correct explanation of dark matter phenomena, in helping you to understand that dark matter phenomena are largely confined to areas where local gravitational fields are extremely weak, while they are negligible in areas where local gravitational fields are moderate to strong.

This intuition, as well as the intuition that there is a strong connection between baryonic matter distributions and inferred dark matter halo densities and distributions, has proven far more predictive than first principles LambdaCDM predictions.

Thus, MOND is useful, even if it is totally wrong in mechanism, for providing a compactly packaged intuition about how a phenomenon manifests over a large domain of applicability (roughly all systems galaxy sized or smaller). Theories that have made predictions, withstood the test of time, and remained consistent with the data shouldn't be dismissed lightly, even if they are not unique explanations or will not ultimately prove to be the correct ones.

I put Koide's rule and its extensions in the same boat. It could be wrong. But it is helpful in providing intuition about the kind of mathematical relationships that could explain the masses of the fundamental fermions that is reality based and predictive, and no formula that works can be all that different.

People are fond of saying that correlation is not causation, and that's true. But it is also true that correlations generally do have causes. They may be produced by hidden third factors you haven't considered, or may be due to some factor that isn't very profound, but they almost always have causes when the correlation is strong.
 
  • #10
"Koide gets the right answer; he must be onto something!" is undermined if other people with different ideas also get the "right answer". They're not all onto something.

Dragging MOND into this doesn't help. It just makes it look like you've never found an iconoclastic theory you didn't like.
 
  • #11
Actually, I am not sure that either of the two formulae can be compared to Balmer's. I do not know the history of the Balmer formula, but I keep hearing that it was purely empirical. The Koide formula was based on a composite model, and its two-mass version, for the up, down, and strange quarks, was based on a particular assignment for the CKM matrix. As for Breakstone's formula, it is empirical, but it builds on the perturbative formulas that had been postulated for the electron/muon mass ratio, based here and there on powers of the fine structure constant.
 
  • #12
By the way, I think it is still possible that the Koide formula comes from a composite model: consider the Koide tuples of the pi (or K?), D, and B mesons:

[tex]\frac{\left(0 + \sqrt{M_{\pi^0}} + \sqrt{M_{D^0}}\right)^2}{0 + M_{\pi^0} + M_{D^0}} = 1.5017[/tex]

[tex]\frac{\left(-\sqrt{M_{\pi^0}} + \sqrt{M_{D^0}} + \sqrt{M_{B^0}}\right)^2}{M_{\pi^0} + M_{D^0} + M_{B^0}} = 1.4923[/tex]
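
A quick numerical check of those two ratios (the neutral meson masses below are my assumed, approximate PDG values, not taken from the post itself):

[code=python]
import math

# assumed approximate neutral meson masses in MeV (roughly PDG values)
m_pi0, m_D0, m_B0 = 134.98, 1864.84, 5279.66

def koide_ratio(signed_masses):
    """(sum of signed sqrt(m))^2 / (sum of m); exactly 3/2 for a perfect Koide tuple."""
    num = sum(s * math.sqrt(m) for s, m in signed_masses) ** 2
    return num / sum(m for _, m in signed_masses)

print(koide_ratio([(+1, 0.0), (+1, m_pi0), (+1, m_D0)]))   # ~1.502 for the (0, pi0, D0) tuple
print(koide_ratio([(-1, m_pi0), (+1, m_D0), (+1, m_B0)]))  # ~1.492 for the (pi0, D0, B0) tuple
[/code]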
 
  • #13
Vanadium 50 said:
"Koide gets the right answer; he must be onto something!" is undermined if other people with different ideas also get the "right answer". They're not all onto something.

Dragging MOND into this doesn't help. It just makes it look like you've never found an iconoclastic theory you didn't like.

The standard for "they're onto something" is considerably lower when nobody has any theory receiving widespread acceptance at all to explain the relationship in the status quo.

Koide's rule may be a conjecture, but it is hardly iconoclastic. There is no other formula to explain any of the fundamental particle masses, other than the relative masses of the W and Z bosons, that holds true to a lower degree of uncertainty or that has endured so long. And it leaves no wiggle room for doubt or for moving the bar when each new round of experimental results comes in.

For example, it has a better track record than the sixteen other efforts to do something similar that I keep in a running list (including another by Koide himself about quarks).

Lambda CDM's myriad flaws are well known, and its status as a widely accepted paradigm is slipping on multiple fronts, with new proposals for dark matter and for modified gravity theories, which haven't stood the test of time or been rigorously compared against any data set, being put forward every week.

Another conjecture in the same boat, consistent with the data to within two sigma error bars, is the May 17, 2013 conjecture of G. Lopez Castro and J. Pestieau (LC & P) that the sum of the squares of the masses of the fundamental particles of the Standard Model is equal to the square of the Higgs vacuum expectation value, or alternately, that the sum of the Higgs Yukawas (or their functional equivalents for fundamental Standard Model particles whose couplings to the Higgs boson aren't called Yukawas), suitably normalized so that each particle contributes m^2/v^2, is equal to exactly 1. It suggests a plausible extension of electroweak unification theory that would say something about the source of the masses of the fundamental particles (which everyone in the field acknowledges is a huge open question that it would be nice to answer).

Now, someone stumbling along could easily propose a less elegant version of LC&P that just looks at the squared masses of the three massive fundamental bosons of the Standard Model and of the top and bottom quarks and the tau lepton. This is uglier, but because the first and second generation quarks are so light by comparison, you probably can't distinguish the two versions observationally at this point. The uncertainty in the top quark mass dwarfs all other uncertainties in the relationship. This doesn't mean it is mere numerology or not worth noticing. The existence of an incomplete version that can't be distinguished, however, doesn't make it "bogus" or "totally wrong". It is a first order approximation of a better theory. And, we can track all similar relationships as new data appear to see if they continue to fit the data, if the fit improves as precision improves, or if the fit grows worse over time.
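
As a rough numerical check of the LC&P relationship (the masses and vacuum expectation value below are my assumed, approximate PDG-style central values in GeV; uncertainties, which are dominated by the top quark mass, are ignored):

[code=python]
# assumed approximate masses of the massive fundamental SM particles, in GeV
masses = {
    "top": 172.76, "Higgs": 125.25, "Z": 91.19, "W": 80.38,
    "bottom": 4.18, "tau": 1.777, "charm": 1.27,
    # the remaining fermions (s, u, d, e, mu, neutrinos) add < 0.02 GeV^2 combined
}
v = 246.22  # Higgs vacuum expectation value, GeV

sum_sq = sum(m**2 for m in masses.values())
print(f"sum of squared masses: {sum_sq:.0f} GeV^2")   # about 60,330 with these inputs
print(f"v^2:                   {v**2:.0f} GeV^2")     # about 60,624
print(f"ratio:                 {sum_sq / v**2:.4f}")  # about 0.995
[/code]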

These are quantities that are all related to each other within electroweak theory through their Higgs boson interactions already, and it provides a framework in which "naturalness" and the "hierarchy problem" become much easier to make sense of. Why is the Higgs boson 125 GeV? Because, given the other measured fundamental particle masses and Fermi's constant, it has to be. Particles would have masses at the electroweak scale because a simple functional relationship compels that result.

The fact that the source of the true relationships hasn't been rigorously established and that more precise measurements could, in principle, disprove it in the future, doesn't mean that it is "bogus." Most theories eventually proven to be true with observational evidence start out that way.

This isn't a case of someone proposing an alternative to the Standard Model in an area where it provides an answer. And, while there are lots of direct tests of GR that have been done, none categorically rule it out in very weak fields as an alternative to dark matter (inaccurate claims by proponents of the Bullet Cluster as doing so notwithstanding).

The nature of the beast when there is no widely accepted solution is that there are going to be lots of proposals and all of them that are consistent with the data deserve attention unless there are obvious theoretical grounds to believe that they are unsound and coincidental (e.g. series of numbers derived from words in the Bible or supernatural explanations).

FWIW, there are lots of iconoclastic theories I don't like. Specifically, theories which aren't consistent with the observational evidence within their domain of applicability, and theories that don't make any predictions. Likewise, I'm much more skeptical of post-dictions that are recent, as opposed to those that have a good track record of fitting the data over a period of time after they were proposed.

Supersymmetry, supergravity, string theory, technicolor, two Higgs doublets theories, see saw models of neutrino mass, the fourth generation Standard Model extension, leptoquarks, extra dimensions, and Pati-Salam Unification, to name a few, all fail that test, even though all of them are considered very "respectable."

Newtonian gravity and Maxwell's equations are likewise wrong in the sense of not being "totally correct", but also not "totally wrong" and make very good approximations within their domain of applicability.

So are all manner of phenomenological relationships with a theoretical motive but not a theoretical proof in condensed matter physics and physical chemistry, where it is simply too complex to work out a relationship from first principles.

Another good analogy to Koide's rule is Kepler's laws of planetary motion. They were purely phenomenological relationships at the time, but 68 years later Newton proved that they could be derived from his simple law of gravity. Having them in place made the later generalization much easier.

Similarly, the classical ideal gas law was preceded by Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law, from which the more comprehensive law was pieced together, and these classical idealized first order approximations, in turn, were important in formulating the Van der Waals equation, which, while it isn't full-fledged QED, is still closer to physical reality for "non-ideal" gases in the real world than the ideal gas law.

It isn't hard to see LC&P and Koide's rule, and some extensions of it, coming together someday in a combined whole that explains all of the fundamental masses of the Standard Model and the weak force coupling constant with far fewer free parameters.

Various iterations of the nuclear binding force models started out as purely phenomenological relationships, and still aren't derived from first principles in QED+QCD, although they've gotten more theory driven over time as we've grown to understand more. None of them are "totally right", but that doesn't mean that they are "totally wrong." The fact that there are several iterations of such a formula doesn't mean that any of them are "bogus" or "totally wrong."

Keeping an open mind only makes sense in the case of a phenomenological relationship from a well-respected theoretical physicist in this part of fundamental physics, one that is consistent with the data and has a theoretical motivation, in an area where everybody is struggling to find any kind of relationship that fits at even the percent level, let alone the part-per-5,733 level; one that has held true for 40 years and has gotten closer to, rather than further from, the experimentally measured value as precision and accuracy have improved.

 

What is the latest tau lepton measurement consistent with Koide's rule?

The latest measurement of the tau lepton mass, from Belle II, is 1777.28 ± 0.82 MeV/c^2 (combined error), and it is consistent with Koide's rule, which states that the masses of the three charged leptons (electron, muon, and tau) are related by a simple mathematical formula. The measured value agrees with the Koide's rule prediction of about 1776.97 MeV/c^2 to well within one standard deviation.

What is Koide's rule?

Koide's rule is a mathematical relationship proposed by Japanese physicist Yoshio Koide in 1981. It states that the masses of the three charged leptons (electron, muon, and tau) satisfy (m_e + m_μ + m_τ) / (√m_e + √m_μ + √m_τ)² = 2/3, where m_e, m_μ, and m_τ are the masses of the three leptons; equivalently, (√m_e + √m_μ + √m_τ)² = (3/2)(m_e + m_μ + m_τ).

What is the significance of the latest tau lepton measurement consistent with Koide's rule?

The latest measurement of the tau lepton consistent with Koide's rule provides further evidence for the validity of this mathematical relationship. It suggests that there may be a deeper underlying symmetry in the masses of the charged leptons, which could potentially lead to a better understanding of the fundamental forces in nature.

How was the latest tau lepton measurement consistent with Koide's rule obtained?

The latest measurement was obtained using data from the Belle II experiment at the KEK laboratory in Japan. The researchers analyzed collisions between electrons and positrons and measured the mass of the tau lepton from the kinematics of its decay products. This measurement was then compared to the value predicted by plugging the electron and muon masses into Koide's rule, and the result was found to be consistent with it.

What are the implications of the latest tau lepton measurement consistent with Koide's rule for future research?

The latest measurement provides further support for the idea that there may be a deeper underlying symmetry in the masses of the charged leptons. This could guide future research in the field of particle physics and potentially lead to new discoveries about the fundamental nature of the universe. It also highlights the importance of precise and accurate measurements in understanding the laws of nature.
