What is new with Koide sum rules?

  • Thread starter: arivero
  • Tags: rules, sum
  • #251
Has someone reviewed this one?

https://arxiv.org/abs/2108.05787 Majorana Neutrinos, Exceptional Jordan Algebra, and Mass Ratios for Charged Fermions by Vivan Bhatt, Rajrupa Mondal, Vatsalya Vaibhav, Tejinder P. Singh

It is obscure and not an easy read, but it uses, or finds, the polynomial-equation form of the Koide formula. According to a blog entry, they will at some point ship a v2 with enhanced readability. Meanwhile T.P. Singh seems to update versions of a similar paper faster here, and has a previous blog post here about it.
 
  • #252
They don't aim to produce the Koide formula specifically. They just try to match mass ratios using combinations of eigenvalues of various matrices, and then use the Koide formula as an extra check.

The formulas for the electron, muon, tau ratios are equations 56, 57 (an alternative that doesn't work as well is in equations 62, 63). The numbers they combine to create these formulas are found in figure 1, page 21. They have no explanation for the particular combinations they use (page 23: "a deeper understanding... remains to be found"; page 25: "further work is in progress").
 
  • #254
One of the issues that has been raised about Koide's rule is that masses run with energy scale, unless you interpret it as applying only to pole masses.

It has also been noted that Koide's rule (and extensions of it) are really just functions of the mass ratios of particles and not their absolute masses.

With that in mind, some observations about the running of masses and mass ratios at high energies in the Standard Model in a recent preprint bear mentioning:
The CKM elements run due to the fact that the Yukawa couplings run. Furthermore, the running of the CKM matrix is related to the fact that the running of the Yukawa couplings is not universal. If all the Yukawa couplings ran in the same way, the matrices that diagonalize them would not run. Thus, it is the nonuniversality of the Yukawa coupling running that results in CKM running.
Since only the Yukawa coupling of the top quark is large, that is, O(1), to a good approximation we can neglect all the other Yukawa couplings. There are three consequences of this approximation:
1. The CKM matrix elements do not run below m(t).
2. The quark mass ratios are constant except for those that involve m(t).
3. The only Wolfenstein parameter that runs is A.
The first two results above are easy to understand, while the third one requires some explanation. A is the parameter that appears in the mixing of the third generation with the first two generations, and thus is sensitive to the running of the top Yukawa coupling. λ mainly encodes 1–2 mixing — that is, between the first and second generations — and is therefore insensitive to the top quark. The last two parameters, η and ρ, separate the 1–3 and 2–3 mixing. Thus they are effectively just a 1–2 mixing on top of the 2–3 mixing that is generated by A. We see that, to a good approximation, it is only A that connects the third generation to the first and second, and thus it is the only one that runs.
The preprint is Yuval Grossman, Ameen Ismail, Joshua T. Ruderman, Tien-Hsueh Tsai, "CKM substructure from the weak to the Planck scale" arXiv:2201.10561 (January 25, 2022).

The preprint also identifies 19 notable relationships between the elements of the CKM matrix at particular energy scales with one in particular that is singled out.

[Screenshot from the preprint: table of notable relationships among CKM matrix elements]


At low energies, A² (the factor that scales the transition probabilities associated with the CKM matrix entries in which A appears) is consistent with being exactly 2/3.

The parameter "A" grows by 13% from the weak energy scale to the Planck energy scale, which means that A² is about 0.846 at the Planck energy scale (roughly 11/13).
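As a quick sanity check on that arithmetic (my own, assuming A² = 2/3 exactly at the weak scale):

```python
import math

# If A^2 = 2/3 at the weak scale and A itself grows by 13% up to the
# Planck scale, then A^2 grows by a factor of 1.13^2.
A_weak = math.sqrt(2 / 3)
A_planck = 1.13 * A_weak
print(A_planck ** 2)  # ~0.85, consistent with ~0.846 once the 13% figure is rounded
```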

FWIW, I'm not convinced that it is appropriate to simply ignore the running of the other Wolfenstein parameters: if A increases, then one or more of the other parameters needs to compensate downward, at least a bit, to preserve the unitarity of the CKM matrix, which is one of its theoretically important attributes.

For convenient reference, the Wolfenstein parameterization is as follows:

$$ V_{CKM} \approx \begin{pmatrix} 1-\frac{\lambda^2}{2} & \lambda & A\lambda^3(\rho - i\eta) \\ -\lambda & 1-\frac{\lambda^2}{2} & A\lambda^2 \\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \end{pmatrix} + O(\lambda^4) $$
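The unitarity point above can be checked numerically. Here is a minimal sketch; the parameter values are illustrative, roughly PDG-like, and my own choice:

```python
import numpy as np

# Illustrative Wolfenstein parameter values (assumed, roughly PDG-like)
lam, A, rho, eta = 0.225, 0.82, 0.14, 0.35

# CKM matrix in the Wolfenstein form, valid only to O(lambda^4)
V = np.array([
    [1 - lam**2 / 2,                    lam,            A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2, A * lam**2                   ],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,    1                            ],
])

# The truncation violates unitarity at the O(lambda^4) ~ 1e-3 level, which is
# the level at which the four parameters are coupled to one another.
print(np.max(np.abs(V @ V.conj().T - np.eye(3))))
```

The per-mille deviation is an artifact of the truncation, but it illustrates why the parameters cannot run fully independently without spoiling unitarity.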
 
  • #255
If we take Koide's observation that his equation is exact only in the low energy limit, it calls into question the usual scheme of trying to understand the Standard Model as the result of symmetry breaking from something that is simplest in the high temperature limit.

Contrary to that symmetry breaking assumption, the history of 100 years of work on the Standard Model is that as energies increase, our model becomes more complicated, not simpler. Maybe Alexander Unzicker is right and we're actually abusing symmetry instead of using it. As you generalize from symmetries to broken symmetries you increase the number of parameters in what is essentially a curve fitting exercise. People complain about the 10^500 models in string theory, but the number of possible symmetries is also huge, given our ability to choose among an infinite number of symmetries, each with an infinite number of representations.

My paper, which shows that mixed density matrices have more general symmetries, seems to have finally reached "under review" status at Foundations of Physics. It had been at "reviewers assigned" (apart from a few days) since I sent it in last May, but it went to "under review" on January 17 and is still there. I suppose they've got reviews back and are arguing about it. That paper's solution to the symmetry problem is to use mixed density matrices, which can cover situations where the symmetry depends on temperature, which is just what is needed for the Standard Model. But density matrices are incompatible with a quantum vacuum; instead of creation and annihilation operators you'd have to use "interaction operators" where, for example, an up quark is created, a down quark annihilated, and simultaneously a W- is created. But you couldn't split these up into the individual operators, for the same reason you don't split density matrices into two state vectors (from the density matrix point of view).
 
  • #256
Indeed SO(32) is huge if interpreted in the usual way, but I was surprised that the idea of the sBootstrap implies SO(15)xSO(15) very naturally.

As for the low energy limit... I wonder if it relates to QCD mass gap. We have the numerical coincidence of 313 MeV

(EDIT: https://arxiv.org/abs/2201.09747 calculates 312(27) MeV, but it compares with the lattice estimate 353.6(1.1) and with the longitudinal Schwinger mechanism, which gives 320(35). And then there is the issue of renormalization scheme.)
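If the 313 MeV here is the Koide mass scale, i.e. the square of the mean square-root charged lepton mass (my reading of the coincidence), a quick check:

```python
import math

# The Koide mass scale ((sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) / 3)^2,
# with charged lepton masses in MeV; it lands near the ~313 MeV
# constituent quark mass figure.
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86
scale = (sum(map(math.sqrt, (m_e, m_mu, m_tau))) / 3) ** 2
print(scale)  # ~313.8 MeV
```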
 
  • #257
CarlB said:
density matrices are incompatible with a quantum vacuum; instead of creation and annihilation operators you'd have to use "interaction operators" where, for example, an up-quark is created, a down quark annihilated and simultaneously a W- is created. But you couldn't split these up into the individual operators for the same reason you don't split density matrices into two state vectors (from the density matrix point of view).
Is this formalism of nonfactorizable thermal interaction terms already described somewhere? Or do we have to wait for the paper?
 
  • #258
Zenczykowski has written a follow-up to the MOND/Koide paper mentioned in #246, "Modified Newtonian dynamics and excited hadrons". This one does not mention Koide relations at all, so it's getting a little off-topic, but so we can understand his paradigm, I'll summarize it.

In hadronic physics, there is a "missing resonance" problem. The ground states of multiquark combinations are there as predicted by QCD, but there are fewer excited states than e.g. a three-quark model of baryons would predict. The conventional explanation of this seems to be diquarks: quarks correlate in pairs, so in effect there are only two degrees of freedom (quark and diquark) rather than three, and therefore there are fewer possible states. (As another explanation, I will also mention Tamar Friedmann's papers, "No radial excitations in low energy QCD", I and II, which propose that radial excitations of hadrons don't exist, and that they shrink rather than expand when you add energy.)

Zenczykowski's idea is that space is partly emergent on hadronic scales, e.g. that there are only two spatial dimensions there, and that this is the reason for the fewer degrees of freedom. This is reminiscent of the "infinite momentum frame", various pancake models of a relativistically flattened proton, and even the idea of a space-time uncertainty relation (in addition to the usual position/momentum uncertainty) that often shows up in quantum gravity.

The weird thing he does in the current paper, is to draw a line on a log-log graph of mass vs radius, indicating where the Newtonian regime ends in MOND, and then he extrapolates all the way down to subatomic scales, and argues that hadrons lie on this line! So that's how the "square root of mass" would enter into both the MOND acceleration law, and the mass formulae of elementary particles.
 
  • #259
Mitchell: what I've been concentrating on is QFT in 0 dimensions, that is, things that happen at a single point in space or, equivalently, things that happen without any spatial dependence. Without spatial dependence you cannot define momentum, and without time dependence you cannot define energy, but I see those as complications that can be included later.

By Fermi statistics, only one or zero of any fermion can exist there, subject to spin. So you could have a spin-up electron and spin-down electron and a positron of any spin and that would be 3 particles -- the rest of the leptons and quarks would be absent. In addition to mixing (by superposition) spin-up with spin-down you can also mix colors and generations. But you can't mix particles with different electric charge.

This fits with the density matrix calculations in my papers; superpositions between different electric charges are not just forbidden by a "superselection sector" principle but they are in addition not even elements of the algebra. That is, the particles are the result of superpositions over symmetry. There are just enough degrees of freedom for the observed particles and their superselection sectors; there aren't any degrees of freedom left over to do something weird with it like a superposition of a neutrino and quark.

If you model the particles with state vectors, under this assumption you've got a single state vector with sectors that don't mix. That implies that the raising and lowering operators in a single superselection sector are just square matrices with (at least) an off-diagonal 1 (or matrices equivalent to this under transformation, so you can have a matrix that converts spin+x to spin-x). I think that handles the "interaction operators" that stay within a superselection sector, such as a photon or gluon, but it would be outside of the algebra when considering something that changes the superselection sector, such as the weak force. An example of a square matrix that defines an interaction is the ##\gamma^\mu## that is used for a photon in QED; but that stays inside a superselection sector so it's easy. Also it comes with a coupling constant whose value isn't obvious how to calculate.

To get an interaction that changes superselection sector (like the weak force) implies a mathematical object that is outside of the symmetry algebra. For what I'm working on, the symmetry algebra is the octahedral group (with 48 elements). That is a point symmetry and I'm thinking it implies that space is on a cubic lattice. And the group has a mysterious tripling to give 144 elements so that the generation structure appears. Since the weak force changes superselection sector it has to be outside the algebra and cannot obey the symmetry, and so the weak force mixes generations.

Anyway, I haven't figured it out. Every now and then I get an idea and a week of my life disappears in attempts to make calculations but that's slightly better than not having any idea what to calculate.
 
  • #260
Mitchell, that paper Modified Newtonian Dynamics and Excited Hadrons was quite a read; a paper I will read again as I'm sure it has insights I've overlooked. Zenczykowski has quite a lot of fascinating papers.

I like the concept of "linearization". The way I interpret this, Nature is naturally squared as, for example, energy is the square of amplitude. But Man prefers things that can be conveniently computed by linear methods so he linearizes things and this is essentially taking the square root. Thus Nature's density matrices are converted into state vectors.

What I don't quite understand is where x^2 + p^2 comes from. I'm guessing this means the particle is being represented, one way or another, as a harmonic oscillator. Any ideas?

The Zenczykowski paper may not mention the Koide formula but dang it sure seems to come near to it. The idea of there being two sets of Pauli spin matrices (for momentum and position) seems to imply 2x3 = 6 degrees of freedom that are being rearranged so that generations come in pairs of triplets. As in charged leptons + neutrinos, or up-quarks and down-quarks.

Some years ago I was interested in Koide triplets among hadrons. For this I was looking at states with the same quantum numbers and I found quite a number of pairs of triplets. I wrote it up here and never published it. The fits begin on page 27:
http://www.brannenworks.com/koidehadrons.pdf

You can ignore the derivation, but it involves the discrete Fourier transform. My most recent paper uses the non-commutative generalization of the discrete Fourier transform to classify the fermions, so these are related ideas. What's missing from these papers is an explanation of how a discrete lattice gives apparent Lorentz symmetry, which is nicely explained in Bialynicki-Birula's paper "Dirac and Weyl Equations on a Lattice as Quantum Cellular Automata": https://arxiv.org/abs/hep-th/9304070
 
  • #261
"Zenczykowski's idea is that space is partly emergent on hadronic scales, e.g. that there are only two spatial dimensions there, and that this is the reason for the fewer degrees of freedom."

This is definitely a thing. Alexandre Deur, when working on a MOND-like theory that is heuristically motivated by analogizing a graviton-based quantum gravity theory to QCD (in the gravity as QCD squared paradigm) (even though one can get to the same results with a classical GR analysis in a far less intuitive way), talks about dimensional reduction (sometimes from 3D to 2D, and sometimes to 1D flux tubes) in certain quark-gluon systems as a central conclusion of mainstream QCD.

Put another way, emergent dimensional reduction, when the sources of a force carried by a gauge boson have a particular geometry, is a generic property shared by non-Abelian gauge theories in which there is a carrier boson that interacts with other carrier bosons of the same force, although the strength of the interaction, and the mass, if any, of the carrier boson, will determine the scale at which this dimensional reduction arises. This emergent property arises from the self-interactions of the gauge bosons that carry the force in question. The scale is the scale at which the strength of the self-interaction and the strength of the first order term in the force (basically a Coulomb force term) are close in magnitude.

Generically, this self-interaction does not arise (due to symmetry based cancellations), and the dimensional reduction does not occur, if the sources of the force are spherically symmetric in geometry. To the extent that the geometry of the sources of the force approximates a thin disk, there is an effective dimensional reduction from 3D to 2D at the relevant scale. To the extent that the geometry of the sources of the force is well approximated as two point sources isolated from other sources of that force, you get an effective dimensional reduction from 3D to 1D, with a one dimensional flux tube for which the effective strength of the force between them is basically not dependent upon distance (in the massless carrier boson case).
 
  • #262
mitchell porter said:
Piotr Zenczykowski has appeared in this thread before (#74, #93). Today he points out that the modified gravitational law of "MOND" can be expressed in terms of the square root of mass, something which also turns up in Koide's formula.
yes, very interesting indeed. Square root of mass plays a decisive role in understanding mass ratios and the Koide formula https://arxiv.org/abs/2209.03205v1 and in fact this is the very reason square root of mass appears in MOND.
 
  • #263
Tejinder Singh said:
yes, very interesting indeed. Square root of mass plays a decisive role in understanding mass ratios and the Koide formula https://arxiv.org/abs/2209.03205v1 and in fact this is the very reason square root of mass appears in MOND.
But, of course, the square of rest mass also matters.

The sum of the square of the Standard Model fundamental particle rest masses is consistent with the square of the Higgs vev at the two sigma level, which probably isn't a coincidence and is instead probably a missing piece of electroweak theory (and also makes the Higgs rest mass fall into place very naturally).

In other words, the sum of the respective Yukawa or Yukawa-equivalent parameters in the SM, which quantify the Higgs mechanism's proportionate coupling to each kind of fundamental particle that gives rise to its rest mass in the SM, is equal to exactly 1.
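A rough back-of-the-envelope version of this claim, with approximate PDG-like central values; the numbers and the accounting (each distinct boson mass counted once) are my own assumptions, not taken from any paper:

```python
# Sum of squared fundamental particle masses vs the squared Higgs vev.
# Masses in GeV; approximate central values, each distinct mass counted once
# (an assumption on my part about the intended accounting).
masses = [172.69, 125.25, 91.19, 80.38,        # t, H, Z, W
          4.18, 1.78, 1.27, 0.106, 0.093,      # b, tau, c, mu, s
          0.0047, 0.0022, 0.000511]            # d, u, e
v = 246.22                                     # Higgs vev in GeV
ratio = sum(m**2 for m in masses) / v**2
print(ratio)  # a bit below 1, within the top/Higgs mass uncertainties
```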

If electroweak unification and the Higgs mechanism were invented today, I'm sure that the people devising it would have included this rule in the overall unification somehow.

If this is a true rule of physics and not just a coincidental relationship (and it certainly feels like a true rule of physics in its form), this is also excellent evidence that the three generations of SM fermions and the four massive SM fundamental bosons (the Higgs, W+, W-, and Z) are a complete set of fundamental particles with rest mass in the universe (although it would accommodate, for example, a massless graviton or a new massless carrier boson of some unknown fifth force), especially when combined with the completeness of the set of SM fundamental particles that follows from observed W boson, Z boson and Higgs boson decays. The W and Z boson data are strongly consistent with the SM particle set being complete up to 45 GeV, and the Higgs boson decays would be vastly different if there were a missing Higgs field rest mass sourced particle with a mass of 45 GeV to 62 GeV. The sum-of-Yukawas-equal-to-one rule and current experimental uncertainties in SM fundamental particle masses leave room for missing Higgs field rest mass sourced SM particles with masses of no more than about 3 GeV. This low mass range is firmly ruled out by W and Z boson decays.

These observations are part of why I side strongly with Sabine Hossenfelder in having a Bayesian prior that there is a very great likelihood that there are no new fundamental particles except the graviton, and perhaps something like a fundamental string that could give rise to other particles, of which the SM set plus the graviton is the complete set.

Skeptics, of course, can note that the contributions of the top quark on the fundamental SM fermion side, and the Higgs boson and weak force gauge bosons on the fundamental SM boson side are dominant so that the contributions of the three light quarks, muons, electrons, and neutrinos, as well as the massless photons and gluons, are so negligible as to be completely lost in the uncertainties of the top quark and heavy boson masses, and thus just provide speculative theoretical window dressing until our fundamental particle mass measurements are vastly more precise.

But the big picture view, as a method to the madness of the Higgs Yukawa values, does reduce the number of SM degrees of freedom by one if true, and is suggestive of a deeper understanding of electroweak unification and the Higgs mechanism that is deeply tied to the same quantities upon which Koide's rule and its extensions act.

The Higgs vev, in turn, is commonly expressed as a function of the weak force coupling constant and the W boson mass, suggesting a central weak force connection to the mass scale of the fundamental particles, although not necessarily explaining their relative masses (although electroweak unification explains the relative masses of the W and Z bosons to each other).

And, of course, it is notable that the only fundamental SM particles without rest mass (i.e. photons and gluons) are those that don't have a weak force charge, again pointing to the deep connections between the weak force and fundamental particle masses in the SM.

These points are also a hint that the source of neutrino mass may be more like the source of the mass of the other particles than we give it credit for being.

The lack of rest mass of the gluons also presents one heuristic solution to the so-called "strong CP problem." The strong force, the EM force, and gravity don't exhibit CP violation because gluons, photons and hypothetical gravitons must all be massless, and massless carrier bosons of a force don't experience time in their own reference frame. And, since CP violation is equivalent to T (i.e. time) symmetry violation, forces transmitted by massless carrier bosons shouldn't and don't have CP violation. In contrast, the weak force, which has massive carrier bosons (the W+ and W-), is the only force in which there is CP violation and hence T symmetry violation, since massive carrier bosons can experience time. (Incidentally, this also suggests that if there were a self-interacting dark matter particle with a massive carrier boson transmitting a Yukawa DM self-interaction force, it would probably show CP violation, not that I think SIDM theories are correct.)

Alexandre Deur's work demonstrates one approach, from first principles in GR, of how the square root of mass can work its way into the phenomenological toy model of MOND. This tends to suggest that there is no really deep connection between MOND and Koide's rule, even if the connection isn't exactly a coincidence. After all, MOND is not acting just on fundamental particle masses arising via the Higgs mechanism, as Koide's rule does. It also acts on all kinds of mass-energy, such as the mass arising from gluon fields in protons, neutrons and other hadrons, which has nothing to do with the Higgs mechanism rest masses that Koide's rule and its extensions relate to. Gluon field masses arise from the magnitude of quark color charges and the strong force coupling constant instead.

Alas, no comparable first-principles explanation of Koide's rule is widely shared, although a few proposals have been suggested. My own physics intuition is that Koide's rule and its extensions follow from an ansatz based on a dynamic non-linear balancing of the charged lepton flavors and quark flavors, respectively, via flavor-changing W boson interactions governed by the CKM matrix and lepton universality (with the Higgs mechanism really just setting the overall mass scale for the fundamental SM particles). I can see the bare outlines of how something along those lines might work, but I lack the mathematical physics chops to fully express it.
 
  • #264
The square root of mass also appears in equation (2) of the excellent paper "River Model of Black Holes", Hamilton & Lisle, Am.J.Phys. 76:519-532, 2008: https://arxiv.org/abs/gr-qc/0411060

The paper is about a model of black holes on flat space-time. It's what you conclude a black hole must be if you follow the model of GR using geometric algebra (gamma matrices) from the Cambridge geometry group. The non-rotating version is called "Gullstrand-Painleve" coordinates, so I treated it, along with Schwarzschild coordinates, in my paper: https://arxiv.org/abs/0907.0660

The idea is that black holes act as if space is a river that flows into the hole. The square root of the black hole mass is in the velocity of the river.
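For reference, in Gullstrand-Painleve coordinates the "river" flows inward at the Newtonian escape velocity, which is where the square root of the mass enters:
$$ v(r) = -c\sqrt{\frac{2GM}{c^2 r}}\,\hat r $$
so the flow reaches the speed of light exactly at the horizon ##r = 2GM/c^2##.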

Meanwhile, I'm working on a paper that defines a new formulation of quantum mechanics that includes statistical mechanics and the intermediate transitions from wave to particle in wave / particle duality.
 
  • #265
Reviewing the Wikipedia article, I think that we never mentioned Koide's original derivation of the formula.

The original derivation in "Quark and lepton masses speculated from a Subquark model" is very nice. It just says to assume that $$m_{e_i} \propto (z_0 + z_i)^2 $$ with the conditions $$z_1+z_2+z_3=0$$ and $$\frac 13(z_1^2+z_2^2+z_3^2)=z_0^2$$

An interesting corollary is that the "two generations version" is simply a pair of a massless particle and a massive one.
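The construction is easy to verify numerically. A minimal sketch (my own check, not from the paper): parametrize the ##z_i## as ##z_i = z_0\sqrt 2 \cos(\theta + 2\pi i/3)##, which automatically satisfies both conditions, and confirm the 2/3 ratio:

```python
import math

# Koide's construction: sqrt(m_i) = z0 * (1 + sqrt(2)*cos(theta + 2*pi*i/3))
# gives z1+z2+z3 = 0 and (z1^2+z2^2+z3^2)/3 = z0^2 automatically, hence
# sum(m) / (sum(sqrt(m)))^2 = 2/3 for any z0 and any theta that keeps
# every sqrt(m_i) non-negative (cos(...) >= -1/sqrt(2)).
z0, theta = 17.7, 0.2        # arbitrary scale and phase, for illustration
roots = [z0 * (1 + math.sqrt(2) * math.cos(theta + 2 * math.pi * i / 3))
         for i in range(3)]
masses = [r * r for r in roots]
ratio = sum(masses) / sum(roots) ** 2
print(ratio)  # -> 2/3 up to floating point
```

The algebra behind it: the numerator is ##6z_0^2## and the denominator ##(3z_0)^2 = 9z_0^2##, independent of ##\theta##.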
 
  • #266
Elaborating on the original paper, it could be interesting to rewrite the Koide equation as a less spectacular "Koide Postulate"
$$ \operatorname {Tr} D^2 = \operatorname {Tr} Z^2 $$
where D and Z are respectively diagonal and traceless matrices that decompose the Yukawa matrix via
$$ A= D+Z$$
$$ Y = AA^+$$
If ##A## is also diagonal, then so are D and Z, and we recover the traditional Koide formula. But it is still interesting to look at the trace equation. Generically we take:
$$A = \pm \sqrt Y ;\; D= {\operatorname {Tr} A \over 3} Id_3 ;\; Z= A- D$$
and then we compare. For the charged leptons, with the tau mass at the old value 1776.86 ± 0.12 MeV, we get

##\operatorname {Tr} D^2 = 941.52 \pm 0.05 \text{ MeV}##
##\operatorname {Tr} Z^2 = 941.51 \pm 0.07 \text{ MeV}##
If we use the lepton masses at the charm scale as given in the XZZ paper, then as you know the Koide equation is not within the error bars anymore; we get 0.667850 ± 0.000011 instead of just two thirds. But it is interesting that in this approach the hit is taken mostly by the diagonal part, which goes down to 938.27 ± 0.08 MeV, while the traceless part goes only a bit up, to 941.60 ± 0.12 MeV. At the bottom scale, both sides go down proportionally, running until the GUT scale at 890 vs 894 MeV.

EDIT: It can be worthwhile to remark that the old equation is recovered because:
$$\operatorname {Tr} Z^2 - \operatorname {Tr} D^2 = \operatorname {Tr} A^2 - \operatorname {Tr} 2AD = (m_e+m_\mu+m_\tau) - 2 (\sqrt m_e+\sqrt m_\mu+\sqrt m_\tau)^2/3 $$
So the equation can be also reformulated as
$$ \operatorname {Tr} A^2 = \operatorname {Tr} \{A,D\} $$ with ##[A,D]=0##
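The numbers above are easy to reproduce. A minimal sketch (my own, using PDG-style central values in MeV):

```python
import math

# Reproduce Tr D^2 and Tr Z^2 for the charged leptons, taking A = sqrt(Y)
# diagonal, D = (Tr A / 3) * Id, Z = A - D.  Masses in MeV (tau = 1776.86).
m = [0.51099895, 105.6583755, 1776.86]
trA = sum(math.sqrt(x) for x in m)
trD2 = trA**2 / 3            # Tr D^2 = 3 * (Tr A / 3)^2 = (Tr A)^2 / 3
trZ2 = sum(m) - trD2         # Tr Z^2 = Tr A^2 - (Tr A)^2 / 3
print(round(trD2, 2), round(trZ2, 2))  # -> 941.52 941.51
```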
 
  • #267
arivero said:
$$ \operatorname {Tr} D^2 = \operatorname {Tr} Z^2 $$

ADDENDUM: A consequence of this line of thought is that it allows us to reformulate R. Foot's observation in the most abstruse algebraic way possible. Consider that being at 45 degrees from (1,1,1) means that the projection onto the diagonal and its orthogonal complement have the same size. Consider the 3x3 matrices as a vector space with the Hilbert-Schmidt inner product ##\langle A,B\rangle = \operatorname{Tr}(A^+ B)## and its associated norm ##\|A\|_{HS}=\sqrt{\operatorname{Tr}(AA^+)}##. The diagonal matrix ##D## above is just the projection ##A^\|## of ##A## onto the line of identity multiples, and it is the projection we visualize in Foot's cone. So

We call the Koide ansatz the postulate that there exists a decomposition $$Y = A A ^+$$ of the Yukawa mass matrix of the charged leptons such that $$\|A^\parallel\|_{HS}=\|A^\perp\|_{HS}$$ for the projection onto the line of multiples of the identity, using the Hilbert-Schmidt inner product ##\langle U,V\rangle = \operatorname{Tr}(U V^+)## and its associated norm.

When ##A## is self-adjoint, the ansatz produces the Koide formula.

A possible pursuit in this approach could be to investigate the normality of ##A##. It is easy to produce non-normal 3x3 matrices, since every non-diagonal triangular matrix is non-normal. And then we have two Yukawa mass matrices, ##Y_0= A A^+## and ##Y_1= A^+ A##, with the same mass values.

The normal but not self-adjoint case is also interesting. Some of the work of CarlB on circulant matrices could reappear here. Besides, it invites us to consider a generalisation of the square root of a mass that includes complex phases, using ##z=\sqrt m\, e^{i \phi}## and then ##m=z z^*##.
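The projection machinery itself is easy to play with numerically; here is a sketch (my own check, not from any paper) for a generic, non-normal complex matrix:

```python
import numpy as np

# Split any 3x3 matrix A into A_par = (Tr A / 3) * Id (the component along
# the identity) plus the traceless remainder A_perp = A - A_par; the two
# pieces are orthogonal under the Hilbert-Schmidt inner product Tr(U^+ V).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # generic, non-normal

A_par = (np.trace(A) / 3) * np.eye(3)
A_perp = A - A_par

def hs(U, V):
    """Hilbert-Schmidt inner product Tr(U^+ V)."""
    return np.trace(U.conj().T @ V)

print(abs(hs(A_par, A_perp)))  # ~0: the split is orthogonal
# Pythagoras for the HS norm: |A|^2 = |A_par|^2 + |A_perp|^2
print(abs(hs(A, A) - hs(A_par, A_par) - hs(A_perp, A_perp)))  # ~0
```

The ansatz above is then the extra condition ##\|A^\parallel\|_{HS}=\|A^\perp\|_{HS}## imposed on ##A## with ##Y = AA^+##.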

Ah, Koide used this format for his formula in this paper: https://arxiv.org/abs/hep-ph/0005137v1
 
  • #268
Sorry for throwing in a very tangential reference. But I tried to find a precedent for caring about the Hilbert-Schmidt norm of the Yukawa matrices. The only thing I found was a paper from 1997, part of a research program in which they try to construct quantum field theories whose beta functions vanish. It's weird but tantalizing, because we have to care about beta functions too. Maybe we need to think about the new ansatz in the context of RG flows in the space of couplings.
 
  • #269
Yeah, the Hilbert-Schmidt norm is very pedantic :smile:. But once one goes to ##\operatorname{Tr} A A^+##, a lot of stuff can appear.

Still, I am worried that I do not know where to look for fundamental references on this "theory of trace invariants". For instance, I was very surprised when I considered the complex generalisation: the most general matrix such that ##\operatorname{Tr} Z Z^+ = 1## and ##\operatorname{Tr} Z = 0##. I was expecting it to be just some "unphysical phases".

EDIT: let's ask stackexchange too: https://math.stackexchange.com/questions/4734443/more-general-traceless-normalized-matrix Of course they pointed out that any traceless matrix plays the role up to normalisation... not very helpful about how to parametrise the eigenvalues. Surely if I ask they will tell me "just the dimension of the matrix, minus one".
 
  • #270
This thread was launched by the idea of a "waterfall" of Koide-like relations that relate the masses of all the quarks as well as the charged leptons. An esoteric idea buried in that paper (in part 3), is that the more fundamental version of this "waterfall" starts with a massless up quark, but that instantons add a finite correction to the up quark mass, a correction which then propagates through the waterfall and gives rise to the observed values of the masses.

The idea that the up quark is fundamentally massless was proposed as a solution to the strong CP problem (why the theta angle of QCD is zero), but lattice QCD calculations imply that there must be a nonzero fundamental mass, in addition to any mass coming from QCD instantons. However, this just means that the up yukawa must be nonzero at the QCD scale. It is still possible that the up mass comes from instantons of a larger gauge group for which SU(3) color is just a subgroup.

"Non-Invertible Peccei-Quinn Symmetry and the Massless Quark Solution to the Strong CP Problem" illustrates this for the example of SU(9) color-flavor unification. Actually they talk about a massless down quark, but they state it could work for the up quark as well, and they cite some 2017 papers (references 86-87) which feature a massless up quark in the context of SU(3)^3. Also see their reference 2, which posits a similar origin for neutrino masses, and illustrates that these instantons can be thought of as arising from virtual flavored monopoles.
 
  • #271
mitchell porter said:
This thread was launched by the idea of a "waterfall" of Koide-like relations that relate the masses of all the quarks as well as the charged leptons. An esoteric idea buried in that paper (in part 3), is that the more fundamental version of this "waterfall" starts with a massless up quark, but that instantons add a finite correction to the up quark mass, a correction which then propagates through the waterfall and gives rise to the observed values of the masses.

The idea that the up quark is fundamentally massless was proposed as a solution to the strong CP problem (why the theta angle of QCD is zero), but lattice QCD calculations imply that there must be a nonzero fundamental mass, in addition to any mass coming from QCD instantons. However, this just means that the up yukawa must be nonzero at the QCD scale. It is still possible that the up mass comes from instantons of a larger gauge group for which SU(3) color is just a subgroup.

"Non-Invertible Peccei-Quinn Symmetry and the Massless Quark Solution to the Strong CP Problem" illustrates this for the example of SU(9) color-flavor unification. Actually they talk about a massless down quark, but they state it could work for the up quark as well, and they cite some 2017 papers (references 86-87) which feature a massless up quark in the context of SU(3)^3. Also see their reference 2, which posits a similar origin for neutrino masses, and illustrates that these instantons can be thought of as arising from virtual flavored monopoles.
Thanks for the heads up.

The Many Experimentally Determined Constants Of The SM Belong In The Electroweak Sector - The QCD Sector Isn't The Right Place To Look For Answers To Koide-Like Questions

The idea that the quark masses are related to the SU(3) QCD interactions at all, however, as opposed to being basically an electroweak phenomenon, doesn't seem right.

QCD has nothing to do with the CKM matrix, the PMNS matrix, the charged lepton masses that follow the original Koide's rule, W boson and Z boson decays, or the SM Higgs mechanism which has been demonstrated well at the LHC.

QCD related hadron mass doesn't even have the same origin (or even a similar origin) as the masses of the charged fundamental fermions and massive fundamental bosons that arise from the Higgs mechanism. QCD already does its part of the mass generation dynamically, in composite particles, in a way that is quite feasible to calculate with lattice QCD.

Rather than QCD instantons, it seems much more in tune with the way all of the other relevant parts of the Standard Model work to start with a zero mass, or a small mass of self-interaction origin, for the up quark (indeed, the up quark, down quark, electron, and lightest neutrino mass eigenstate are all reasonably close to what they should be due to self-interactions alone), and then to modify it with loop-level corrections (much like the electroweak part of the muon g-2 calculation).

Similarly, the seductive LC & P relationship (i.e. that the sum of the squares of the masses of the fundamental particles is equal to the square of the Higgs vev) still holds to within about 2 sigma, or a hair more, a very slight statistical tension (almost all of the uncertainty arises from the top quark pole mass and the Higgs boson mass, which, combined, are a little light at current best fit values). But this relationship really only makes sense in the context of the electroweak part of the Standard Model, ignoring QCD.

Even if the simple LC & P relationship comes into a greater tension with the best fit fundamental particle masses with new data, it doesn't take much of a BSM fix to solve that in a situation where there are a lot more potential moving parts.

A single BSM 3 GeV gauge boson (perhaps serving a similar role for neutrino mass to the role that the Higgs boson serves for all of the other fundamental particle masses), for example, would be enough to bring the current 2 sigma deviation from best fit values to a perfect LC & P fit.

The two sigma range for the top quark pole mass, according to the latest paper combining LHC data sets, is 171.86-173.18 GeV, with a best fit value of 172.52 GeV. This result is essentially the same as the Particle Data Group value, but cuts the uncertainty in half. If the true value is at the high end of this range, and the true value of the Higgs boson mass is at the high end of its experimentally permitted range, then either the need for a BSM particle vanishes entirely or the mass needed to make it balance gets much smaller than 3 GeV.

In contrast, there's no way to fix any LC & P (or other) discrepancies between theory and experiment in the QCD part of the SM because it doesn't have enough moving parts.

The strong force coupling constant is really the only experimentally measured physical constant in the SU(3) QCD sector of the SM. What is it?

##\alpha_s^{(n_f=5)}(M_Z)## =

0.1171(7), the renormalization group summed perturbation theory (RGSPT) value;

0.1179(9), the Particle Data Group (PDG) value; and

0.1184(8), the 2021 Flavour Lattice Averaging Group (FLAG) value.

These values are consistent with each other at the usual two sigma level, and each is precise to a bit under the one percent level. There are deep intrinsic barriers to making that value more precise, because strong force propagator loops converge much more slowly, and with much more computational effort, than electroweak propagator loops do (and perturbative QCD reaches its peak precision at a much lower order before the series starts to diverge). And I'm not aware of any strong theoretical hint, from any theory, at what value it should have.

So there just isn't much to work with there, and perhaps unsurprisingly, as a result of this simplicity, there isn't even any significant amount of BSM variant theorizing about the QCD sector of the SM. It isn't fruitful because the experimental and lattice data aren't precise enough to confirm or deny any reasonable variant of it.

The beta function of the strong force coupling constant is deterministically set by renormalization theory without any experimental input, and the possible color charges and their relative values are similarly fixed in the theory at small integers, or ratios of small integers, which are confirmed by experiment to high precision and by the need for theoretical consistency.
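To make the point about the beta function concrete: at one loop, the coefficient ##b_0 = 11 - 2n_f/3## is fixed by the theory, and the only experimental input is ##\alpha_s(M_Z)## itself. A minimal one-loop sketch (fixed ##n_f = 5##, so an illustration of the running, not a precision tool):

```python
import math

def alpha_s_one_loop(mu, alpha_mz=0.1179, mz=91.1876, nf=5):
    """One-loop QCD running of the strong coupling.
    The beta function coefficient b0 is fixed by the theory;
    the only measured input is alpha_s(MZ)."""
    b0 = 11 - 2 * nf / 3   # = 23/3 for five active flavours
    return alpha_mz / (1 + b0 * alpha_mz / (2 * math.pi) * math.log(mu / mz))

# Asymptotic freedom: the coupling grows toward low scales,
# shrinks toward high ones.
print(alpha_s_one_loop(10.0))    # ≈ 0.173 at 10 GeV
print(alpha_s_one_loop(1000.0))  # ≈ 0.088 at 1 TeV
```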

So why turn to QCD to explain the unexplained values of the fundamental constants or to reduce the degrees of freedom in the model?

In contrast, eight of the SM constants are CKM/PMNS matrix parameters in the electroweak sector of the model that basically describe W boson interactions, twelve are Higgs boson Yukawas in the electroweak sector, and two are electroweak coupling constants. The Higgs vev doesn't have any measurable QCD contributions either. The three neutrino mass eigenstates may not be Higgs Yukawas, but they certainly have nothing to do with QCD, with which neutrinos don't even interact at tree level.

What's one more experimentally measured physical constant in the electroweak sector if you need it to balance the books and make a credible prediction of new physics, in a sector where you already have 25 experimentally measured physical constants (less one or two degrees of freedom because they aren't fully independent of each other)?

The genius of Koide sum rules, if you can make them work, is that they can, in principle, greatly reduce the number of independent degrees of freedom among those 25 experimentally measured physical constants in the electroweak sector, eliminating seven or more of them, in addition to the one or two we can already trim in the existing SM electroweak sector thanks to related electroweak quantities like the EM and weak force coupling constants and the W and Z boson masses.

The Strong CP Problem Is A Non-Problem

I also continue to be unimpressed with the notion that the Strong CP problem is really a problem at all.

Nature can set its physical constants to anything it wants to in the SM. It is sheer arrogance to impose our expectations on those values, and the quest for "naturalness" driving this "problem" has been perhaps the most fruitless, most effort-consuming scientific program since we tried to explain planetary motions with epicycles.

Also, the fact that gluons are massless in SM QCD by symmetry, combined with the fact that massless particles, travelling at exactly the speed of light, don't experience time in their own reference frame, makes any possibility other than zero CP violation in QCD very hard to justify, or to consider "natural".

We don't theoretically need a zero mass up quark to get that result. So, finding a cheat by which we can get a zero mass up quark isn't very impressive either.
 
Last edited:
  • #272
mitchell porter said:
Also see their reference 2
I meant reference 3.
ohwilleke said:
QCD related hadron mass doesn't even have the same origin (or even a similar origin) to the masses of the charged fundamental fermions and massive fundamental bosons that arise from the Higgs mechanism.
It's been noted here for many years that in Carl Brannen's eigenvalue formulation of the Koide formula, the mass scale is determined by a quantity equal to the mass of a "constituent quark", i.e. a quark in the context of a nucleon, dressed with whatever extra stuff is responsible for most of the nucleon mass. (To get this quantity from Brannen's paper, look up μ in equation 14, which has dimensions of sqrt(eV), and square it.)
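That numerical coincidence is easy to check. In Brannen's parametrization ##\sqrt{m_n} = \mu\,(1 + \sqrt{2}\cos(\delta + 2\pi n/3))##, the three cosines sum to zero, so ##\mu## is just the mean of the square-root masses (a quick sketch with PDG inputs):

```python
import math

# PDG charged-lepton masses, in MeV
m_e, m_mu, m_tau = 0.5109989, 105.65837, 1776.86

# In Brannen's eigenvalue form sqrt(m_n) = mu*(1 + sqrt(2)*cos(delta + 2*pi*n/3)),
# the three cosines sum to zero, so mu is the mean of the square-root masses:
mu = (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) / 3

print(mu**2)  # ≈ 313.8 MeV
```

Squaring the resulting scale lands within about 1 MeV of ##m_{nucleon}/3 \approx 312.9## MeV, the usual "constituent quark" figure.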

Now interestingly, one of the theories of confinement in QCD (such as confinement of quarks inside a nucleon), is monopole condensation in the QCD vacuum. These aren't massive persistent monopoles like in grand unified theory, but rather configurations of the gauge field, that can even be gauge-dependent in some versions of this idea.

Meanwhile, the paper referenced in #270 proposes that fundamental fermions get a contribution to their Higgs-generated mass, via virtual monopole loops - see their Figure 2 on page 26.

However, these monopoles are not just SU(3) color monopoles, they are SU(9) color-flavor monopoles. And this could bring us back to the electroweak sector, since electroweak interactions are the flavor-changing interactions in the standard model.
 
  • #273
Recently I have been reviewing Koide 1981, the preon theory of lepton (and quark) masses. Let me review what Koide did:
  • He postulates that a charged lepton is a composite lh^i (i=1,2,3) of a flavour preon and a generation preon
  • The flavour preon l has subcharge z_0 and the generation preons h^i have subcharge z_i of whatever interaction that keeps the preons in place.
  • The energy of the composite is the sum of the self-energies of each preon and the energy of interaction. Here Koide claims that ##E_i = m_i c^2 = K(a) \frac{z_0^2}{2} + K(a) \frac{z_i^2}{2} + K(a)\, z_0 z_i##, where we can allow the force coupling to depend on a cutoff or scale a, with the requirement of having the same dependency in the three pieces. Note that in the model the generation preon is a boson and the flavour preon is a fermion.
  • The sum of generation preon charges is zero: ##z_1+z_2+z_3=0##. This is a typical group theoretical requisite, also related to anomalies etc. Fine here.
  • The square of the flavour preon charge is the average of the squares of the generation preon charges: ##3 z_0^2 = z_1^2 + z_2^2 + z_3^2##. From the usage it looks as if Koide imagines this to be a sort of normalisation condition.
And with all of this the trick is done; let ##B(a)=K(a)/2c^2## and run the math:
$$m_i = B(a)\,(z_0+z_i)^2$$
$$\left(\sum \sqrt{m_i}\right)^2 = 9\, B(a)\, z_0^2$$
$$\sum m_i = 6\, B(a)\, z_0^2$$
$$\frac{\sum m_i}{\left(\sum \sqrt{m_i}\right)^2} = \frac{2}{3}$$

(Edit: a good integer approximation could be ##z_1=-48, z_2=-21, z_3=69, z_0=50##, but I do not see how it could come from group theory. Perhaps Eddington or Krolikowski.)
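The derivation above, and the integer guess, can be checked numerically in a few lines (the charge triple in the first example is arbitrary, chosen only so that every ##z_0 + z_i## stays positive, which the derivation implicitly needs for the signs of the square roots):

```python
import math

def koide_ratio(masses):
    return sum(masses) / sum(math.sqrt(m) for m in masses) ** 2

def preon_masses(z0, zs, B=1.0):
    # Koide 1981: m_i = B (z0 + z_i)^2, with sum(z_i) = 0 and 3 z0^2 = sum(z_i^2).
    # Every z0 + z_i must stay positive for the square roots to add up to 3*z0.
    return [B * (z0 + z) ** 2 for z in zs]

# Any zero-sum triple, with z0 fixed by the normalisation, gives exactly 2/3:
zs = [1.2, -0.5, -0.7]                       # arbitrary, sums to zero
z0 = math.sqrt(sum(z * z for z in zs) / 3)
print(koide_ratio(preon_masses(z0, zs)))     # 0.666666... (exactly 2/3 up to rounding)

# The integer guess is only approximate (3*50^2 = 7500 vs 7506):
print(koide_ratio(preon_masses(50, [-48, -21, 69])))  # ≈ 0.66693, not exactly 2/3
```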

Most of the stuff in the old papers is not about this formula but about one for quark mixing; in order to get it, Koide postulates that the generation bosons are themselves composites of SU(3) generation fermions, that the bosons that go to the leptons are the ones in the antitriplet of ##3 \times 3 = 6 + \bar 3##, and that the bosons that go to the quarks are the mix of the singlet and the two octet neutrals that we get from ##3 \times \bar 3 = 8 + 1##.
 
Last edited:
  • #274
Oh well, so the sBootstrap implies the Koide formula.

1721406948266.png


(if one can fake the SU(2) charge so that it works as z_0)
 
  • #275
arivero said:
Oh well, so the sBootstrap implies the Koide formula.

View attachment 348581

(if one can fake the SU(2) charge so that it works as z_0)
FYI, the image is very fuzzy and hard to make out, even when you click on it.
 
  • #276
It was a quick note, to record that it has taken me more than ten years to notice how to recover Koide from group theory, while it is trivial given SU(3). So I am a bit ashamed.

All the action has been happening in the other thread. There we saw that the sBootstrap solution can be obtained from, or is equivalent to, an SU(15) group that optionally lives inside SO(32). We want to break it down to SU(3) colour times SU(3) flavour times SU(2) flavour, which we label as r,g,b times d,s,b times u,c to keep track of the diquarks and dibosons. To do this, we factor out colour first, so we are left with an anticoloured 15 of SU(5), a coloured conjugate 15, and a neutral 24. Now we look at the flavour irreducible representations.

The 15 has a triplet (1,3) with the horrible +4/3 scalars, a sextuple (3,2) with the three families of down squarks, and a sextet (6,1) with the three families of up squarks.

The 24 has a neutral triplet (1,3), a neutral singlet (1,1) and a neutral octet (8,1): the 12 sneutrinos. And then two sextuples (3,2) and (3, 2) that are our charged leptons, pretty well organised in SU(3). So now we just do the trick mentioned in #273: we assign to z_1, z_2, z_3 any combination of the coordinates of the SU(3) roots; we ask z_0 to take the value required by Koide's postulate, which we are free to do as it is the charge along the SU(2) symmetry; and voilà, the masses meet the Koide formula.
 
  • #279
arivero said:
View attachment 348854

Updated the section on masses also in arxiv, v3 of https://arxiv.org/abs/2407.05397
A key would be appreciated. Are the entries in the body of the chart in MeV?

What do the columns represent?

Also, I love this illustration from the pdf:

Screenshot 2024-07-25 at 2.13.04 PM.png
 
  • #280
I read the article earlier this week; I like it and have kept a copy for useful references. I haven't looked at the updated tables yet, though.
 
  • #281
I think that the drawing is at least the second version from De Rujula, but the only one he found in his archive, as it is the one used in some publication. I saw another version during a talk in my hometown, when I was in the first year of my undergraduate physics studies and someone gave a series of talks addressed to secondary school teachers. Around 1985-1986, then.

Yes, all the masses are in MeV. What we are doing here is just applying the mass formula ##M(a,b) = k (z_a+z_b)^2##. When one of the charges is constant, say ##z_q##, and the other three sum to zero, ##z_1+z_2+z_3=0##, then ##k(z_q+z_i)^2## is a Koide formula if and only if ##3 z_q^2 = z_1^2+z_2^2+z_3^2##. The zero sum rule is the traditional cosine of the most common version here in the thread. I have used alpha instead of delta for the phase because I am doing more jumps than usual. The Koide phase is not only periodic with period ##2\pi/3##; it also allows reflections at ##\pi/4## and ##\pi/2##. So I am not using the usual phase for the leptons, but a reflected one.

Besides checking that I am recovering the usual Koide formula, I was interested in the singular points and in what kind of particles I get. I went with MeV masses because I was tired of exact roots of two, roots of three, etc.

The particles (or pairs of preons) are organised according to representations of SU(3)xSU(2). When the SU(2) is a singlet, it means the two particles are from the SU(3), i.e. they are pairs from the ones I call d,s,b. When the SU(2) is a doublet, one of the particles is a "c" or a "u". For these particles, the charge "z_c" does not come from the cosine but from the rule of the average of squares. I mean, they are the "z_0" of the usual Koide formula.

In the ##\pi/4## case the charge of "s" also coincides with z_0, so what happens is that we get a massless lepton, and the s particle can also act to produce a pretty exotic Koide tuple in the quark sector: ds, ss, bs.

(More properly, the "antiquark" sector, as a pair such as ds is an antiquark)

I am impressed by the "sneutrino" sector, as I had not anticipated the obvious thing that half of the scalar neutrinos are massless, independently of the Koide phase.
 
Last edited:
  • #282
Mitchell asked me if I can say something about the preons in each tuple. I do not see any connection beyond the factor of three we knew about, and the coincidence with the constituent quark mass, 313 MeV.

What we have, with all the masses in GeV and all current values from pdg, is

mt=(√29.67+√59.13)^2
mb=(√29.67-√11.57)^2 =(√0.925+√1.174)^2
mc=(√29.67-√18.65)^2 =(√0.925+√0.028)^2
ms =(√0.925-√1.61)^2

tau=(√0.3139+√0.5972)^2
mu=(√0.3139-√0.05531)^2
e=(√0.3139-√0.289)^2

EDIT: on inspection, there are some near integer multiples:
2 * 29.67 -> 59.13 <- 99*0.5972
2* 0.0277379 = 0.0554759 -> 0.0553085
2* 0.5972 -> 1.174
2* 0.289 -> 0.5972
4* 0.289 -> 1.174
other worse
33*0.028 -> 0.925
40*0.289 -> 11.57
...
and including the original quarks and leptons:
7 * 0.5972 -> mb
3 * 0.5972 -> tau
mt/ms -> mproton/me -> 0.925/me
4*0.3139 -> mc
3*ms -> 0.289

Not sure how relevant; or perhaps I should also check with mc_predicted, ms_predicted instead of the measured values. Note that the tau, mu, e triple is precise enough to rule out exactness for all of these factors of two. They look just like QCD-like quantities: 289 MeV, 55 MeV, 597 MeV, 313 MeV (the average of the other three, as required by Koide)...
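For the record, the square-root decompositions quoted above can be re-checked mechanically (same numbers as in the post, all in GeV; approximate values in the comments):

```python
import math

def plus(a, b):   # (sqrt(a) + sqrt(b))^2
    return (math.sqrt(a) + math.sqrt(b)) ** 2

def minus(a, b):  # (sqrt(a) - sqrt(b))^2
    return (math.sqrt(a) - math.sqrt(b)) ** 2

# Reproducing the pairs quoted in the post:
print(plus(29.67, 59.13))      # ≈ 172.57   (top)
print(minus(29.67, 11.57))     # ≈ 4.184    (bottom)
print(minus(29.67, 18.65))     # ≈ 1.273    (charm)
print(minus(0.925, 1.61))      # ≈ 0.0943   (strange)
print(plus(0.3139, 0.5972))    # ≈ 1.777    (tau)
print(minus(0.3139, 0.05531))  # ≈ 0.1057   (muon)
print(minus(0.3139, 0.289))    # ≈ 0.00051  (electron)
```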
 
Last edited:
  • #283
As I mentioned in the sBootstrap thread, I had put into a preprint some calculations of Koide masses using the original composite idea, and they have happened to be published as Eur. Phys. J. C 84, 1058 (2024). https://doi.org/10.1140/epjc/s10052-024-13368-3. So as a collateral effect we now have another published paper that mentions:
  • the waterfall, in a footnote.
  • the tuples (0ds), (scb) and (cbt).
  • the relation sum(scb) = 3 sum (leptons)
 
  • #284
arivero said:
As I mentioned in the sBootstrap thread, I had put into a preprint some calculations of Koide masses using the original composite idea, and they have happened to be published as Eur. Phys. J. C 84, 1058 (2024). https://doi.org/10.1140/epjc/s10052-024-13368-3. So as a collateral effect we now have another published paper that mentions:
  • the waterfall, in a footnote.
  • the tuples (0ds), (scb) and (cbt).
  • the relation sum(scb) = 3 sum (leptons)
Congratulations!
 
  • #285
A few comments on the waterfall, from a broadly orthodox model-building perspective.

The heart of the waterfall is the idea that the quarks "tbcsud" (I'm on my phone and won't attempt sophisticated formats) form a set of four sequential overlapping "Koide triplets" tbc, bcs, csu, sud, each of which satisfies the Koide formula.

Thanks to R. Foot, we know that a Koide triplet of masses can be represented as a certain type of three-dimensional vector. The waterfall as a whole would have some kind of representation in terms of overlapping Foot vectors in a six-dimensional space, a "spiral staircase" shape. One might hope that this shape represents the minimum of a potential, or even an actual arrangement of branes in extra physical dimensions.
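To spell out Foot's representation: the Koide relation ##Q = 2/3## is equivalent to the vector ##(\sqrt{m_1}, \sqrt{m_2}, \sqrt{m_3})## making a 45° angle with ##(1,1,1)##, since ##Q = 1/(3\cos^2\theta)##. A quick check with the charged leptons:

```python
import math

def foot_angle_deg(masses):
    """Angle between (sqrt(m1), sqrt(m2), sqrt(m3)) and (1,1,1), in degrees.
    Foot's observation: Koide's Q = 2/3 holds exactly when this is 45 degrees,
    because Q = 1 / (3 cos^2 theta)."""
    v = [math.sqrt(m) for m in masses]
    cos = sum(v) / (math.sqrt(3) * math.sqrt(sum(x * x for x in v)))
    return math.degrees(math.acos(cos))

# PDG charged-lepton masses (MeV):
print(foot_angle_deg([0.5109989, 105.65837, 1776.86]))  # ≈ 45.00
```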

However, in the standard model, the quark masses are eigenvalues of the up and down yukawa matrices. Assuming that the yukawas are dynamically determined, one would therefore be looking for a symmetry and/or dynamics in which the eigenvalues of those matrices are coupled appropriately.

Suppose we go further and ask how this looks from the perspective of grand unification, like SU(5) or SO(10). Naive SU(5) comes out wrong since it implies that the masses of d, s, b quarks are the same as the masses of electron, muon, tau. Usually it is hoped that this is ameliorated by the running of the yukawas, sometimes with the influence of some extra new fields included. The Georgi-Jarlskog ansatz is one that has been mentioned a few times here.

With respect to the construction of quark Koide triplets in such a context, the three fermion generations are independent and so one would be free to aim at triplets like tcu or bsd. But the waterfall calls for Koide triplets that have two quarks from the same generation. It's not clear to me yet, if there are extra constraints when trying to construct the waterfall from yukawas in SU(5) unification...

Meanwhile, I will mention another aspect of the waterfall, the relationship between the bcs triplet and the original electron, muon, tau triplet. This contains 5 out of the 6 fermions involved in the SU(5) relationship mentioned earlier, but with charm quark replacing down quark. It also involves a factor of 3, such as shows up in Georgi-Jarlskog (where it's due to the three colors of SU(3)). Could one possibly obtain the waterfall relationship from a twisting or deformation or modification of the SU(5) GUT relationship?

Finally, I'll mention grand unification in the context of "F theory", which is a corner of string theory in which phenomenology involves intersecting branes in the extra dimensions. Gauge fields are associated with individual branes, other particles (fermions and Higgs) at the intersection of two branes, and yukawa interactions (of Higgs, left fermion, right fermion) at the point intersection of three branes.

I mention this because it's a highly geometric framework in which one could try to embed or realise something like the staircase of Foot vectors mentioned above. But to realise the waterfall there, you'd have to deal with the peculiarities already mentioned - the coupling of eigenvalues, and the non-standard but SU(5)-like relationship to the yukawas of the charged leptons.
 
  • #286
mitchell porter said:
A few comments on the waterfall, from a broadly orthodox model-building perspective.

The heart of the waterfall is the idea that the quarks "tbcsud" (I'm on my phone and won't attempt sophisticated formats) form a set of four sequential overlapping "Koide triplets" tbc, bcs, csu, sud, each of which satisfies the Koide formula.

Thanks to R. Foot, we know that a Koide triplet of masses can be represented as a certain type of three-dimensional vector. The waterfall as a whole would have some kind of representation in terms of overlapping Foot vectors in a six-dimensional space, a "spiral staircase" shape. One might hope that this shape represents the minimum of a potential, or even an actual arrangement of branes in extra physical dimensions.

Improving On The Waterfall and Conceptual Implications Of This Improvement

One notable point is that you can significantly improve the fit of the Koide triple waterfall by adding an adjustment for the third quark into which the middle quark of the triple can transform via a W boson interaction, using the mass of that third quark weighted by the probability of a W boson transformation to it (from the relevant CKM matrix element squared).

I worked that out here. The waterfall values are:

Inputs
me = 0.510998910 MeV ± 0.000000013 (i.e. one part per 39,307,608)
mμ = 105.6583668 MeV ± 0.0000038 (i.e. one part per 2,780,483)

Outputs
mτ = 1776.96894(7) MeV (tau) - PDG 1776.82 +/- 0.16 (i.e. one part per 11,105) (0.93 SD)
mt = 173.263947(6) GeV (top) - PDG 173.070 +/- 0.888 (i.e. one part per 194) (0.22 SD)
mb = 4197.57589(15) MeV (bottom) - PDG 4180 +/- 30 (i.e. one part per 139) (0.58 SD)
mc = 1359.56428(5) MeV (charm) - PDG 1275 +/- 25 (i.e. one part per 51) (3.38 SD)
ms = 92.274758(3) MeV (strange) - PDG 95 +/- 5 (i.e. one part per 19) (0.55 SD)
md = 5.32 MeV (down) - PDG 4.8 +/- 0.4 (i.e. one part per 12) (1.3 SD)
mu = 0.0356 MeV (up) - PDG 2.3 +/- 0.6 (i.e. one part per 4) (2.26 SD)

Koide ratios of PDG mean values of selected triples is as follows:
t-b-c 0.6695
b-c-s 0.4578
c-s-u 0.622
s-c-d 0.60563
s-u-d 0.564
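These ratios are straightforward to reproduce from the PDG central values quoted in the post (masses in MeV; ##Q = \sum m / (\sum \sqrt{m})^2##):

```python
import math

def koide(m1, m2, m3):
    return (m1 + m2 + m3) / (math.sqrt(m1) + math.sqrt(m2) + math.sqrt(m3)) ** 2

# PDG central values used in the post, in MeV:
mu, md, ms, mc, mb, mt = 2.3, 4.8, 95.0, 1275.0, 4180.0, 173070.0

print(round(koide(mt, mb, mc), 4))  # 0.6695
print(round(koide(mb, mc, ms), 4))  # 0.4578
print(round(koide(mc, ms, mu), 4))  # 0.622
print(round(koide(ms, mc, md), 5))  # 0.60563
print(round(koide(ms, mu, md), 4))  # 0.564
```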

The worst fits (the central c quark and the central u quark) are the cases where the probabilities of the central quark transforming to a third quark are greatest. The best fit, t-b-c, is the case where the probability of the central quark transforming to the third quark is smallest.

The values with the third quark adjustment are:

mt=172.743 GeV PDG 173.070 +/- 0.888 per t-b-c avg adj down with ts (0.16%) and td (7.52*10^-5)
mb=4193 MeV PDG 4180 +/- 30 per b-c-s adjusted down with ub (0.11%)
mc=1293 MeV PDG 1275 +/- 25 per b-c-s adjusted down with cd (4.9%)
ms=92.55 MeV PDG 95 +/- 5 per b-c-s avg adj up with ts (0.16%) and down with us (4.97%)
md= 5.12 MeV PDG 4.8 +/- 0.4 per s-c-d avg adj of up with td (7.52*10^-5) and down with ud (94.9%)
mu=4.60 MeV PDG 2.3 +/- 0.6 per s-u-d avg adj up with ub (0.11%) and up with us (4.97%)

The adjustments bring all of the formula values for the quark masses (and the tau lepton), except the up quark, to within 0.8 standard deviations of the experimental values, and to the right order of magnitude in the case of the up quark - now off by a factor of 2 rather than a factor of 64.6, much closer to the mark on a percentage basis - without any experimental inputs other than the electron mass, the muon mass, and several of the CKM matrix element values (themselves fixed by four parameters)! Thus, the formula comes very close to reproducing the Standard Model values despite dispensing with 7 of the experimentally measured parameters of the Standard Model. . . .

Before v. After Adjustments Experimental Standard Deviations Between Theory and Experimental Value
top 0.22 v. 0.368 SD
bottom 0.58 v. 0.433 SD
charm 3.38 v. 0.72 SD
strange 0.55 v. 0.49 SD
down 1.3 v. 0.8 SD
up 2.26 v. 3.83 SD

Conceptually, the idea that this suggests is that the quark masses are the result of dynamical balancing via W boson interaction (real and virtual) of all possible transformations of a particle into a different particle, weighted by the probabilities of each possible transformation, according to a balancing formula of which Koide's Rule is a special case.

Koide's rule is so close to perfect (in contrast with the extended Koide's rule for the quarks) because of charged lepton universality, which makes the weighting of the different transformation possibilities trivial, and because the neutrino masses are so negligible that they don't materially tweak the values reached with the charged leptons alone.

This also suggests that the CKM matrix probabilities are conceptually prior to the quark masses. Likewise, charged lepton universality is conceptually prior to the charged lepton masses.

If I had a bit more mathematical and particle physics chops, I would think that it would be possible to take this basic intuition and craft it into a more rigorous and natural mathematical formulation and to simultaneously solve for all of the quarks at once (as the approach I used is really a tree level adjustment that doesn't take into account "higher loop" effects where the adjustments affect other adjustment and core Koide value calculations).

The respective roles of the Higgs boson and W boson in setting fundamental fermion masses

This view essentially gives the W boson more importance and the Higgs boson and field less importance in setting the particular fundamental fermion masses.

The Higgs vev, which is a function of the W boson mass and the weak force coupling constant, sets the mass scale for all of the fundamental Standard Model particles collectively (the sum of the Higgs field Yukawas of the fundamental Standard Model particles is within two sigma of exactly 1, so each Yukawa basically allocates a percentage of the Higgs vev to its particular particle), under the phenomenological LC & P relationship.

But the value of those Yukawas is dynamically balanced between particles based upon their W boson transformations into each other.

The Quark and Charged Lepton Mass Coincidence

The coincidence that allows the mass scale of the quarks to be derived from the mass scale of the charged leptons is something of a mystery in this model. But there is probably something to it because it works.

Maybe W and Z boson decay probabilities, which have a factor of three for quarks due to their three color charge variants, play a part in this coincidence.

Footnote On Neutrino Masses

It is also notable that the ratio of the lightest neutrino mass to the electron mass is probably on the same order of magnitude as the ratio of the weak coupling constant to the electromagnetic coupling constant, perhaps suggesting that the first generation lepton masses may be a function of their self-interactions (something that has been suggested before in the case of the electron). Of course, as Brannen correctly discerned, the neutrino masses can't form a good Koide triple without a sign change for one of those masses. Still, I'm optimistic that the neutrino masses could be due to W and Z boson interactions with neutrinos, rather than the Higgs mechanism, which would allow them to have Dirac mass despite not having both left and right handed versions.

Notably, all fundamental SM particles that have non-zero masses have weak force interactions, while all fundamental SM particles that have zero masses do not interact via the weak force. This is also suggestive of a W boson role in establishing those masses.
 
Last edited:
  • #287
ohwilleke said:
mc = 1359.56428(5) MeV (charm) - PDG 1275 +/- 25 (i.e. one part per 51) (3.38 SD)
Yep, I did not go very deep on this in the paper; it was supposed to be a letter and I was expecting the referees to ask me to delete some section. But it is amusing that if one just ignores or corrects the charm failure, it goes better.

It seems to me that your idea is to relate the failures to mixing. That would be very much in line with the first paper in the saga, H. Harari, H. Haut, J. Weyers, Phys. Lett. B 78, 459-461 (1978), I believe.
 
  • #288
arivero said:
Yep, I did not go very deep on this in the paper; it was supposed to be a letter and I was expecting the referees to ask me to delete some section. But it is amusing that if one just ignores or corrects the charm failure, it goes better.

It seems to me that your idea is to relate the failures to mixing. That would be very much in line with the first paper in the saga, H. Harari, H. Haut, J. Weyers, Phys. Lett. B 78, 459-461 (1978), I believe.
Thanks for the reference.
 
  • #289
We are lacking a mechanism to produce a Koide relation among masses in a triple like (uds), as opposed to triples like (uct). The latter masses are eigenvalues of a single mass matrix, and one can express the formula in terms of the properties of a single matrix, e.g. Carl Brannen's work. (uds) mingles eigenvalues from up and down mass matrices. The (uds) mass values imposed by symmetries in Harari et al form a Koide triple, but not in a stable way (i.e. the property is not preserved if you change the parameters).

We should investigate e.g. dependence of matrix eigenvalues on parameters, as a step towards understanding joint dependence of eigenvalues of two matrices on the same parameters, as a step towards finding ways to enforce a waterfall relation.

@ohwilleke suggests a "dynamic balancing" involving the weak interaction, which is at least capable of coupling u-type and d-type quarks. (I would suggest focusing, not just on the W boson, but on its spin-0 component specifically, which after all comes from the Higgs field.)

Now, we already know effective mass and charge can be renormalized by virtual particles. However, this doesn't involve the kind of reciprocity among particle species that Andrew's slogan suggests to me. Reciprocity is more reminiscent of the flavon idea, according to which yukawas aren't just parameters, but are actually vevs of dynamical scalar fields that can interact with each other via some potential.

So I'm inclined to think in terms of some Higgs-flavon interaction potential (and from a stringy perspective, all these vevs might be moduli, i.e. sizes of cycles in the extra dimensions, distances and angles between branes, magnitudes of stringy p-form "fluxes", and so forth).
 
  • #290
A practice problem would be:

Suppose you have two 2x2 matrices f_ij and g_ij which diagonalized are diag(a,c) and diag(b,d).

What kind of "interaction potential" involving the fs and gs, results in (abc) and (bcd) satisfying the Koide formula?

edit: Also, a further thing to think about. I have thought of a Koide triplet across the fermion generations, like the original (e mu tau), as more natural, because (considered as mass matrix eigenvalues) those quantities all come from the same yukawa matrix, whereas these "sequential" triplets like (scb) mingle parts of different yukawa matrices.

However, eigenvalues always come with eigenvectors. So if we really need a "matrix" perspective on these unnatural sequential triplets, we can assemble a corresponding matrix out of the eigenvectors from the original yukawa matrices.

If that's too abstract: We are dealing with two yukawa matrices, one for up-type quarks, the other for down-type quarks. The quark masses are eigenvalues of the yukawa matrices. As eigenvalues, they will have accompanying eigenvectors. Each triplet in the waterfall consists of two quarks of one type, and one quark of the other type. So for each such triplet, we can assemble an "artificial matrix" out of the corresponding eigenvectors. The question then, is whether a potential expressed in terms of these "artificial matrices" (which would be most natural for obtaining the waterfall), could be re-expressed in terms of the physical matrices?

(This chain of thought inspired by reading about the "eigenvector-eigenvalue identity" that was rediscovered by neutrino physicists.)
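
For reference, the identity in question (Denton, Parke, Tao, Zhang) relates eigenvector components of a Hermitian matrix to eigenvalues of the matrix and of its principal minors: ##|v_{i,j}|^2 \prod_{k \neq i} (\lambda_i(A) - \lambda_k(A)) = \prod_k (\lambda_i(A) - \lambda_k(M_j))##, where ##M_j## is A with row and column j deleted. A quick numerical check on a random Hermitian matrix (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2        # random Hermitian matrix

lam, V = np.linalg.eigh(A)      # eigenvalues ascending, columns of V are eigenvectors

i, j = 1, 2                     # check one (eigenvalue, component) pair
Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)  # j-th principal minor
mu = np.linalg.eigvalsh(Mj)     # its eigenvalues

# Eigenvector-eigenvalue identity:
# |V[j,i]|^2 * prod_{k != i} (lam[i] - lam[k]) = prod_k (lam[i] - mu[k])
lhs = abs(V[j, i]) ** 2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
rhs = np.prod(lam[i] - mu)
print(lhs, rhs)                 # the two sides agree numerically
```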
 
Last edited:
  • #291
It may be easier if we go in the other direction. So: start with the sequence of six masses implied by @arivero's waterfall, whether the "unperturbed" form (page 4) or the more realistic "perturbed" form (page 3). It consists of four successive overlapping Koide triplets.

Now suppose we follow @CarlB and associate each triplet with a 3x3 circulant matrix, with the masses as eigenvalues. Circulants have the neat property that their eigenvectors are always the same - the eigenvectors of an nxn circulant are the vectors whose components are powers of the complex nth roots of 1 (the columns of the discrete Fourier transform matrix).

So (again following Brannen 2006, page 1), if we call the eigenvectors of a 3x3 circulant |1>, |2>, |3>, then the eigenvalue-eigenvector pairs for any given form of the waterfall can be e.g. ##m_t## |1>, ##m_b## |2>, ##m_c## |3>, ##m_s## |1>, ##m_u## |2>, ##m_d## |3>. As may be seen, each triplet has all the necessary basis vectors.

Furthermore, if we now assemble the analogs of the physical yukawa matrices from these - ##m_t## |1>, ##m_c## |3>, ##m_u## |2> and ##m_b## |2>, ##m_s## |1>, ##m_d## |3> - they also turn out to be circulant! Study of a circulant form for the physical yukawas dates back at least to Harrison & Scott 1994.
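
A small numerical check of this circulant bookkeeping, assuming the eigenvalue-to-eigenvector assignment just described (the mass values below are rough illustrative numbers in MeV, not fits):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
# Columns of F are the universal circulant eigenvectors |1>, |2>, |3>
F = np.array([[w ** (j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)

def circulant_from_eigs(lams):
    """Build the 3x3 matrix whose eigenvalues on |1>, |2>, |3> are lams."""
    return F @ np.diag(lams) @ F.conj().T

def is_circulant(C, tol=1e-9):
    """A matrix is circulant iff each row is the previous row shifted right."""
    return all(np.allclose(np.roll(C[k - 1], 1), C[k], atol=tol)
               for k in range(1, 3))

# Toy "waterfall" masses (illustrative only):
m_t, m_b, m_c, m_s, m_u, m_d = 172e3, 4180.0, 1270.0, 93.0, 2.2, 4.7

# Up-type analog uses (m_t, |1>), (m_u, |2>), (m_c, |3>);
# down-type uses (m_s, |1>), (m_b, |2>), (m_d, |3>):
Y_up = circulant_from_eigs([m_t, m_u, m_c])
Y_down = circulant_from_eigs([m_s, m_b, m_d])
print(is_circulant(Y_up), is_circulant(Y_down))  # both circulant
```

Since circulants are exactly the matrices diagonalized by the discrete Fourier basis, any assignment of masses to |1>, |2>, |3> yields a circulant; the physics content lies in which mass goes with which eigenvector.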
 
  • #292
I'm giving a talk on Jan 9, 2025 at the Joint Mathematics Meetings (held this year in Seattle) that has to do with the Koide rules. I've put it up here: https://www.brannenworks.com/aca/ComplexTime.pdf but I expect to make some changes to it at the meeting, having to do with the adjoint rule for gauge bosons when, instead of a Lie group, one is dealing with a finite subgroup of a Lie group. Here the Lie group is weak SU(2), and my hope is that I can derive the weak mixing angle.
 
  • Like
Likes arivero, ohwilleke and mitchell porter
  • #293
Via @arivero, I recently ran across the claim that people are using ChatGPT's "o1" model all wrong - it's not there for chatting, it's a "report generator" that you should prompt with as many details and as precise instructions as possible. So I decided to try it out on my most recent thoughts about two of our favorite extensions of the Koide relation. This is what it came up with after it "thought about flavor symmetries and Koide's formula for 46 seconds". It didn't outright come up with a new Lagrangian - I didn't ask it to - but I did ask it what flavor symmetries should be considered, in order to explain @arivero's waterfall, and in my opinion the answer is what you might get from a graduate student after a quick literature review - which is pretty good for a general-purpose conversational AI that replies in less than one minute!
 
  • Like
Likes arivero and ohwilleke
  • #294
mitchell porter said:
Via @arivero, I recently ran across the claim that people are using ChatGPT's "o1" model all wrong - it's not there for chatting, it's a "report generator" that you should prompt with as many details and as precise instructions as possible. So I decided to try it out on my most recent thoughts about two of our favorite extensions of the Koide relation. This is what it came up with after it "thought about flavor symmetries and Koide's formula for 46 seconds". It didn't outright come up with a new Lagrangian - I didn't ask it to - but I did ask it what flavor symmetries should be considered, in order to explain @arivero's waterfall, and in my opinion the answer is what you might get from a graduate student after a quick literature review - which is pretty good for a general-purpose conversational AI that replies in less than one minute!
Share the answer! Also, 46 seconds is quite a long time for an AI answer. You made it work much harder than the usual user does to come up with it. UPDATE: Sorry, I didn't see the links to the AI answers at first.
 
Last edited:
  • #295
I clicked on the link and saw the impressive ChatGPT answer. I'd like to see the complete discussion. I've also found ChatGPT extremely useful, mostly for writing python code to make 3d models using the horribly complicated Blender software. It doesn't write code you can use, but instead gives you a starting point (which is so useful in writing code from scratch). I also like the fact that it's so polite to me, as I am to it.

And I gave my presentation; it was not very well attended. Right now I'm thinking that I should propose that the extension of the Pauli matrices I gave in https://www.scirp.org/pdf/jmp_2020081416294422.pdf - which amounts to enlarging the Pauli matrices plus the unit matrix by crossing them with a finite group of size N, giving, for example, N sigma-x matrices - is sufficient to give all of the standard model and gravity.

Finally, my paper got rejected at viXra because I truthfully said that I expected to update it before sending it for publication. So I sent it to arXiv, and it's been "on hold" there for the last few days. The last time I had an arXiv submission put on hold, it was stuck there for about 3 weeks, and I was so PO'd that I yanked it after they finally decided it was good enough for them. If the paper ends up on arXiv instead of viXra, I am going to risk dying from laughter.
 
  • #296
CarlB said:
Finally, my paper got rejected at viXra because I truthfully said that I expected to update it before sending it for publication. So I sent it to arXiv, and it's been "on hold" there for the last few days. The last time I had an arXiv submission put on hold, it was stuck there for about 3 weeks, and I was so PO'd that I yanked it after they finally decided it was good enough for them. If the paper ends up on arXiv instead of viXra, I am going to risk dying from laughter.
Makes you wonder what the standards are nowadays... :oldbiggrin:
As for me, I never got "published" on arXiv; I think I first need to get an endorsement from someone who is active in academia.
Now you understand why I am mad... :cool:
 
  • #297
arXiv allows me to "self endorse" in gen-physics because I have a number of papers in gen-physics, which is where they put crank papers that apparently follow the rules in some way. It might help that, while a grad student working for the big gravity wave experiment at Hanford, I had my name stuck on a lot of big gravity wave papers. These are papers with 100+ authors.
 
  • Like
Likes arivero and ohwilleke
  • #298
CarlB said:
arXiv allows me to "self endorse" in gen-physics because I have a number of papers in gen-physics, which is where they put crank papers that apparently follow the rules in some way. It might help that, while a grad student working for the big gravity wave experiment at Hanford, I had my name stuck on a lot of big gravity wave papers. These are papers with 100+ authors.
I think I'll bypass the PhD route; I don't think I'd get funding anyway.
As life dreams slowly disappear, I believe I'll be a good high school maths and physics teacher.
And there's summer vacation to keep up with the advanced mathematics and physics that interest me.
If I ever find something genuinely new in maths or physics, I guess I'll publish it on viXra or ResearchGate.

It seems I am already considered a crank, so why not enjoy the title... :oldbiggrin:
 
  • #299
You can also give a paper at mathematics conferences. I've done that twice now, most recently on January 9th. I see I didn't post a copy of my talk here; it includes the spoken part and is quite short:
https://www.brannenworks.com/aca/brannenjjm2025.pdf

It's for mathematicians, specifically for the harmonics sessions, so it's oriented that way. The original paper for this can't easily be read by physicists and is here:
https://www.scirp.org/pdf/jmp_2020081416294422.pdf

Right now I'm preparing 3 more papers for JMP at SCIRP. I like them because they give half-decent reviews of my papers with useful changes, and they are not expensive but make papers available free for anyone to read. And I figure that since they're Chinese, they'll likely still be around 50 years from now, when a lot of the respectable journals will be long gone. Their papers are pretty. A problem is that they tend to rewrite my papers a bit and "correct" the English into Chinese English, and it's difficult to argue with them about this.
 
  • #300
Aaaand arXiv rejected it after looking at it for 16 days. The basic problem is that it gives a new formulation of QM, and if you've been in the business for a while and already know how it's supposed to be done, it's as tough to read as if you'd gone back to school and had to take introductory QM all over again. In other words, a lot of work.

So I suppose what happened is that the grad students they have looking at things approved it, and then an older physicist took a quick look and rejected it.
 