All the lepton masses from G, pi, e

  • Thread starter arivero
In summary, the conversation revolved around using various equations and formulae to approximate the values of fundamental constants such as the Planck Mass and the fine structure constant. The discussion also delved into the possibility of using these equations to predict the masses of leptons and other particles. Some participants raised concerns about the validity of using such numerical relations, while others argued that it could be a useful tool for remembering precise values.

Multiple-choice poll: check all you agree with.

  • Logarithms of lepton mass quotients should be pursued.

    Votes: 21 26.6%
  • Alpha calculation from serial expansion should be pursued

    Votes: 19 24.1%
  • We should look for more empirical relationships

    Votes: 24 30.4%
  • Pythagorean triples approach should be pursued.

    Votes: 21 26.6%
  • Quotients from distance radii should be investigated

    Votes: 16 20.3%
  • The estimate of the anomalous magnetic moment should be investigated.

    Votes: 24 30.4%
  • The estimate of Weinberg angle should be investigated.

    Votes: 18 22.8%
  • Jay R. Yablon's theory should be investigated.

    Votes: 15 19.0%
  • I support the efforts in this thread.

    Votes: 43 54.4%
  • I think the effort in this thread is not worthwhile.

    Votes: 28 35.4%

  • Total voters
    79
  • #281
The electron's magnetic anomaly, being the result of a (very) complex
series, is:

0.00115965218085(76).

Now could there be a direct analytical calculation of this series?
An interesting starting point seems to be:

[tex]\frac{e^{-e-e^{-1}}}{(2\pi)^2}\ =\ 0.0011570109\ \approx \ \frac{\alpha}{2\pi} + ...[/tex]

That's in the right range and it becomes more interesting if we look at the
error in the ratio between the two:

a/a' = 1.0022828 = 1.001140762^2

So in the error we again see our target value, which may be a sign that
our expression could be a simplified version of a more complex analytical
formula.

Regards, Hans.
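
For reference, here is a quick Python check of the numbers above (a sketch added in editing; the measured anomaly is the value quoted in this post):

Code:
import math

a_e = 0.00115965218085                        # electron anomaly, as quoted above
approx = math.exp(-math.e - 1 / math.e) / (2 * math.pi) ** 2
ratio = a_e / approx

print(approx)                        # ~0.0011570109
print(ratio, math.sqrt(ratio))       # ~1.0022828 and its square root ~1.0011408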
 
Last edited:
  • #282
Mass relations between the vector bosons and the leptons.

If we interpret the frequency [itex]m\ c^2/h[/itex] of the leptons as a precession,
then there must also be another frequency: The frequency at which it spins.

(Simply like: Spinning top frequency versus the frequency at which it
precesses. The harder we try to tilt the top, the faster it precesses.)

The first which comes to mind is the magnetic anomaly. This is the ratio
between the orbit frequency of a (lepton) in a magnetic field and the
frequency at which its spin direction precesses.

Now, we already found the following on this thread:


0.00115965__ = electron magnetic anomaly
0.00115869__ = muon / Z mass ratio


It gives a lepton/vector_boson mass ratio. Now, can a charge-less
particle precess in an EM field? What about light-by-light scattering via
charged virtual particles (vacuum polarization)?

The following we also found earlier on this thread:


0.0000063522 = muon g vacuum polarization terms.
0.0000063537 = electron / W mass ratio.


This gives another lepton/vector_boson mass ratio. The numbers fit very
well. It is remarkable that the properties of the muon and Z explain the
electron/W mass ratio, while in the first case it was the other way around:
The properties of the electron and W explained the muon/Z mass ratio...

And then the tau: where does this leave the tau lepton? If this is also a
precession/spin ratio, then it would need a significantly stronger coupling,
because its mass (= frequency) is 16.8183 times that of the muon.

Well, we naively use the (photon) diagrams of the magnetic anomaly
at first order to try to see how large this coupling should be (even though
the only particles which have similar propagators are the gluons):

[tex]\mbox{Anomaly}\ \ \ =\ \ \frac{\alpha_?}{2\pi} + ... [/tex]

This leads us to a new numerical coincidence:


0.1224 ________ = required coupling constant.
0.1216 (0.0017) = the coupling constant [itex]\alpha_s(m_Z)[/itex]


So, the coupling constant required to give the tau / Z mass ratio,
assuming massless propagators, leads us to the strong coupling
constant at mZ energies...

Now, "who ordered" the s of strong here? Well, at least the use of
massless (gluon) propagator diagrams fits ... :^)

Regards, Hans.

PS: See also http://arxiv.org/abs/hep-ph/0503104
(and http://arxiv.org/abs/hep-ph/0604035 for [itex]\alpha_s(m_Z)[/itex])
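
For reference, a short Python sketch reproducing the three coincidences of this post; the lepton and boson masses are the PDG-era values used elsewhere in the thread (assumed inputs):

Code:
import math

m_e, m_mu, m_tau = 0.51099892, 105.658369, 1776.99    # MeV
m_W, m_Z = 80425.0, 91187.6                           # MeV (old W world average, Z)

print(m_mu / m_Z)                 # ~0.00115869, vs the electron anomaly ~0.00115965
print(m_e / m_W)                  # ~0.0000063537, vs the muon vacuum polarization terms
print(2 * math.pi * m_tau / m_Z)  # ~0.1224, the "required coupling", vs alpha_s(m_Z) ~0.1216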
 
Last edited:
  • #283
I have sort of missed this entire thread, and I am not going to read through 11 pages, so I apologise if this point has already been made.

arivero said:
[tex]
\alpha^{-1/2}+ (1+{\alpha \over 2 \pi }) \alpha^{1/2}=e^{\pi^2 \over 4}
[/tex]

If this equation is fundamental (and I agree that it is rather a huge coincidence if it is not) then the obvious question is: Why should there be a relation for the asymptotic low energy value of alpha? Why not alpha(Mz) or alpha(MGUT)? The latter would seem to make more sense to me, but clearly it won't work with the formula.

In other words, usually we think of the most fundamental physics existing at high energies, but this is a low energy equation.

Incidentally, I have seen a similar but slightly different form:

[tex]\alpha = \Gamma^2 e^{-\pi^2/2}[/tex]

with
[tex]\Gamma = 1+\frac{\alpha}{(2\pi)^0} \left(1+\frac{\alpha}{(2\pi)^1} \left(1+\frac{\alpha}{(2\pi)^2} \left(1+ ... \right. \right. \right. [/tex]

This is not quite as nice, but also gives the right low energy alpha. Both of these equations can't be fundamental(?), so there has to be at least one coincidence.
 
Last edited:
  • #284
Severian said:
If this equation is fundamental (and I agree that it is rather a huge coincidence if it is not) then the obvious question is: Why should there be a relation for the asymptotic low energy value of alpha? Why not alpha(Mz) or alpha(MGUT)? The latter would seem to make more sense to me, but clearly it won't work with the formula.

In other words, usually we think of the most fundamental physics existing at high energies, but this is a low energy equation.
Hi, Severian.

I would expect such a formula to be much more complicated, depending
on everything that's out there in the vacuum, including the things
which are still hiding out there.

Looking at, say, the calculations of the magnetic anomaly of the muon,
it's always the low energy limit of alpha which is used, and the
vacuum polarization comes in from explicit terms defined in mass relations
like A2(mμ/me), A4(mτ/me). After that you get all the hadronic terms
and electroweak terms.

So a running alpha (which includes all these vacuum polarization terms)
is dependent on a complex function of all kinds of SM parameters like
the lepton mass ratios for the QED only part and getting much worse
for the hadronic contributions.

The point is that, on this thread we're looking for simple numerical
coincidences which just might have a physical origin. So, the shorter
the expression is, the better. Complex things generally lead to complex
expressions and we try to avoid them.
Severian said:
Incidentally, I have seen a similar but slightly different form:

[tex]\alpha = \Gamma^2 e^{-\pi^2/2}[/tex]

with
[tex]\Gamma = 1+\frac{\alpha}{(2\pi)^0} \left(1+\frac{\alpha}{(2\pi)^1} \left(1+\frac{\alpha}{(2\pi)^2} \left(1+ ... \right. \right. \right. [/tex]

This is not quite as nice, but also gives the right low energy alpha. Both of these equations can't be fundamental(?), so there has to be at least one coincidence.

Both formulas are from here :smile: The second one was an attempt to
extend the first one into a series; the first expression is just the first 3
terms of the series.

Regards, Hans
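
As a numerical cross-check of the two expressions (a sketch added in editing; the CODATA value of alpha is assumed as reference), both can be evaluated in a few lines:

Code:
import math

alpha = 7.2973525693e-3              # CODATA fine structure constant, assumed reference

# First form: alpha^(-1/2) + (1 + alpha/2pi) alpha^(1/2) should equal e^(pi^2/4)
lhs = alpha**-0.5 + (1 + alpha / (2 * math.pi)) * alpha**0.5
print(lhs, math.exp(math.pi**2 / 4))   # both ~11.7918, agreeing to about six figures

# Second form: iterate alpha = Gamma(alpha)^2 e^(-pi^2/2), truncating the nested
# series after three levels as written above, until it reaches a fixed point.
a = 0.007
for _ in range(50):
    gamma = 1 + a * (1 + a / (2 * math.pi) * (1 + a / (2 * math.pi) ** 2))
    a = gamma**2 * math.exp(-math.pi**2 / 2)
print(1 / a)                           # lands near the measured 1/alpha ~ 137.036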
 
Last edited:
  • #285
Hans de Vries said:
Looking at, say, the calculations of the magnetic anomaly of the muon,
it's always the low energy limit of alpha which is used, and the
vacuum polarization comes in from explicit terms defined in mass relations
like A2(mμ/me), A4(mτ/me). After that you get all the hadronic terms
and electroweak terms.

But there is a good reason that the alpha used in the magnetic moment of the muon (or the electron for that matter) is 1/137. It is a low energy observable. So it is really not the same thing as an equation that is supposed to derive alpha itself.

So a running alpha (which includes all these vacuum polarization terms)
is dependent on a complex function of all kinds of SM parameters like
the lepton mass ratios for the QED only part and getting much worse
for the hadronic contributions.

My theoretical prejudice would be that the number at high energies would have a simple form, while that at low energies was simply obtained by running to the low scale. The low energy value would then be the one which contains all the messy corrections. So I would be happier if you could reproduce an alpha at the GUT scale (or perhaps the Planck scale) which would provide the low energy value after running.

The point is that, on this thread we're looking for simple numerical
coincidences which just might have a physical origin. So, the shorter
the expression is, the better. Complex things generally lead to complex
expressions and we try to avoid them.

Fair enough. The coincidence I find most intriguing is the match between the Higgs vev and the top mass. Or why, to current experimental accuracy, is the top Yukawa exactly 1?

I see now that the two equations I quoted are the same (with one truncated). Silly me! Though I saw it on a different site (I will have a look for the link).
 
  • #286
Severian said:
My theoretical prejudice would be that the number at high energies would have a simple form, while that at low energies was simply obtained by running to the low scale.

...


Fair enough. The coincidence I find most intriguing is the match between the Higgs vev and the top mass. Or why, to current experimental accuracy, is the top Yukawa exactly 1?


Yes, in a way this thread defies the current prejudice by suggesting that it is possible to find relationships in the asymptotic low energy limit. This could be explained in several ways:

- For mass relationships, it could be that the high energy GUT masses are zero, so that the mass is generated radiatively at low energy. This was tried many times in the seventies to get the electron mass out of the muon mass, but was abandoned after (or because of) the third generation.

- For coupling constants, some eigenvalue of the renormalisation group flow could be reached when running to low energy, a sort of universality. Adler made some speculative attempts in this direction.

- For relationships between coupling constants, it could be related to the symmetry breaking mechanism. Meaning that the symmetry breaking mechanism triggers at an energy scale where the coupling constants meet some algebraic relationship. I have no idea of a mechanism of this kind.

Of course, we could also consider that our prejudice about high energy GUTs is just that, a prejudice. For instance, Koide's relationship, relating mass quotients across the three generations, seems hard to marry with a radiative generation principle.
 
  • #287
Hans de Vries said:
So, the coupling constant required to give the tau / Z mass ratio,
assuming massless propagators, leads us to the strong coupling
constant at mZ energies...
Do you mean alpha_s/2pi = mtau/mZ, approximately? Hmm, the only thing I am worried about is the experimental value of the strong coupling, a fuzzy business.

Hans de Vries said:
Now, "who ordered" the s of strong here? Well, at least the use of
massless (gluon) propagator diagrams fits ... :^)

Actually we already have an uninvited apparition of alpha_s in the relationship between Z0 decay and pi0 decay (somewhere in the middle of the long, long thread). It is hidden because we speak of the "pion decay constant", but this pion decay constant is actually a sum of radiative corrections coming from the strong force.
 
  • #288
I see. In other words, you are saying that the relationship between the tau and muon masses is like the one between the electromagnetic and strong coupling constants. Gsponer made a related observation to me some time ago: if we drove the electron mass to zero, the Koide relationship would imply a quotient between tau and muon, and this quotient is about the same magnitude as the nuclear strong force (the pion-mediated force between nucleons). Now, taking the electron mass to zero with a fixed muon mass should be equivalent to the electromagnetic coupling going to zero. Hmm.
 
Last edited:
  • #289
arivero said:
Actually we already have an uninvited apparition of alpha_s.

One might blame it on the infamous ninth gluon... :^), the white/anti-white one,
sometimes speculated to be the photon (which would leave it with the
wrong coupling constant).

Regards, Hans
 
  • #291
Ganzfeld said:
For example...the combined mass of the particles of the meson octet is 3.14006 times the proton mass. Pretty close to pi. Amazingly, the combined mass of the particles in the baryon octet is around 9.8 times the proton mass. Close to pi squared.
Let me evaluate these quantities because, as we know, these multiplets are at the heart of SU(3) flavour mass formulae.

The meson octet is [tex] \pi^0 \pi^+ \pi^- K^0 \bar K^0 K^+ K^- \eta_8[/tex]. The [itex]\eta_8[/itex] mixes with the singlet to produce [itex]\eta[/itex] and [itex]\eta'[/itex], which are the actually measured masses, and we can use [itex]\eta[/itex] if we think the mixing is very small. But we could also consider the full (non-irreducible) nonet, or the extended 16-plet with the charmed particles.

The baryon octet is [tex]p\ n\ \Lambda\ \Sigma^0 \Sigma^+ \Sigma^- \Xi^- \Xi^0[/tex]. No mixing issue here, but the [itex]\Sigma^0[/itex] is a very fascinating piece of physics in itself, particularly its decay mechanism. As for adding charm, we could, going then to a 20-plet.

The figures are drawn in http://pdg.lbl.gov/2006/reviews/quarkmodrpp.pdf

Now the meson sum is

134.9766+139.57018+139.57018+547.51+497.648+497.648+493.677+493.677
=2944.277

2944.27696/938.27203 = 3.1379, which is 0.12% off from pi and thus qualifies for the thread. It misses pi by about 3.5 MeV in the sum, so the error bars cannot be blamed; perhaps the mixing can.

For the baryons, 938.27203+939.56536+1115.683+1192.642+1189.37+1197.449+1321.31+1314.83 = 9209.121, which is 9.815 M_p, while pi squared is about 9.87. So this one is off by 0.55%, but there is no mixing to blame here.
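
For convenience, a small Python sketch redoing these sums with the same PDG values (a check added in editing):

Code:
import math

m_p = 938.27203
mesons = [134.9766, 139.57018, 139.57018, 547.51, 497.648, 497.648, 493.677, 493.677]
baryons = [938.27203, 939.56536, 1115.683, 1192.642, 1189.37, 1197.449, 1321.31, 1314.83]

s_m, s_b = sum(mesons), sum(baryons)
print(s_m, s_m / m_p, 100 * (s_m / m_p / math.pi - 1))      # 2944.28, 3.1379, ~ -0.12 %
print(s_b, s_b / m_p, 100 * (s_b / m_p / math.pi**2 - 1))   # 9209.12, 9.815, ~ -0.55 %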
 
  • #292
Numerology

I do not know if somebody already said that, but I suspect that it is possible to write a computer program that will find a "numerical coincidence" (up to a reasonably specified accuracy) between any two (or more) specified numbers. Essentially, such a program tries various algebraic relations and combinations of small integer numbers and constants such as "pi" and "e", until it finds a "good" one. :yuck:
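
Such a search is indeed easy to sketch. The toy Python below is purely illustrative (the expression grammar, the constants tried, and the target number are arbitrary choices made here, not anything proposed in the thread); it already rediscovers the classic 6*pi^5 approximation to the proton/electron mass ratio mentioned later in the thread:

Code:
import itertools, math

target = 1836.15267                          # e.g. the proton/electron mass ratio
constants = {"pi": math.pi, "e": math.e, "2": 2.0, "3": 3.0, "6": 6.0}

candidates = []
for (n1, v1), (n2, v2) in itertools.product(constants.items(), repeat=2):
    for p in range(1, 7):
        for name, value in ((f"{n1}*{n2}^{p}", v1 * v2**p),
                            (f"{n1}^{p}/{n2}", v1**p / v2)):
            candidates.append((abs(value / target - 1), name, value))

for err, name, value in sorted(candidates)[:5]:
    print(f"{name:10s} = {value:10.3f}   off by {100 * err:.4f}%")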
 
  • #293
Demystifier said:
I do not know if somebody already said that, but I suspect that it is possible to write a computer program that will find a "numerical coincidence" (up to a reasonably specified accuracy) between any two (or more) specified numbers. Essentially, such a program tries various algebraic relations and combinations of small integer numbers and constants such as "pi" and "e", until it finds a "good" one. :yuck:

Indeed this program has been written, and it is quoted somewhere in the middle of the thread; even the output is available in a web page, sorted by decimal ordering. It was done by a couple of computer scientists and the motivation was to try to define some kind of complexity measure for an algebraic expression. Another researcher, I. J. Good, tried to use the same approach to single out "low entropy" expressions. Such GIGO (measure the garbage in to determine the garbage out) methods usually trip on the electron/proton quotient :cry:

In some sense a problem with these programs is that they concentrate on algebra instead of geometry (not to speak of dynamics). So [tex]6 \pi^5 [/tex] is reported as "less complex" than, for instance, [tex]e^\pi - {1 \over e^{\pi}}[/tex].

Let me add that the rings generated by the rationals plus some finite set of irrational numbers are a very prolific field of study in algebra. Still, their truncation to some n-digit decimal expansion is not much studied as far as I know; a friend of mine, J. Clemente, tried some time ago to work out the algebraic setting of IEEE "real" numbers and we did not find much bibliography on it.
 
Last edited:
  • #294
I am really happy to know that such a program has been made.
Thanks arivero. :smile:
 
  • #295
Via an old comment of Baez in Woit's blog a year ago, I note the conjecture at http://www.nbi.dk/~predrag/papers/finitness.html about the perturbative expansion of QED. In its weak form, it claims that the growth of diagrams is not combinatorial. In its strong form, it seems to claim that the coefficients are of order unity, a point that Kinoshita considers refuted after the 8th-order calculation.

Lovers of gossip and theorists of science may also like to check the remarks at
http://www.nbi.dk/~predrag/papers/g-2.html
http://www.nbi.dk/~predrag/papers/DFS_pris.ps.gz

I do not know what to make of his "social experience". I remember, as an undergraduate student, how excited I was about the articles of Cvitanovic on chaos theory, and how some time later I was not so fond of them. But at least he managed to bail out into another physics area. On the other hand, 20 years later, Kinoshita (could someone edit that name out of the politically correct spelling rules of PF, please!) is the leader of the perturbative calculation effort, and it is because of him, perhaps, that the (g-2) is still an important test of the standard model.

And yes, the 8th-order term nowadays looks very much like -sqrt(3). The 6th-order term is now 1.181241...; it was already 1.195(26) in the 1974 paper; the 0.922(24) refers to a particular subset, see http://www.nbi.dk/~predrag/papers/PRD10-74-III.pdf
 
Last edited:
  • #296
On sociology, it is sad to think that a "pet theory" deprived QED of one of its most dedicated calculators. Now consider whether the observations of Hans about g-2 could attract some interest, or, more probably, just excite old memories of conflict. :frown:
 
  • #297
Did we include this one already?
http://federation.g3z.com/Physics/#MassCharge

I can't give you the name of the author as I don't know it. Mark Hopkins maybe?
 
Last edited by a moderator:
  • #298
CarlB said:
Did we include this one already?
http://federation.g3z.com/Physics/#MassCharge

I can't give you the name of the author as I don't know it. Mark Hopkins maybe?


It refers to Yablon, and some of the mass formulae have been discussed in usenet news.
 
Last edited by a moderator:
  • #299
arivero said:
It refers to Yablon, and some of the mass formulae have been discussed in usenet news.

Good to see you posting!

We should compare the recently improved W mass with the results of the
thread. The new world average for the W mass is now 80.398 (25) GeV.
http://www.interactions.org/cms/?pid=1024834
The value for the Z mass is still 91.1876 (21) GeV as far as I know.
We had two numerical coincidences for the mW/mZ mass ratio on this thread:

0.881418559878 ___ from the spin half / spin one ratio
0.881373587019 ___ the value arcsinh(1)

Using the more precise value of Z we can get values for the W mass:

Code:
80.374  ( 2)    Derived W mass from spin half / spin one ratio
80.370  ( 2)    Derived W mass from arcsinh(1) 
80.376  (19)    Experimental W mass: from sW on page 8 of hep-ph/0604035 
80.398  (25)    Experimental W mass: New world average
80.425  (38)    Experimental W mass: Old world average

This is certainly an improvement, both for the values and for the sigmas,
which are around 1 for both now (mid-value differences: 0.030% and 0.035%).

The last value was discussed here:
https://www.physicsforums.com/showpost.php?p=958122&postcount=202

Regards, Hans
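
For reference, the derived values in the table follow from one multiplication; here is a small Python sketch of it (a check added in editing, using the masses quoted in this post):

Code:
import math

m_Z = 91.1876                                # GeV
ratios = {"spin half / spin one": 0.881418559878,
          "arcsinh(1)": math.asinh(1.0)}     # ln(1 + sqrt(2)) = 0.881373587...

m_W_new, dm_W_new = 80.398, 0.025            # new world average
for label, r in ratios.items():
    m_W = r * m_Z
    print(f"{label:22s} -> m_W = {m_W:.3f} GeV, "
          f"{(m_W_new - m_W) / dm_W_new:+.1f} sigma from the new average")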
 
Last edited:
  • #300
Hans de Vries said:
Good to see you posting!

Hi! Instead of a sabbatical I got an increased workload, so I read the blogs and forums but I do not calculate :uhh:

In fact I had not checked the new values. So, before, the 1-sigma low point was 80.387 and now it is 80.377, so the results are better than in 2004. No surprise, as the world average is calculated with patterns similar to hep-ph/0604035. This means that the coincidence is here to stay, unexplained or not. Any future deviation could be covered by radiative corrections, if the relation comes from a fundamental theory.
 
  • #302
Nice talk, and very good slides. They give the talking points along with the images.

This past weekend I was looking for (one of my many alleged) copies of Georgi's book. I didn't find it.

The objective was to tie down the 3x3 circulant matrices as an example of an SU(3) symmetry. I managed to make some progress.

With the usual representation of SU(3), the canonical particles are the eigenvectors of the diagonalized operators. Turning these into density operators (like I always do), the states are:
[tex]\left(\begin{array}{ccc}1&&\\&0&\\&&0\end{array}\right)[/tex]
[tex]\left(\begin{array}{ccc}0&&\\&1&\\&&0\end{array}\right)[/tex]
[tex]\left(\begin{array}{ccc}0&&\\&0&\\&&1\end{array}\right)[/tex]

These are primitive idempotents. Call the associated spinors |1>, |2>, and |3>. Their eigenvalues are (+1,+1), (+1,-1), and (-2,0). We want to map these into the circulant primitive idempotents:

[tex]\frac{1}{3}\left(\begin{array}{ccc}1&w^{+n}&w^{-n}\\w^{-n}&1&w^{+n}\\w^{+n}&w^{-n}&1\end{array}\right)[/tex]
where [tex]w = \exp(2i\pi/3)[/tex] and n=1, 2, and 3. Call the three associated spinors |R>, |G>, |B>.

The mapping is then given by S = |R><1| + |G><2| + |B><3| and its inverse, and one takes

[tex]a \rightarrow S a S^{-1}[/tex]

where a is any of the 8 generators of SU(3).

On doing this, one finds that, sure enough, the diagonalized (commuting) SU(3) generators become commuting circulant matrices. And since the trace is conserved by the S mapping, the diagonal terms are zero. What one ends up with for the commuting circulant generators of SU(3) are:

[tex]\left(\begin{array}{ccc}0&1&1\\1&0&1\\1&1&0\end{array}\right)[/tex]
[tex]\left(\begin{array}{ccc}0&+i&-i\\-i&0&+i\\+i&-i&0\end{array}\right)[/tex]

where I have left off some unimportant multiplications by constants. Note that the above are Hermitian and circulant. The other generators of SU(3) end up non circulant, just as in the usual representation of SU(3) they are non diagonal.
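
A short numerical check of this construction (a numpy sketch added in editing; the spinor normalisation 1/sqrt(3) and the phase convention for |R>, |G>, |B> are the obvious choices, assumed here rather than taken from the post):

Code:
import numpy as np

w = np.exp(2j * np.pi / 3)

# Spinors |R>, |G>, |B>: the circulant idempotents are |v_n><v_n| for n = 1, 2, 3.
vecs = [np.array([1, w**-n, w**n]) / np.sqrt(3) for n in (1, 2, 3)]
for v in vecs:
    P = np.outer(v, v.conj())
    assert np.allclose(P @ P, P) and np.isclose(np.trace(P), 1)   # primitive idempotent

S = np.column_stack(vecs)                 # S = |R><1| + |G><2| + |B><3|, unitary
lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3)
for a in (lam3, lam8):
    print(np.round(S @ a @ S.conj().T, 3))    # circulant, zero diagonal, up to constants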

Carl
 
  • #303
Alejandro, I've been looking at how one would apply Koide's mass formula to the mesons and baryons. There are some really cool things you can do with resonances, but the mass measurements of resonances are not that accurate so the statistics are not terribly convincing. However, the masses of the neutron and proton are very carefully measured so I will talk about them now.

In rewriting Koide's formula into eigenvector form, you may recall that there was an overall scale factor. It was equal to the average of the square roots of the charged lepton masses:

[tex]\begin{array}{ccc}
\textrm{Particle}&\textrm{Mass (eV)}&\sqrt{\textrm{eV}}\\
\textrm{Electron}& 510998.91& 714.84187762049867\\
\textrm{muon}&105658369.2&10279.02569312870236\\
\textrm{tauon}&1776990000&42154.35920518778328\\
\end{array}[/tex]

The sum of square roots is 53148.22677593698431 and so the average square root charged lepton mass is: 17716.07559197899477.

In my version of preons, the "average root" is the contribution of the valence preons to the amplitude of a mass interaction in the charged leptons. The mass comes about by squaring the amplitude. The rest of the amplitude comes from the sea preons, and can be negative or positive. The sea preon amplitude is sqrt(1/2) times the valence preon amplitude, but there are twice as many sea preons, hence the sqrt(2) in the Koide formula.

The idea is that the electron is light not because it is made of parts that are light (compared to the muon and tau), but instead because the sea and valence contributions cancel almost completely. And the neutrinos are unnaturally light because their mass is coupled through 11 stages of sterile neutrinos. The sterile neutrinos aren't seen because they don't interact weakly (or strongly), but their light weight is not an indication that the particles they are made of have a weight different from the ones making up the charged leptons. Instead, the charged and neutral leptons are just orthogonal primitive idempotents (somehow).

Mass is additive, so when we compute masses based on objects interacting with forces no greater than the strong force, we have to add together masses, not square roots of masses. Accordingly, to get a generic preon mass, we square the above average square root to get:
313859334.4 eV = 313.859334 MeV.

There are three quarks in a baryon. Surprisingly, tripling the above mass gives a number remarkably close to the neutron and proton masses: 941.5780031 MeV.

From a year ago, I suspect that the neutrinos are lighter because of 11 sterile species of neutrinos, and this causes a 3^11 ratio in the average square root mass of the charged versus neutral leptons. I found that power of 3 by writing out the ratio in base 3. In base 3, the ratio 177082.0 works out to be 22222220121.0 (base 3) a number which is very close to 100000000000.0 (base 3), hence the 3^11 ratio.

The reason for looking for this is that one suspects that the natural probability in a system of three preons is 1/3, and that given several different ways that something can happen, one expects that one will pick up powers of this probability the same way that one picks up powers of alpha in QED. That is, tree level diagrams will contribute a certain negative power of 3, and the next level diagrams will be some power of 3 smaller, depending on the number of probabilities that have to be picked up.

Applying this sort of reasoning to the neutron and proton masses, I find the following odd coincidences:

(Neutron - Proton)/Proton
(939565360 - 938272029)/938272029 = 0.001378418
= .0000010000101122101 (base 3)

(Neutron - Proton)/Neutron
(939565360 - 938272029)/939565360 = 0.001376520
= .0000010000021121200 (base 3)

(Neutron - Proton)/3xPreon
(939565360 - 938272029)/941578003 = 0.001373578
= .0000010000002221000 (base 3)

To compare these numbers, let us write them together. I've grouped digits into sets of six to show the structure better. Note that the actual number of significant digits is a couple less than shown, but I haven't worked out the details over variation in experimental measurement of the input masses:

[tex]
\begin{array}{ccccc}
.000001 &000010 &112210 &1 &\textrm{/Proton}\\
.000001 &000002 &112120 &0 &\textrm{/Neutron}\\
.000001 &000000 &222100 &0 &\textrm{/3xPreon}\end{array}
[/tex]

Putting the probability p = 1/3, all the above have a leading term of [tex]p^6[/tex], and the next leading terms are of order [tex]p^{12}[/tex]. Of the three, note that the division by 3xPreon is the simplest: in the third group it has the sequence 222100, which suggests the base 3 number 222,222 ~ 1,000,000. That the preon mass division gives the simplest form for the mass is what we would expect if the Preon is to be the simplest and most basic quantum of mass.

Cleaning up the above, we can guess that the first three terms are:

[tex]3(p^6 + p^{12} - N p^{18})[/tex]

That is, the difference between the proton and neutron masses works out to be something involving three diagrams with 6 vertices at first order, three diagrams with 12 vertices at second order, and possibly some multiple of three diagrams each with 18 vertices at third order. The reason for there being multiples of three diagrams at each order is RGB symmetry.

Other than this, I've found more amazing things with the masses of other mesons and baryons, specifically repetitions of the mysterious 0.22222204717 number, but the above mass formulas are easily more important.

The AMU data for all but the tau mass are more accurate than the eV data shown above. And you can presumably compute the tau mass from the Koide formula. That means that you can redo the above computations with AMU data and get more accurate numbers.

What this gives is two levels of mass splitting in the baryons. The first is a sort of fine-preon splitting with a mass of Preon/3^5 = 1.291602 MeV, and the next level of mass splitting has a value of Preon/3^11 = 1771 eV = .001771 MeV. Of course the first level of mass is just the preon mass itself.

The objective is to classify as many as possible of the several thousand mesons, baryons, and resonances according to these sorts of mass splittings, and then look for patterns. In addition to the mass splittings, there is also a lot of information about phase angles that will eventually be useful, but I think that getting a good classification of the mesons and baryons by this sort of mass splitting will be a useful thing. One then looks for patterns in the relation between the quantum numbers of the states and the splitting counts.

Summing up, the mass formula is that the baryon masses are approximately

[tex]m_{N,P} = (1 + O(3^{-6}))(\sqrt{m_e}+\sqrt{m_\mu} + \sqrt{m_\tau})^2/3[/tex]

and their splitting is approximately

[tex]m_N - m_P = (3^{-6} + O(3^{-12}))(\sqrt{m_e}+\sqrt{m_\mu} + \sqrt{m_\tau})^2/3[/tex]
 
Last edited:
  • #304
The analysis starts more crackpotty than usual...
17716.07559197899477, good heavens! It doesn't mean anything; all these digits are under the experimental error. Fortunately you do not use them in the real discussion, but I was very tempted to stop reading here and I am sure some people have.

The main point in the first part is that you suggest the constituent mass of a u or d quark is near
[tex]
({\sqrt m_e + \sqrt m_\mu + \sqrt m_\tau \over 3})^2
[/tex]
and so the proton and neutron masses, having three such quarks, are near

[tex]
({\sqrt m_e + \sqrt m_\mu + \sqrt m_\tau})^2 \over 3
[/tex]

Ok, it could be. Or it could simply reflect the already mysterious fact of having the mass of the tau in the GeV range.

The second part of your posting is your quest for powers of 3. There, the last of the relationships,
(939565360 - 938272029)/941578003 = 0.001373578,
gets the impact of the starting mistake... 941578003 is a fake; the four or five least significant digits are below the experimental error and therefore meaningless. Amusingly, if you put the experimental error in, then you need to reject the last two or three digits of 0.001373578, and your fit is more impressive.

The moral: control the numbers against the errors.

Ah, note that using Koide, we have

[tex]
{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2 \over 3 }
=
{ m_e + m_\mu + m_\tau \over 2 }
[/tex]
So a related coincidence is that the average of the muon and tau masses is near the proton mass. Or [itex]m_\mu + m_\tau \approx 12 \pi^5 m_e[/itex]...
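
For reference, the numbers in the last two posts can be reproduced in a few lines (a check added in editing; lepton masses in MeV as quoted earlier in the thread):

Code:
import math

m_e, m_mu, m_tau = 0.51099891, 105.6583692, 1776.99    # MeV
m_p, m_n = 938.27203, 939.56536

koide_scale = (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau))**2 / 3
print(koide_scale)                      # ~941.58 MeV, within ~0.4% of m_p and m_n
print((m_e + m_mu + m_tau) / 2)         # the same number if Koide holds exactly
print((m_mu + m_tau) / (12 * math.pi**5 * m_e))   # ~1.003, the 12*pi^5 remark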
 
Last edited:
  • #305
arivero said:
The analysis starts more crackpotty than usual.

Well, this is hot off the press. There are thousands of mesons, baryons and resonances so I've got many many hours of effort left on this. One of the things I'm putting together is a Java calculator to make the calculations automatic. What you see above are the first calculations from that calculator.

arivero said:
The second part of your posting is your quest for powers of 3. There, the last of the relationships,
(939565360 - 938272029)/941578003 = 0.001373578,
gets the impact of the starting mistake... 941578003 is a fake; the four or five least significant digits are below the experimental error and therefore meaningless.

I could have screwed this up. The objective is to use the Koide relation to eliminate the need to use the tau mass data. In the original calculation, I used the AMU figures from my MASSES2 paper (equation 17), using this technique. For this, the scale factor is:

[tex]\mu_1 = 0.5804642012(71)[/tex]

which is a little less than 8 digits of accuracy for the 941578003 number. Note that this is an AMU accuracy, which is 15x as accurate as the eV number. The AMU data at the PDG is more accurate than the eV because the measurements are made in AMU and then converted to eV. So the error is limited by the accuracy of the conversion. However, this conversion error should cancel as all the above calculations were done with eV data that were converted from AMU measurements (presumably the PDG uses the same conversion ratio for their best guess). (Provided I avoided the tau mass number.)

In fact, the first time I made these calculations was by hand. I used the AMU data, of course. I was so shocked at the result that I went back and wrote up some Java code to assist in the calculation, and as a check I redid them with eV data instead of AMU. The result was substantially the same.

Let me redo it with the AMU data the right way and show it here as an edit in the next hour or two:

[edit]

Proton and neutron masses in AMU:

[tex]m_P = 1.00727646688(13)[/tex]
[tex]m_N = 1.00866491560(55)[/tex]

[tex]m_N-m_p = 0.00138844872(68)[/tex] (Error is 5 x 10^-7 )

From the MASSES2 paper calculation:

[tex]m_L = [0.5804642012(71)]^2 = 0.3369386889(83) [/tex] (Error is 2.5 x 10^-8)

Therefore, the ratio has an overall error of 5 x 10^-7 and we have:

[tex](m_N-m_P)/m_L = 0.0041207755(20)[/tex]

or

[tex]0.0041207735 < (m_N-m_P)/m_L < 0.0041207775[/tex]

If you're going to convert numbers to base 3 by hand, I suggest first converting them to base 9, and then taking each digit and converting it to base 3. To check my work, you should get the following:

[tex]\begin{array}{cccc}
0.0041207735 &=& 0.0030028471 &(base 9)\\
0.0041207775 &=& 0.0030028486 &(base 9)
\end{array}[/tex]

[tex]\begin{array}{cccc}
0.0041207735 &=& 0.00001000000222112101 &(base 3)\\
0.0041207775 &=& 0.00001000000222112220 &(base 3)
\end{array}[/tex]

Now I'm claiming that [tex]3^6 = 729[/tex] is the correct equivalent to the fine structure constant here. So it makes sense to convert the above into digit groups of six tribits:

[tex]\begin{array}{ccccccc}
0.0041207735 &=& 0.000010 &000002 &221121 &01 &(base 3)\\
0.0041207775 &=& 0.000010 &000002 &221122 &20 &(base 3)
\end{array}[/tex]

To write this as 3 times a sum in base 729, we have:

[tex]0.0041207745(1) = 3 (3^{-6} +3^{-12} - 12.5 \times 3^{-18} ).[/tex]

In the above, the "12.5" was chosen to get the number in the range above. I.e., in the earlier calculation, this is the [tex]O(3^{-18})[/tex] figure. It is only 1.7 percent of the next higher term, so it seems likely that once I understand the sequence, I can make a calculation that will give this term.

You may possibly recall that my 8-digit predictions for the neutrino masses were based on the assumption that a factor of [tex]3^{12}[/tex] was involved. I realize that this looked pretty cracked at the time. I didn't make much effort to publish that paper because I knew that it would look pretty insane. What can I say, there are things one learns by doing that cannot be easily explained to others. And I didn't want to waste my time on a "gee whiz, look at this unexplainable coincidence" paper just to see it in print.

There are a lot of things in my Clifford algebra calculations that I've never explained to you. As far as I can tell from my DNS logs, very few people have even downloaded all my papers on Clifford algebra. If people were reading them, they'd be pointing out typos or asking for clarification. No one is doing this, but even if they did, there are a lot of other results that are not available on the net. Perhaps my intuition is good, perhaps it is not; that is for time to tell. But before you reject this as another coincidence involving BIG powers of 3, you should consider the possibility that I am still sitting on a lot of information I haven't explained to you.

Carl
 
Last edited:
  • #306
CarlB said:
Proton and neutron masses in AMU:

[tex]m_P = 1.00727646688(13)[/tex]
[tex]m_N = 1.00866491560(55)[/tex]

[tex]m_N-m_p = 0.00138844872(68)[/tex] (Error is 5 x 10^-7 )

From the MASSES2 paper calculation:

[tex]m_L = [0.5804642012(71)]^2 = 0.3369386889(83) [/tex] (Error is 2.5 x 10^-8)

Ok here is my objection: m_P and m_N have experimental errors, but m_L is a calculation. The error in m_L if you take it as experimental is dominated by the error in the mass of the tau, of order 10^-4, and you are taking it to be 10^-8 because you are using the calculated prediction from Koide instead of the experimental value.

In the first post you listed the experimental mass of the tau, not the Koide prediction, and you caused me to switch to fast reading mode (*). But perhaps you really need to use the prediction and to take only the electron and muon as inputs. It is OK to do it; it is only that in the first post you were not doing it, or not telling us about it. The second post is a lot better.

(*) and in fact I missed that you were already explaining this detail in the second posting :frown:
 
Last edited:
  • #307
But even if the third triplet of your decimals (er, not 10-cimals, but 3-cimals ... tricimals?) happens to be just noise, the main fit is interesting. You connect the masses of the leptons to the constituent mass of the quarks, or to the whole mass of the proton if you wish.

[tex]
{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2 \over 3 }
=
{ m_e + m_\mu + m_\tau \over 2 }
= m_{p,n} \approx 0.5\, m_D[/tex]

EDITED: Furthermore, the (electromagnetic??) mass difference between a neutral pion (quarks with attractive electric charges) and a charged pion (repulsive) is about 4.6 MeV, so it is plausible that your difference m_p - m_L comes from the electromagnetic binding, and that a "pure QCD" baryon would have a mass of exactly m_L. I set a question mark because some mass differences can be said to come from the difference of the quark masses (still assuming no preonic quarks here).
 
Last edited:
  • #308
First, a few more interesting coincidences. When the fine structure constant is used, the first order change to energy is proportional to alpha^2. This turns out to be very close to [tex]27^{-3}[/tex]. I would prefer to write this as [tex]27 \times 729^{-2}[/tex], where 729 is the sixth power of 3 (so 729^2 = 3^12).

The gravitational coupling constant and the weak coupling constant both have a factor of M^2. If one divides the gravitational coupling constant by 4pi, as is done in the first chapter of the recent textbook "High Energy Physics" by Perkins:
https://www.amazon.com/dp/0521621968/?tag=pfamazon01-20
then the gravitational coupling constant is [tex]5.33 \times 10^{-40}[/tex]. The Fermi coupling constant is already divided this way; it is [tex]1.16637(1)\times 10^{-5}[/tex],
see http://pdg.lbl.gov/2006/reviews/consrpp.pdf

These two coupling constants have the pure number ratio of [tex]2.188\times 10^{+34} = 729^{11.99}[/tex], very close to an exact power of 729.

So the weak and gravitational coupling constants are related by a power of three (to first order in 1/27). To relate these to the fine structure constant, we must pick an energy or mass. Of course it is possible to choose a mass that makes all three coupling constants be related by powers of three. What would that mass be? Well, since there is a division by M^2, one has different scales that one could choose for this. But a natural scale is the mass of the proton / neutron. And this turns out to work, at least to first order in 1/27.
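
A quick Python check of these ratios, taking the two coupling constants exactly as quoted above (a sketch added in editing):

Code:
import math

alpha = 1 / 137.035999
G_grav = 5.33e-40                     # gravitational coupling / 4pi, as quoted
G_F = 1.16637e-5                      # Fermi coupling constant

print(alpha**2, 27 * 3.0**-12)        # ~5.33e-5 vs ~5.08e-5: the 27 x 729^-2 claim
ratio = G_F / G_grav
print(ratio, math.log(ratio, 729))    # ~2.19e34, exponent ~12.0 in base 729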

I know that what I've written is going to be interpreted as just more unimportant coincidences from the forklift driver, but to explain the theory behind this requires more time than my readers have available. If you want to get started understanding my solution of the problem of how one combines quantum states to produce bound states you will just have to begin reading my book on the subject of the density operator formalism of quantum mechanics, http://brannenworks.com/dmaa.pdf

The above book is not up to date. I will know when someone is reading it because they will generate many dozens of questions, requests for clarification, and complaints about errors; I will then start writing more. I see none of these, so I know that I have about a 3 month head start on the rest of the planet - as it is now, I'm the only person who understands how to apply primitive idempotents to elementary particle theory.

The new theory uses a sort of Feynman diagrams in a sort of perturbation theory. Instead of having incoming fermions treated as point objects, they are treated as bundles of six preons. (This is derived from very simple principles in the above book.) There is only one coupling constant, it is 1/3. To properly count diagrams, one must understand that all the particles must be dressed. That is, the bare propagators of the usual theory are dressed composite propagators in this theory, so one must dress the incoming and outgoing fermion propagator bundles to make them identical in form to a free fermion propagator bundle.

There are 27 tree level diagrams that contribute to [tex]\alpha^2[/tex]. In these diagrams, there are three places where an arbitrary color phase can be chosen. The choices of color phase are red, green, and blue, and this gives the 27 = 3x3x3 diagrams. These tree level diagrams each have 12 vertices, and each contributes an overall probability of 1/3. The result is that [tex]\alpha^2[/tex] is given to first order by [tex]27\times 3^{-12}[/tex].

Carl
 
Last edited by a moderator:
  • #309
CarlB said:
The gravitational coupling constant and the weak coupling constant both have a factor of M^2.

This is one of my motivations to be against natural units :!) : At the end of the day, the Fermi coupling is short range (a sort of Dirac delta, or a 3-dimensional 1/r^3 potential) while the gravitational coupling is long range. But in natural units they seem the same.
 
Last edited:
  • #310
They will find my sites of townships -- not the cities that I set there.
They will rediscover rivers -- not my rivers heard at night.
By my own old marks and bearings they will show me how to get there,
By the lonely cairns I builded they will guide my feet aright.


Kipling
 
  • #311
Kea said:
They will find my sites of townships -- not the cities that I set there.
They will rediscover rivers -- not my rivers heard at night.
By my own old marks and bearings they will show me how to get there,
By the lonely cairns I builded they will guide my feet aright.


Kipling

Well I'm a little more optimistic than that. I think that they will eventually get quite stuck and will be forced to more carefully examine the compass. But until then, there is plenty to do in examining the promised land (that the compass pointed to). The numerical coincidences are fun to find, but dealing with powers of three is a pain unless you happen to have a base-N calculator like this one:
http://www.measurementalgebra.com/CarlCalc.html

I think it's rather clunky and will be improving it to a more efficient model soon. The source Java code is here:
http://www.measurementalgebra.com/Calc_Top.java

[edit]First improvement: The "accuracy" of some of the inverse trig and hyperbolic functions has been "improved".[/edit]

Carl
 
Last edited:
  • #312
03 14 07

Hello Carl:
I found this on a thread on Kea's blog. Nice. I share Alejandro's thoughts on accuracies in your approximations. I don't, however, knock the idea of looking for meaning in powers of three. To be honest, there is a beauty in that approach. Ultimately, just check your significant figures such that your truncation errors can be reduced to nada!

I havta say that natural units always drive me crazy because I must remember from whose perspective they are natural. hehehehehe I wasn't able to read the whole thread, so I was simply wondering about the approximation for the fine structure constant you listed above.

When you recommend changing from base ten to base nine to base three, I think with each transformation you are losing accuracy in the numerical approximation. This introduces systematic error into your calculations. Why not use the summation convention generated by a geometric series, no matter which base you are in? Then you are limited only by the truncation error in the power of the logarithm, which is a rational number that you are setting equal to an infinite sum of terms.

Recall that post I did on a non-standard approximation for 1/3 in binary with alternating coefficients. The alternating coefficients may not fit into your framework, but you can find an infinite number of geometric series to represent your numbers in base three! Interestingly enough, I think that likening the fine structure constant to the twelfth power of three is pretty darned clever! You are seeing the importance of such a correspondence. HOWEVER, that approximation introduces errors as well because (from what I gather) the fine structure constant is actually irrational. Recall that you may produce a series expansion of an irrational number in any base. You will see some beauty fall out, but here is the process that I might take, and then go out as many terms as possible to keep as accurate as the data allows:


1. Solve the equation: 3^x = fsc (fine structure constant)
2. The solution will be an infinite geometric series in log(fsc)/log(3). See below:

3^x = fsc
Log(3^x) = Log(fsc)
x Log(3) = Log(fsc)
x = Log(fsc)/Log(3)

3. Now this is with the understanding that you previously series expanded the term LOG(fsc) and you can do that a number of ways!

4. Nice approach, now it is time for cranking down truncations!
 
  • #313
mrigmaiden, nice blog you have.

Yes indeed, natural units are a sort of misnomer. I was happier calling them "Planck units", after the paper of Planck.

About the approximations, in CarlB's posts and in all the thread, it makes sense to have two layers, "exact" and "exact at order alpha" (or alpha^2 or alpha^n), and then of course to consider the "order alpha" correction and see if it is exact on its own.

Since the start of the thread, we have been insisting on simple numbers with percent accuracy, and from this point of view 939 vs 941 MeV qualifies! The powers of 3 are not so simple at first try, but I do not deny them. They have a flavour close to Krolikowski's formulae.
 
Last edited:
  • #314
Mahndisa,

Your comments on accuracy are important and need to be taken account of. Modern computers use the IEEE 754 standard (http://www.psc.edu/general/software/packages/ieee/ieee.html) double precision 64-bit floating point for calculations. IEEE arithmetic is optimal in that any single operation is wrong by at most one bit. My Java program uses the "double" format. This gives about 16 decimal digits (53 binary bits) of accuracy. Each time one does a multiplication or division, one can end up wrong by 1 part in 10^16, or 1 part in 3^33. Subtraction and addition can be worse, if the numbers cancel to high accuracy, but ignoring this, their accuracy is similar.

The calculation for the conversion to base 3 requires a multiplication and a subtraction for each digit produced. The calculator I provided gives base 3 numbers with 33 digits. To do this calculation requires 66 arithmetic operations and the maximum error will be 2110 base 3, which means that the bottom four base-3 digits are iffy. In other words, the calculator is only good to 29 digits of base-3 numbers (13 digits of base-10 numbers). Fortunately, this is far more accuracy than is available in scientific measurements of masses.

Base 9 uses half as many cycles so its error is half as big. Conversion from base 9 to base 3 has no error at all as a single base 9 digit converts into two base 3 digits directly with no arithmetic operations. So if you used base 9 as an intermediary to base 3, you could get 30 3-digits of accuracy, worst case.

The thing you did with the ln(fsc)/ln(3) is actually how I do calculations, and is how the calculator finds base 3 versions of numbers. The natural log calculation has an error of only one bit so doing this introduces very small errors compared to the conversion to base 3.

To test the conversion, you might try converting to base 3 various fractions that you happen to know the repeating base-3 expansion for. For instance, 0.5, in base 3, is 0.1(1). And 1/7 is 0.010212(010212) in base 3. The calculator happens to give these numbers exact. If you can find one that gives a significant error, do tell. I've considered running the numbers out to higher accuracy. I just don't look forward to writing the trig and hyperbolic functions.
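
The digit-by-digit conversion described above (one multiplication and one subtraction per digit) is a few lines of code. A minimal sketch, added in editing, using exact rational arithmetic to sidestep the floating-point error budget entirely (so it illustrates the algorithm, not the Java calculator itself):

Code:
from fractions import Fraction

def to_base3(x, digits=18):
    out = []
    for _ in range(digits):
        x *= 3
        d = int(x)          # next base-3 digit
        out.append(str(d))
        x -= d              # subtract it off and continue with the remainder
    return "0." + "".join(out)

print(to_base3(Fraction(1, 2)))    # 0.111111...       i.e. 0.1(1) in base 3
print(to_base3(Fraction(1, 7)))    # 0.010212010212... repeating block 010212
print(to_base3(Fraction(939565360 - 938272029, 941578003), 24))   # the /3xPreon ratio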

To test the accuracy, you can compute various numbers that you happen to know exactly. For example ((1/729 + 1)^5)( 3^78) gives

1.00001200010100010100001200000022 x 3^(78)

The binomial theorem for (x+1)^5 has terms 1, 5, 10, 10, 5, and 1. In base 3 these are 1, 12, 101, 101, 12, and 1. Thus the exact result is:

1.00001200010100010100001200000100 x 3^(78)

And the above calculation has an error of only one bit in the smallest displayed digit. This sort of thing shows that the inherent accuracy of the calculator is more than sufficient for the first 3 powers of 729 for numbers in the range of what is involved here, even with scientific notation. (This is better than my back of the envelope worst case calculation, but that's what back of the envelope worst calculations are supposed to tell you.)

And anyway, if I had problems that led to random bits, then why the heck am I getting sweet results for powers of 3? That would be a result in probability theory or computational theory that would be no less shocking than the physics result. No, the powers of 3 are present in the spectrum of pure numbers used in particle theory and these are unexplained in the standard model. I have found many more results than are discussed above. I've provided the calculator so that you can find them too. If you don't publish anything, I will keep feeding them to you guys here on this thread.

To find powers of 3 in this stuff induces fairly severe cognitive dissonance in any reasonable physicist. It is very natural to conclude that there are arithmetic or precision errors. Furthermore, this is as it should be; unexpected results need to be examined very carefully and critically.

Suppose that God came to you and provided you with a peek at the unified field theory. Suppose that it required nearly everything that is known and believed about physics to be overturned. Suppose that you were an engineer who hadn't been in physics for years. What would you do?

You can try writing a book on the theory. The resulting book (like any other textbook giving substantial new results in physics) requires many weeks of difficult study to understand. People educated in the previous way of doing things will accept your new way of looking at things with the same warm acceptance that the older physicists accepted quantum theory back 100 years ago (i.e. not at all).

You will find that people capable of understanding your book are too busy pushing their own extensions of the standard physics to bother with it, and people with time available to read are not equipped with a sufficient understanding of the standard model to appreciate the relationships. Furthermore, the more you know about physics, the more you know how the assumptions of physics are self reinforcing, and the less inclined you are to continue reading anything that rejects those things that you already "know". A collection of self reinforcing beliefs is not a proof of truth; instead it only implies that if something is wrong with it as a whole (in the present case, QM is not compatible with relativity), then it is shot through with self reinforcing errors.

So what do you do next? What you do is you start applying your understanding to standard physics. A major source of dissatisfaction with the status quo is the large number of arbitrary constants. These arbitrary constants mostly appear in the quantum theory. This is the weakest spot in the foundations of physics, the place where the cracks are widest. While the foundations are broken from one end to the other, for sociological reasons, you can only sneak in under the "crackpot radar" at this weak point.

So you obtain formulas that relate things that should have no relation. If the relationships are simple enough, your formulas will attract attention, and maybe, eventually, someone will take the trouble to understand what you are saying. Before they do this, they will assume every other possible thing about your formulas. (1) You made a mistake. (2) They are accidents. The next explanation will be: (3) They are correct but can be explained by minor modifications to the standard model.

Communicating with physicists is extremely difficult. One must write formulas that are so blindingly simple that they do not allow cognitive dissonance to cause the reader to jump to explanations that are compatible with the many assumptions built into a physicist. The simplest explanation for a paper giving new results in physics is that it is just wrong.

I don't think that the formulas I've written here are sufficient to cause anyone to undergo the pain needed to understand density operator theory. However, this is just the nose of the lead camel in a very long pack train.

Of the mass numbers in the Particle Data Group, the mesons are a mess and will require some heavy lifting to figure out. The structure of the baryons is fairly obvious. They come in 3s and use modified versions of the Koide relationship in its eigenvector form. I'll write them up next, but don't hold your breath, I'm fairly busy in my day job right now and am a little surprised that I haven't already been fired.

Masses are just the tip of the iceberg of PDG information. There are also widths, phase angles, lifetimes and branching ratios. These are shot through with unexplained coincidences.

Carl
 
Last edited:
  • #315
CarlB said:
You can try writing a book on the theory. The resulting book (like any other textbook giving substantial new results in physics) requires many weeks of difficult study to understand. People educated in the previous way of doing things will accept your new way of looking at things with the same warm acceptance that the older physicists accepted quantum theory back 100 years ago (i.e. not at all).

You will find that people capable of understanding your book are too busy pushing their own extensions of the standard physics to bother with it, and people with time available to read are not equipped with a sufficient understanding of the standard model to appreciate the relationships.

Hi Carl

One of the problems is that the strictly operator approach opens up a theoretical can of worms. A question that arises is: what happens to gauge symmetry in the strictly operator approach? You address this question in 8.4 of your book, but I haven't yet found any explicit calculations of the actions of SU(3), SU(2) and U(1) on the PIs. If you have any of these in print, I'd like to check them out, thanks!
 
Last edited:
