All the lepton masses from G, pi, e

  • Thread starter arivero
  • Start date

Multiple poll: Check all you agree.

  • Logarithms of lepton mass quotients should be pursued.

    Votes: 21 26.3%
  • Alpha calculation from series expansion should be pursued

    Votes: 19 23.8%
  • We should look for more empirical relationships

    Votes: 24 30.0%
  • Pythagorean triples approach should be pursued.

    Votes: 21 26.3%
  • Quotients from distance radii should be investigated

    Votes: 16 20.0%
  • The estimate of the anomalous magnetic moment should be investigated.

    Votes: 24 30.0%
  • The estimate of Weinberg angle should be investigated.

    Votes: 18 22.5%
  • Jay R. Yabon's theory should be investigated.

    Votes: 15 18.8%
  • I support the efforts in this thread.

    Votes: 43 53.8%
  • I think the effort in this thread is not worthwhile.

    Votes: 29 36.3%

  • Total voters
    80
  • #302
CarlB
Science Advisor
Homework Helper
Nice talk, and very good slides. They give the talking points along with the images.

This past weekend I was looking for (one of my allegedly many) copies of Georgi's book. I didn't find it.

The objective was to tie down the 3x3 circulant matrices as an example of an SU(3) symmetry. I managed to make some progress.

With the usual representation of SU(3), the canonical particles are the eigenvectors of the diagonalized operators. Turning these into density operators (like I always do), the states are:
[tex]\left(\begin{array}{ccc}1&&\\&0&\\&&0\end{array}\right)[/tex]
[tex]\left(\begin{array}{ccc}0&&\\&1&\\&&0\end{array}\right)[/tex]
[tex]\left(\begin{array}{ccc}0&&\\&0&\\&&1\end{array}\right)[/tex]

These are primitive idempotents. Call the associated spinors |1>, |2>, and |3>. Their eigenvalues are (+1,+1), (+1,-1), and (-2,0). We want to map these into the circulant primitive idempotents:

[tex]\frac{1}{3}\left(\begin{array}{ccc}1&w^{+n}&w^{-n}\\w^{-n}&1&w^{+n}\\w^{+n}&w^{-n}&1\end{array}\right)[/tex]
where [tex]w = \exp(2i\pi/3)[/tex] and n=1, 2, and 3. Call the three associated spinors |R>, |G>, |B>.

The mapping is then given by S = |R><1| + |G><2| + |B><3|; using it and its inverse, one takes

[tex]a \to S a S^{-1}[/tex]

where a is any of the 8 generators of SU(3).

On doing this, one finds that, sure enough, the diagonalized (commuting) SU(3) generators become commuting circulant matrices. And since the S mapping preserves the trace, and a circulant matrix has a constant diagonal, the diagonal terms are zero. What one ends up with for the commuting circulant generators of SU(3) are:

[tex]\left(\begin{array}{ccc}0&1&1\\1&0&1\\1&1&0\end{array}\right)[/tex]
[tex]\left(\begin{array}{ccc}0&+i&-i\\-i&0&+i\\+i&-i&0\end{array}\right)[/tex]

where I have left off some unimportant constant factors. Note that the above are Hermitian and circulant. The other generators of SU(3) end up non-circulant, just as in the usual representation of SU(3) they are non-diagonal.
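
For anyone who wants to check this numerically, here is a minimal sketch (the class name and layout are my own, not CarlB's code) that conjugates the traceless diagonal generator diag(1,-1,0) by the unitary matrix S whose columns are the circulant eigenvectors with entries [tex]w^{jk}/\sqrt{3}[/tex]; the output should be a circulant matrix with zero diagonal:

[code]
// Illustrative check of a -> S a S^{-1}; S is the Fourier-type matrix
// with entries w^{jk}/sqrt(3), w = exp(2 i pi/3).
public class CirculantCheck {
    static final int N = 3;
    // complex matrix product; index 0 holds real parts, 1 holds imaginary parts
    static double[][][] mul(double[][][] a, double[][][] b) {
        double[][][] c = new double[2][N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++) {
                    c[0][i][j] += a[0][i][k] * b[0][k][j] - a[1][i][k] * b[1][k][j];
                    c[1][i][j] += a[0][i][k] * b[1][k][j] + a[1][i][k] * b[0][k][j];
                }
        return c;
    }
    public static void main(String[] args) {
        double s = 1.0 / Math.sqrt(3.0);
        double[][][] S = new double[2][N][N], Sdag = new double[2][N][N];
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++) {
                double phi = 2.0 * Math.PI * j * k / 3.0;
                S[0][j][k] = s * Math.cos(phi);  S[1][j][k] = s * Math.sin(phi);
                Sdag[0][k][j] = S[0][j][k];      Sdag[1][k][j] = -S[1][j][k];  // S unitary
            }
        double[][][] a = new double[2][N][N];   // traceless diagonal generator
        a[0][0][0] = 1.0;  a[0][1][1] = -1.0;   // diag(1,-1,0)
        double[][][] c = mul(mul(S, a), Sdag);  // a -> S a S^{-1}
        for (int i = 0; i < N; i++) {           // prints a zero-diagonal circulant
            for (int j = 0; j < N; j++)
                System.out.printf("%7.3f%+7.3fi ", c[0][i][j], c[1][i][j]);
            System.out.println();
        }
    }
}
[/code]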

Carl
 
  • #303
CarlB
Science Advisor
Homework Helper
Alejandro, I've been looking at how one would apply Koide's mass formula to the mesons and baryons. There are some really cool things you can do with resonances, but the mass measurements of resonances are not that accurate so the statistics are not terribly convincing. However, the masses of the neutron and proton are very carefully measured so I will talk about them now.

In rewriting Koide's formula into eigenvector form, you may recall that there was an overall scale factor. It was equal to the average of the square roots of the charged lepton masses:

[tex]\begin{array}{ccc}
\textrm{Particle}&\textrm{Mass (eV)}&\sqrt{\textrm{eV}}\\
\textrm{Electron}& 510998.91& 714.84187762049867\\
\textrm{muon}&105658369.2&10279.02569312870236\\
\textrm{tauon}&1776990000&42154.35920518778328\\
\end{array}[/tex]

The sum of square roots is 53148.22677593698431 and so the average square root charged lepton mass is: 17716.07559197899477.

In my version of preons, the "average root" is the contribution of the valence preons to the amplitude of a mass interaction in the charged leptons. The mass comes about by squaring the amplitude. The rest of the amplitude comes from the sea preons, and can be negative or positive. The sea preon amplitude is sqrt(1/2) times the valence preon amplitude, but there are twice as many sea preons, hence the sqrt(2) in the Koide formula.

The idea is that the electron is light not because it is made of parts that are light (compared to the muon and tau), but because the sea and valence contributions cancel almost completely. And the neutrinos are unnaturally light because their mass is coupled through 11 stages of sterile neutrinos. The sterile neutrinos aren't seen because they don't interact weakly (or strongly), but their light weight is not an indication that the particles they are made of have a weight different from the ones making up the charged leptons. Instead, the charged and neutral leptons are just orthogonal primitive idempotents (somehow).

Mass is additive, so when we compute masses based on objects interacting with forces no greater than the strong force, we have to add together masses, not square roots of masses. Accordingly, to get a generic preon mass, we square the above average square root to get:
313859334.4 eV = 313.859334 MeV.

There are three quarks in a baryon. Surprisingly, tripling the above mass gives a number remarkably close to the neutron and proton masses: 941.5780031 MeV.
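
This arithmetic takes only a few lines to reproduce (illustrative sketch; masses in eV, as quoted above):

[code]
// Reproduces the preon-mass arithmetic above; input masses in eV as quoted.
public class PreonMass {
    public static void main(String[] args) {
        double me = 510998.91, mmu = 105658369.2, mtau = 1776990000.0;
        double avgRoot = (Math.sqrt(me) + Math.sqrt(mmu) + Math.sqrt(mtau)) / 3.0;
        double preon = avgRoot * avgRoot;                            // ~313.86 MeV
        System.out.println("preon mass (MeV) = " + preon / 1e6);
        System.out.println("3 x preon  (MeV) = " + 3.0 * preon / 1e6);  // ~941.58
    }
}
[/code]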

As I suspected a year ago, the neutrinos are lighter because of 11 sterile species of neutrinos, and this causes a 3^11 ratio in the average square root mass of the charged versus neutral leptons. I found that power of 3 by writing out the ratio in base 3: the ratio 177082.0 works out to be 22222220121.0 (base 3), a number which is very close to 100000000000.0 (base 3) = 3^11, hence the 3^11 ratio.

The reason for looking for this is that one suspects that the natural probability in a system of three preons is 1/3, and that given several different ways that something can happen, one expects to pick up powers of this probability the same way that one picks up powers of alpha in QED. That is, tree level diagrams will contribute a certain negative power of 3, and the next level diagrams will be some power of 3 smaller, depending on the number of probabilities that have to be picked up.

Applying this sort of reasoning to the neutron and proton masses, I find the following odd coincidences:

(Neutron - Proton)/Proton
(939565360 - 938272029)/938272029 = 0.001378418
= .0000010000101122101 (base 3)

(Neutron - Proton)/Neutron
(939565360 - 938272029)/939565360 = 0.001376520
= .0000010000021121200 (base 3)

(Neutron - Proton)/3xPreon
(939565360 - 938272029)/941578003 = 0.001373578
= .0000010000002221000 (base 3)

To compare these numbers, let us write them together. I've grouped the digits into sets of six to show the structure better. Note that the actual number of significant digits is a couple fewer than shown, but I haven't worked out the details of the variation due to experimental error in the input masses:

[tex]
\begin{array}{ccccc}
.000001 &000010 &112210 &1 &\textrm{/Proton}\\
.000001 &000002 &112120 &0 &\textrm{/Neutron}\\
.000001 &000000 &222100 &0 &\textrm{/3xPreon}\end{array}
[/tex]

Putting the probability p = 1/3, all the above have a leading term of [tex]p^6[/tex], and the next leading terms are of order [tex]p^{12}[/tex]. Of the three, note that the division by 3xPreon is the simplest: in the third group it has the sequence 222100, which suggests the base 3 number 222,222 ~ 1,000,000. That the preon mass division gives the simplest form for the mass is what we would expect if the Preon is to be the simplest and most basic quantum of mass.
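
Here is a small sketch of the kind of converter used for these expansions (illustrative code of my own; a double only carries about 33 trits, so only the leading digits are meaningful):

[code]
// Base-3 expansion of the three mass ratios above (masses in eV).
public class Base3Ratios {
    static String toBase3(double x, int digits) {
        StringBuilder sb = new StringBuilder("0.");
        for (int i = 0; i < digits; i++) {
            x *= 3.0;
            int d = (int) x;   // next trit
            sb.append(d);
            x -= d;
        }
        return sb.toString();
    }
    public static void main(String[] args) {
        double mN = 939565360.0, mP = 938272029.0, preon3 = 941578003.0;
        System.out.println(toBase3((mN - mP) / mP, 19)     + "  /Proton");
        System.out.println(toBase3((mN - mP) / mN, 19)     + "  /Neutron");
        System.out.println(toBase3((mN - mP) / preon3, 19) + "  /3xPreon");
    }
}
[/code]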

Cleaning up the above, we can guess that the first three terms are:

[tex]3(p^6 + p^{12} - N p^{18})[/tex]

That is, the differences between the proton and neutron mass work out to be something involving three diagrams with 6 vertices at first order, three diagrams with 12 vertices at second order, and possibly some multiple of three diagrams each with 18 vertices at third order. The reason for there being multiples of three diagrams at each order is RGB symmetry.

Other than this, I've found more amazing things with the masses of other mesons and baryons, specifically repetitions of the mysterious 0.22222204717 number, but the above mass formulas are easily more important.

The AMU data for all but the tau mass are more accurate than the eV data shown above. And you can presumably compute the tau mass from the Koide formula. That means that you can redo the above computations with AMU data and get more accurate numbers.

What this gives is two levels of mass splitting in the baryons. The first is a sort of fine-preon splitting with a mass of Preon/3^5 = 1.291602 MeV, and the next level of mass splitting has a value of Preon/3^11 = 1771 eV = 0.001771 MeV. Of course the first level of mass is just the preon mass itself.

The objective is to classify as many as possible of the several thousand mesons, baryons, and resonances according to these sorts of mass splittings, and then look for patterns. In addition to the mass splittings, there is also a lot of information about phase angles that will eventually be useful, but I think that getting a good classification of the mesons and baryons by this sort of mass splitting will be a useful thing. One then looks for patterns in the relation between the quantum numbers of the states and the splitting counts.

Summing up the mass formula: the baryon masses are approximately

[tex]m_{N,P} = (1 + O(3^{-6}))(\sqrt{m_e}+\sqrt{m_\mu} + \sqrt{m_\tau})^2/3[/tex]

and their splitting is approximately

[tex]m_N - m_P = (3^{-6} + O(3^{-12}))(\sqrt{m_e}+\sqrt{m_\mu} + \sqrt{m_\tau})^2/3[/tex]
 
Last edited:
  • #304
arivero
Gold Member
The analysis starts more crackpotty than usual...
17716.07559197899477, good heavens! It doesn't mean anything; all these digits are under the experimental error. Fortunately you do not use them in the real discussion, but I was very tempted to stop reading here and I am sure some people have.

The main point in the first part is that you suggest the constituent mass of a u or d quark is near
[tex]
\left(\frac{\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}}{3}\right)^2
[/tex]
and so the proton and neutron masses, having three such quarks, are near

[tex]
\frac{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2}{3}
[/tex]

Ok, it could be. Or it could simply reflect the already mysterious fact that the mass of the tau is in the GeV range.


The second part of your posting is your quest for powers of 3. There, the last of the relationships,
(939565360 - 938272029)/941578003 = 0.001373578
gets the impact of the starting mistake... 941578003 is a fake; its four or five least significant digits are below the experimental error and thus meaningless. Amusingly, if you put the experimental error in, then you need to reject the last two or three digits of 0.001373578, and your fit is more impressive.

The moral: check the numbers against the errors.

Ah, note that using Koide, we have

[tex]
\frac{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2}{3}
=
\frac{m_e + m_\mu + m_\tau}{2}
[/tex]
So a related coincidence is that the masses of the muon and tau average to near the proton mass. Or [tex]m_\mu + m_\tau \approx 12 \pi^5 m_e[/tex] ...
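
To spell out the step, the identity above is just Koide's relation rearranged:

[tex]
\frac{m_e + m_\mu + m_\tau}{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2} = \frac{2}{3}
\quad\Longrightarrow\quad
\frac{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2}{3} = \frac{m_e + m_\mu + m_\tau}{2}
[/tex]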
 
Last edited:
  • #305
CarlB
Science Advisor
Homework Helper
The analysis starts more crackpotty than usual.

Well, this is hot off the press. There are thousands of mesons, baryons and resonances, so I've got many, many hours of effort left on this. One of the things I'm putting together is a Java calculator to make the calculations automatic. What you see above are the first calculations from that calculator.

The second part of your posting is your quest for powers of 3. There, the last of the relationships,
(939565360 - 938272029)/941578003 = 0.001373578
gets the impact of the starting mistake... 941578003 is a fake; its four or five least significant digits are below the experimental error and thus meaningless.

I could have screwed this up. The objective is to use the Koide relation to eliminate the need to use the tau mass data. In the original calculation, I used the AMU figures from my MASSES2 paper (equation 17), using this technique. For this, the scale factor is:

[tex]\mu_1 = 0.5804642012(71)[/tex]

which is a little less than 8 digits of accuracy for the 941578003 number. Note that this is an AMU accuracy, which is 15x as accurate as the eV number. The AMU data at the PDG are more accurate than the eV data because the measurements are made in AMU and then converted to eV, so the error is limited by the accuracy of the conversion. However, this conversion error should cancel, as all the above calculations were done with eV data that were converted from AMU measurements (presumably the PDG uses the same conversion ratio for their best guess). (Provided I avoided the tau mass number.)

In fact, the first time I made these calculations was by hand. I used the AMU data, of course. I was so shocked at the result that I went back and wrote up some Java code to assist in the calculation, and as a check I redid them with eV data instead of AMU. The result was substantially the same.

Let me redo it with the AMU data the right way and show it here as an edit in the next hour or two:

[edit]

Proton and neutron masses in AMU:

[tex]m_P = 1.00727646688(13)[/tex]
[tex]m_N = 1.00866491560(55)[/tex]

[tex]m_N-m_p = 0.00138844872(68)[/tex] (Error is 5 x 10^-7 )

From the MASSES2 paper calculation:

[tex]m_L = [0.5804642012(71)]^2 = 0.3369386889(83) [/tex] (Error is 2.5 x 10^-8)

Therefore, the ratio has an overall error of 5 x 10^-7 and we have:

[tex](m_N-m_P)/m_L = 0.0041207755(20)[/tex]

or

[tex]0.0041207735 < (m_N-m_P)/m_L < 0.0041207775[/tex]

If you're going to convert numbers to base 3 by hand, I suggest first converting them to base 9, and then taking each digit and converting it to base 3. To check my work, you should get the following:

[tex]\begin{array}{cccc}
0.0041207735 &=& 0.0030028471 &(base 9)\\
0.0041207775 &=& 0.0030028486 &(base 9)
\end{array}[/tex]

[tex]\begin{array}{cccc}
0.0041207735 &=& 0.00001000000222112101 &(base 3)\\
0.0041207775 &=& 0.00001000000222112220 &(base 3)
\end{array}[/tex]
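
The base-9 trick is easy to code up as well; each base-9 digit splits exactly into two trits, with no further arithmetic (illustrative sketch, names my own):

[code]
// Base-9-first conversion: one base-9 digit maps exactly to two trits.
public class Base9To3 {
    public static void main(String[] args) {
        double x = 0.0041207735;
        StringBuilder b9 = new StringBuilder("0."), b3 = new StringBuilder("0.");
        for (int i = 0; i < 10; i++) {
            x *= 9.0;
            int d = (int) x;                 // next base-9 digit
            x -= d;
            b9.append(d);
            b3.append(d / 3).append(d % 3);  // exact digit -> trit pair
        }
        System.out.println(b9 + " (base 9)");  // leading digits: 0.003002847...
        System.out.println(b3 + " (base 3)");  // leading trits: 0.000010000002221121...
    }
}
[/code]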

Now I'm claiming that [tex]3^6 = 729[/tex] is the correct equivalent to the fine structure constant here. So it makes sense to convert the above into digit groups of six tribits:

[tex]\begin{array}{ccccccc}
0.0041207735 &=& 0.000010 &000002 &221121 &01 &(base 3)\\
0.0041207775 &=& 0.000010 &000002 &221122 &20 &(base 3)
\end{array}[/tex]

To write this as 3 times a sum in base 729, we have:

[tex]0.0041207745(1) = 3 (3^{-6} +3^{-12} - 12.5 \times 3^{-18} ).[/tex]

In the above, the "12.5" was chosen to get the number in the range above. I.e., in the earlier calculation, this is the [tex]O(3^{-18})[/tex] figure. It is only 1.7 percent of the next higher term, so it seems likely that once I understand the sequence, I can make a calculation that will give this term.

You may possibly recall that my 8-digit predictions for the neutrino masses were based on the assumption that a factor of [tex]3^{12}[/tex] was involved. I realize that this looked pretty cracked at the time. I didn't make much effort to publish that paper because I knew that it would look pretty insane. What can I say, there are things one learns by doing that cannot be easily explained to others. And I didn't want to waste my time on a "gee whiz, look at this unexplainable coincidence" paper just to see it in print.

There are a lot of things in my Clifford algebra calculations that I've never explained to you. As far as I can tell from my DNS logs, very few people have even downloaded all my papers on Clifford algebra. If people were reading them, they'd be pointing out typos or asking for clarification. No one is doing this, but even if they did, there are a lot of other results that are not available on the net. Perhaps my intuition is good, perhaps it is not; that is for time to tell. But before you reject this as another coincidence involving BIG powers of 3, you should consider the possibility that I am still sitting on a lot of information I haven't explained to you.

Carl
 
Last edited:
  • #306
arivero
Gold Member
Proton and neutron masses in AMU:

[tex]m_P = 1.00727646688(13)[/tex]
[tex]m_N = 1.00866491560(55)[/tex]

[tex]m_N-m_p = 0.00138844872(68)[/tex] (Error is 5 x 10^-7 )

From the MASSES2 paper calculation:

[tex]m_L = [0.5804642012(71)]^2 = 0.3369386889(83) [/tex] (Error is 2.5 x 10^-8)

Ok, here is my objection: m_P and m_N have experimental errors, but m_L is a calculation. The error in m_L, if you take it as experimental, is dominated by the error in the mass of the tau, of order 10^-4, and you are taking it to be 10^-8 because you are using the calculated prediction from Koide instead of the experimental value.

In the first post you listed the experimental mass of the tau, not the Koide prediction, and you caused me to switch to fast reading mode (*). But perhaps you really need to use the prediction and to take only the electron and muon as inputs. It is OK to do it; it is only that in the first post you were not doing it, or not saying so. The second post is a lot better.

(*) and in fact I missed that you were already explaining this detail in the second posting :frown:
 
Last edited:
  • #307
arivero
Gold Member
But even if the third triplet of your decimals (er, not 10-cimals, but 3-cimals ... tricimals?) happens to be just noise, the main fit is interesting. You connect the mass of the leptons to the constituent mass of quarks, or to the whole mass of the proton if you wish.

[tex]
\frac{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2}{3}
=
\frac{m_e + m_\mu + m_\tau}{2}
= m_{p,n} \approx 0.5\, m_D[/tex]

EDITED: furthermore, the (electromagnetic??) mass difference between a neutral pion (quarks with attractive electric charges) and a charged pion (repulsive) is about 4.6 MeV, so it is plausible that your difference m_p - m_L comes from the electromagnetic binding, and that a "pure QCD" baryon would have a mass of exactly m_L. I set a question mark because some mass differences can be said to come from the difference of quark masses (still assuming no preonic quarks here)
 
Last edited:
  • #308
CarlB
Science Advisor
Homework Helper
First, a few more interesting coincidences. When the fine structure constant is used, the first order change to energy is proportional to alpha^2. This turns out to be very close to [tex]27^{-3}[/tex]. I would prefer to write this as [tex]27 \times 729^{-2}[/tex], where 729 is the sixth power of 3.

The gravitational coupling constant and the weak coupling constant both have a factor of M^2. If one divides the gravitational coupling constant by 4pi, as is done in the first chapter of the recent textbook "High Energy Physics" by Perkins:
https://www.amazon.com/dp/0521621968/?tag=pfamazon01-20
then the gravitational constant is [tex]5.33 \times 10^{-40}[/tex]. The Fermi coupling constant is already so divided; it is [tex]1.16637(1)\times 10^{-5}[/tex],
see http://pdg.lbl.gov/2006/reviews/consrpp.pdf

These two coupling constants have the pure number ratio of [tex]2.188\times 10^{+34} = 729^{11.99}[/tex], very close to an exact power of 729.
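
A quick sanity check of the ratio, using the figures just quoted (illustrative sketch):

[code]
// Ratio of the Fermi and (divided-by-4pi) gravitational couplings quoted above.
public class CouplingRatio {
    public static void main(String[] args) {
        double gGrav  = 5.33e-40;
        double gFermi = 1.16637e-5;
        double ratio  = gFermi / gGrav;
        System.out.printf("ratio = %.4g%n", ratio);      // ~2.188e34
        System.out.printf("log_729(ratio) = %.3f%n",
                Math.log(ratio) / Math.log(729.0));      // ~11.99
    }
}
[/code]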

So the weak and gravitational coupling constants are related by a power of three (to first order in 1/27). To relate these to the fine structure constant, we must pick an energy or mass. Of course it is possible to choose a mass that makes all three coupling constants be related by powers of three. What would that mass be? Well, since there is a division by M^2, one has different scales that one could choose for this. But a natural scale is the mass of the proton / neutron. And this turns out to work, at least to first order in 1/27.

I know that what I've written is going to be interpreted as just more unimportant coincidences from the forklift driver, but to explain the theory behind this requires more time than my readers have available. If you want to get started understanding my solution of the problem of how one combines quantum states to produce bound states, you will just have to begin reading my book on the subject of the density operator formalism of quantum mechanics: http://brannenworks.com/dmaa.pdf

The above book is not up to date. I will know when someone is reading it because they will generate many dozens of questions, requests for clarification, and complaints about errors; I will then start writing more. I see none of these, so I know that I have about a 3 month head start on the rest of the planet - as it is now, I'm the only person who understands how to apply primitive idempotents to elementary particle theory.

The new theory uses a sort of Feynman diagram in a sort of perturbation theory. Instead of having incoming fermions treated as point objects, they are treated as bundles of six preons. (This is derived from very simple principles in the above book.) There is only one coupling constant; it is 1/3. To properly count diagrams, one must understand that all the particles must be dressed. That is, the bare propagators of the usual theory are dressed composite propagators in this theory, so one must dress the incoming and outgoing fermion propagator bundles to make them identical in form to a free fermion propagator bundle.

There are 27 tree level diagrams that contribute to [tex]\alpha^2[/tex]. In these diagrams, there are three places where an arbitrary color phase can be chosen. The choices of color phase are red, green, and blue, and this gives the 27 = 3x3x3 diagrams. These tree level diagrams each have 12 vertices, and each contributes an overall probability of 1/3. The result is that [tex]\alpha^2[/tex] is given to first order by [tex]27\times 3^{-12}[/tex].

Carl
 
Last edited by a moderator:
  • #309
arivero
Gold Member
The gravitational coupling constant and the weak coupling constant both have a factor of M^2.

This is one of my motivations to be against natural units :!!) : At the end of the day, the Fermi coupling is short range (a sort of Dirac delta, or a 3-dim 1/r^3 potential) while the gravitational coupling is long range. But in natural units they seem the same.
 
Last edited:
  • #310
Kea
They will find my sites of townships -- not the cities that I set there.
They will rediscover rivers -- not my rivers heard at night.
By my own old marks and bearings they will show me how to get there,
By the lonely cairns I builded they will guide my feet aright.


Kipling
 
  • #311
CarlB
Science Advisor
Homework Helper
They will find my sites of townships -- not the cities that I set there.
They will rediscover rivers -- not my rivers heard at night.
By my own old marks and bearings they will show me how to get there,
By the lonely cairns I builded they will guide my feet aright.


Kipling

Well I'm a little more optimistic than that. I think that they will eventually get quite stuck and will be forced to more carefully examine the compass. But until then, there is plenty to do in examining the promised land (that the compass pointed to). The numerical coincidences are fun to find, but dealing with powers of three is a pain unless you happen to have a base-N calculator like this one:
http://www.measurementalgebra.com/CarlCalc.html

I think it's rather clunky and will be improving it to a more efficient model soon. The source Java code is here:
http://www.measurementalgebra.com/Calc_Top.java

[edit]First improvement: The "accuracy" of some of the inverse trig and hyperbolic functions has been "improved".[/edit]

Carl
 
Last edited:
  • #312
03 14 07

Hello Carl:
I found this on a thread on Kea's blog. Nice. I share Alejandro's thoughts on accuracies in your approximations. I don't, however, knock the idea of looking for meaning in powers of three. To be honest, there is a beauty in that approach. Ultimately, just check your significant figures such that your truncation errors can be reduced to nada!!!

I havta say that natural units always drive me crazy because I must remember from whose perspective they are natural. hehehehehe I wasn't able to read the whole thread, so I was simply wondering about the approximation for the fine structure constant you listed above.

When you recommend changing from base ten to base nine to base three, I think you are losing accuracy of numerical approximation with each transformation. This introduces systematic error into your calculations. Why not use the summation convention generated by geometric series no matter which base you are in? Then you are limited only by the truncation error in the power of the logarithm, which is rational, that you set equal to an infinite sum of terms.

Recall that post I did on a non-standard approximation for 1/3 in binary with alternating coefficients. The alternating coefficients may not fit into your framework, but you can find an infinite number of geometric series to represent your numbers in base three! Interestingly enough, I think that likening the fine structure constant to the twelfth power of three is pretty darned clever! You are seeing the importance of such correspondence. HOWEVER, that approximation introduces errors as well because (from what I gather) the fine structure constant is actually irrational. Recall that you may produce a series expansion of an irrational number in any base. You will see some beauty fall out, but here is the process that I might take, and then go out as many terms as possible to keep as accurate as the data allows:


1. Solve equation: 3^x=fsc (fine structure constant)
2. Solution will be infinite geometric series in log(fsc)/log(3). See below:

3^x=fsc
Log(3^x) = Log(fsc)
xLog(3)=Log(fsc)
x=Log(fsc)/Log(3)

3. Now this is with the understanding that you previously series expanded the term LOG(fsc) and you can do that a number of ways!

4. Nice approach, now it is time for cranking down truncations!!!
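
Numerically, the steps above come out to the following (a two-line check; the value alpha = 1/137.036 is assumed, since the thread doesn't fix one):

[code]
// Solving 3^x = fsc for x, per the steps above; the alpha value is assumed.
public class AlphaBase3 {
    public static void main(String[] args) {
        double alpha = 1.0 / 137.035999;
        double x = Math.log(alpha) / Math.log(3.0);  // x = Log(fsc)/Log(3)
        System.out.println("x  = " + x);             // ~ -4.478
        System.out.println("2x = " + 2.0 * x);       // ~ -8.96, near -9 = log_3(27 x 3^-12)
    }
}
[/code]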
 
  • #313
arivero
Gold Member
mrigmaiden, nice blog you have.

Yes indeed, natural units are a sort of misnomer. I was happy with calling them "Planck units", after the paper of Planck.

About the approximations, in CarlB's posts and in all the thread, it makes sense to have two layers, "exact" and "exact at order alpha" (or alpha^2 or alpha^n), and then of course to consider the "order alpha" correction and see if it is exact on its own.

Since the start of the thread, we have been insisting on simple numbers with percent accuracy, and from this point of view 939 vs 941 MeV qualifies! The powers of 3 are not so simple at first try, but I do not deny them. They have a flavour near to Krolinowski's formulae.
 
Last edited:
  • #314
CarlB
Science Advisor
Homework Helper
Mahndisa,

Your comments on accuracy are important and need to be taken account of. Modern computers use IEEE standard double precision 64-bit floating point for calculations. IEEE is optimal in that any calculation is wrong by at most one bit. My Java program uses the "double" format. This gives 16 decimal digits (53 binary bits) of accuracy. Each time one does a multiplication or division, one can end up wrong by 1 part in 10^16, or 1 part in 3^33. Subtraction and addition can be worse, if the numbers cancel to high accuracy, but ignoring this, their accuracy is similar.

The calculation for the conversion to base 3 requires a multiplication and a subtraction for each digit produced. The calculator I provided gives base 3 numbers with 33 digits. To do this calculation requires 66 arithmetic operations, and the maximum error will be 2110 (base 3), which means that the bottom four 3-bits are iffy. In other words, the calculator is only good to 29 digits of base-3 numbers (13 digits of base-10 numbers). Fortunately, this is far more accuracy than is available in scientific measurements of masses.

Base 9 uses half as many cycles so its error is half as big. Conversion from base 9 to base 3 has no error at all as a single base 9 digit converts into two base 3 digits directly with no arithmetic operations. So if you used base 9 as an intermediary to base 3, you could get 30 3-digits of accuracy, worst case.

The thing you did with the ln(fsc)/ln(3) is actually how I do calculations, and is how the calculator finds base 3 versions of numbers. The natural log calculation has an error of only one bit so doing this introduces very small errors compared to the conversion to base 3.

To test the conversion, you might try converting to base 3 various fractions whose repeating base-3 expansions you know. For instance, 0.5, in base 3, is 0.1(1). And 1/7 is 0.010212(010212) in base 3. The calculator happens to give these numbers exact. If you can find one that gives a significant error, do tell. I've considered running the numbers out to higher accuracy. I just don't look forward to writing the trig and hyperbolic functions.

To test the accuracy, you can compute various numbers that you happen to know exactly. For example ((1/729 + 1)^5)( 3^78) gives

1.00001200010100010100001200000022 x 3^(78)

The binomial theorem for (x+1)^5 has terms 1, 5, 10, 10, 5, and 1. In base 3 these are 1, 12, 101, 101, 12, and 1. Thus the exact result is:

1.00001200010100010100001200000100 x 3^(78)

And the above calculation has an error of only one bit in the smallest displayed digit. This sort of thing shows that the inherent accuracy of the calculator is more than sufficient for the first 3 powers of 729 for numbers in the range of what is involved here, even with scientific notation. (This is better than my back of the envelope worst case estimate, but that's what back of the envelope worst-case estimates are supposed to tell you.)

And anyway, if I had problems that led to random bits, then why the heck am I getting sweet results for powers of 3? That would be a result in probability theory or computational theory no less shocking than the physics result. No, the powers of 3 are present in the spectrum of pure numbers used in particle theory, and these are unexplained in the standard model. I have found many more results than are discussed above. I've provided the calculator so that you can find them too. If you don't publish anything, I will keep feeding them to you guys here on this thread.

To find powers of 3 in this stuff induces fairly severe cognitive dissonance in any reasonable physicist. It is very natural to conclude that there are arithmetic or precision errors. Furthermore, this is as it should be; unexpected results need to be examined very carefully and critically.

Suppose that God came to you and provided you with a peek at the unified field theory. Suppose that it required nearly everything that is known and believed about physics to be overturned. Suppose that you were an engineer who hadn't been in physics for years. What would you do?

You can try writing a book on the theory. The resulting book (like any other textbook giving substantial new results in physics) requires many weeks of difficult study to understand. People educated in the previous way of doing things will accept your new way of looking at things with the same warm acceptance that the older physicists accepted quantum theory back 100 years ago (i.e. not at all).

You will find that people capable of understanding your book are too busy pushing their own extensions of the standard physics to bother with it, and people with time available to read are not equipped with a sufficient understanding of the standard model to appreciate the relationships. Furthermore, the more you know about physics, the more you know how the assumptions of physics are self reinforcing, and the less inclined you are to continue reading anything that rejects those things that you already "know". A collection of self reinforcing beliefs is not a proof of truth; instead it only implies that if something is wrong with it as a whole (in the present case, QM is not compatible with relativity), then it is shot through with self reinforcing errors.

So what do you do next? What you do is you start applying your understanding to standard physics. A major source of dissatisfaction with the status quo is the large number of arbitrary constants. These arbitrary constants mostly appear in the quantum theory. This is the weakest spot in the foundations of physics, the place where the cracks are widest. While the foundations are broken from one end to the other, for sociological reasons, you can only sneak in under the "crackpot radar" at this weak point.

So you obtain formulas that relate things that should have no relation. If the relationships are simple enough, your formulas will attract attention, and maybe, eventually, someone will take the trouble to understand what you are saying. Before they do this, they will assume every other possible thing about your formulas. (1) You made a mistake. (2) They are accidents. The next explanation will be: (3) They are correct but can be explained by minor modifications to the standard model.

Communicating with physicists is extremely difficult. One must write formulas that are so blindingly simple that they do not allow cognitive dissonance to cause the reader to jump to explanations that are compatible with the many assumptions built into a physicist. The simplest explanation for a paper giving new results in physics is that it is just wrong.

I don't think that the formulas I've written here are sufficient to cause anyone to undergo the pain needed to understand density operator theory. However, this is just the nose of the lead camel in a very long pack train.

Of the mass numbers in the Particle Data Group, the mesons are a mess and will require some heavy lifting to figure out. The structure of the baryons is fairly obvious. They come in 3s and use modified versions of the Koide relationship in its eigenvector form. I'll write them up next, but don't hold your breath, I'm fairly busy in my day job right now and am a little surprised that I haven't already been fired.

Masses are just the tip of the iceberg of PDG information. There are also widths, phase angles, lifetimes and branching ratios. These are shot through with unexplained coincidences.

Carl
 
Last edited:
  • #315
You can try writing a book on the theory. The resulting book (like any other textbook giving substantial new results in physics) requires many weeks of difficult study to understand. People educated in the previous way of doing things will accept your new way of looking at things with the same warm acceptance that the older physicists accepted quantum theory back 100 years ago (i.e. not at all).

You will find that people capable of understanding your book are too busy pushing their own extensions of the standard physics to bother with it, and people with time available to read are not equipped with a sufficient understanding of the standard model to appreciate the relationships.

Hi Carl

One of the problems is that the strictly operator approach opens up a theoretical can of worms. A question that arises is: what happens to gauge symmetry in the strictly operator approach? You address this question in 8.4 of your book, but I haven't yet found any explicit calculations of the actions of SU(3), SU(2) and U(1) on the PIs. If you have any of these in print, I'd like to check them out, thanks!
 
Last edited:
  • #316
Response to C. Brannen on neutrino mass calculations

03 14 07


Carl, thanks for the response, but I think you missed the point I was trying to make. Yes, I know how a calculator works, but the crucial point I was trying to make was that you get truncation error when you solve the log equation for an arbitrary base, and that error gets carried around with your calculations systematically. My question about using infinite geometric series to represent numbers in any base is a valid one, and I am not sure if you addressed that. My question also had to do with you approximating the fine structure constant, an IRRATIONAL number, in base three. How exactly are you doing that? The way I would do it would be to use an expansion of Log(fsc) and take that to large accuracy first. Next, I would express fsc as an infinite series in Log(fsc), which can be done for arbitrary irrational numbers.

As to communication with physicists, I am not here to do anything but learn. Sakurai used density matrices in Chapter 4, maybe? So density operators aren't new, but I don't know enough about your theory to say much else.

The only thing I was really curious about was how you were expressing the fine structure constant, because I believe your approximation for it in base three is a bit too crude.
 
  • #317
One more Thing

03 14 07

Oh, lastly Carl, I never specify which base of Log I am using because that depends upon the easiest way to parse the problem. I want a pure interpretation of a number and a generating series base three. At this point we are speaking around one another because our desire is to express a number in an arbitrary base. Natural log is only good for when you have e popping about. Otherwise use Log base arbitrary! heheheh So when solving 3^x=2 you could write:

3^x=2
exp(log(3^x))=2
then go through the bs rigmarole with LN, OR you could also say:

3^x=2
Log[3^x] = Log[2]; taking Log base three of both sides gives x = log2/log3. But this is log base 3.

You can do a Taylor series approximation for log base three quite nicely so ...
 
  • #318
CarlB
Science Advisor
Homework Helper
One of the problems is that the strictly operator approach opens up a theoretical can of worms. A question that arises is: what happens to gauge symmetry in the strictly operator approach?

A gauge symmetry means that one has a bunch of distinct ways of representing the exact same physical situation. This in itself is a theoretical can of worms. There is only one universe, why should we assume that it may be faithfully represented by more than one mathematical object? The only reason we require gauge symmetries to define boson interactions is because it has worked so far, but as physics has seen so many times before, this is not proof that it will suffice for all future versions of physics.

The gauge principle is one of the self-reinforcing beliefs about reality that has locked physics into not advancing for so many years. Like I said above, when you have a large number of self-reinforcing beliefs, it does not imply that they are all true. Instead, it implies that if you fiddle with one of them, you will also have to fiddle with the others.

When you convert a spinor into a density matrix (operator), you eliminate the arbitrary complex phase. This is an elimination of a gauge freedom, and yet the density operator has the same physically relevant information as the original wave function. I think that this is a clue. What I'm doing with the SU(3) and SU(2) gauge symmetries is analogous to this. They no longer exist per se.

SU(2) symmetries are very easy in Clifford algebra. Any two primitive idempotents that anticommute and square to unity define an SU(2), if I recall correctly. For the standard model, you need to get SU(2) reps four at a time in the particular form of a doublet and two singlets. These appear naturally in Clifford algebras when you combine two primitive idempotents as is discussed at length in DMAA.

In short, given four primitive idempotents with quantum numbers (-1,-1), (-1,+1), (+1,-1), (+1,+1) the nonzero sums compatible with the Pauli exclusion principle are (-2,0),(+2,0) the doublet, and (0,-2) and (0,+2), the two singlets. To put it into Dirac gamma matrices, the four primitive idempotents could be [tex](1\pm \gamma_0\gamma_3)(1\pm i\gamma_1\gamma_2)/4[/tex] with the signs giving the quantum numbers. On the other hand, SU(3) is built into the assumptions of circulant symmetry. I'd type up more on this, and was sort of getting inclined to, but the PDG is too much fun to play with.

The use of the gauge symmetries in QM is to allow a derivation of the forces of physics. But after one "derives" those forces from this "principle", one need not retain the gauge symmetry itself. Instead, one could suppose that some particular gauge was the correct one and make all one's calculations based on that choice.

In other words, the assumption of gauge symmetry is not required to make calculations in QM, but instead to derive the laws. Gauge symmetry is an attribute of the equations, but it does not need to be an attribute of all possible equations unless you assume that the equations you know are all that can be. Since any new theory of QM need only be equivalent to the old theory in terms of comparing the results of calculations that can be verified by current experiments, there is no need to preserve a gauge symmetry per se.

Things like "no preferred reference frame" are not experimental facts, they are only theoretical facts. An experimental fact is a description of an experiment combined with the observed result of that experiment. For example, electrons interfere with each other when sent through a double slit experiment. All realistic theories must agree on this. They do not need to agree on how the calculation is done. For example, one theory might assume that the electron is composite, another that it is elementary.

The question of the compositeness of the electron is a theoretical fact; it's an assumption of the theory, not something that can be proven experimentally. The theory that produces a calculation is just wrapping around a calculational result; even the calculation itself is just wrapping around that numerical result. The accuracy of the calculation cannot prove the uniqueness of the wrapping.

Regarding the use of density operators (for the continuous degrees of freedom instead of the discrete degrees I play with), you might find this article interesting, which also discusses how one need not have a Hilbert space to do QM:
Brown & Bohm, "Schrodinger revisited: an algebraic approach": http://www.arxiv.org/abs/quant-ph/0005026

Carl
 
Last edited by a moderator:
  • #319
CarlB
Science Advisor
Homework Helper
Carl, thanks for the response, but I think you missed the point I was trying to make. Yes, I know how a calculator works, but the crucial point I was trying to make was that you get truncation error when you solve the log equation for an arbitrary base, and that error gets carried around with your calculations systematically.

Mahndisa,

I'm just a very practical working man, an engineer by trade mostly. Mathematics is very attractive and there are mathematical beauties hiding behind everything, but if I spent the time to explore them I would not have time to do the physics. One thing I think is interesting is the lengths of repeating decimals. It should be clear that the repeating decimal for p/q cannot repeat with more than q-1 bits in an arbitrary base. Do you suppose there could be something else here, perhaps something that has to do with [URL [Broken] Little Theorem[/url]?

But life is short. To remain free of the grasp of pure mathematics, one must make the decision not to waste one's time on the beauty of pure mathematics every second of the day.

Glad to see you're feeling good enough to be posting more.
 
Last edited by a moderator:
  • #320
Optimization Science Should be Explored

03 15 07


<b>"To remain free of the grasp of pure mathematics, one must make the decision to not waste ones time on the beauty of pure mathematics every second of the day."</b>


Yes Carl, this is where we have a difference in approach. I don't think that studying the beauty of pure mathematics is ever a waste, and I don't see it as separate from physics either. Yes, Fermat's Little Theorem is part of what I was alluding to. However, still not quite. You don't have to be obsessed with the beauty of math to come up with additive discrete representations of numbers in an arbitrary base using geometric series. And although there exist practical limitations to computing power, there are workarounds for the efficient.

Since you are using density matrix formalism etc, how could you distinguish that formalism from revelling in mathematics? I see no distinction.

What I will say is that more physicists and engineers might wish to study optimization science, and intense error analysis. It can only help.

One of the issues that I faced when I took a Graduate Laboratory Seminar was that I knew a lot of the theory quite well and could build circuits etc, but my methods for error propagation were not as complete. I studied this and now try to avoid the dominant sources of systematic error one might come across in such computations.

As a matter of practicality, a geometric series representation used to represent a number makes the most sense and by using double or float (as you aptly mentioned) you can go out quite far!
 
  • #321
CarlB
Science Advisor
Homework Helper
Baryon excitations part I, theory

Continuing the story, we return to the case of the baryons.

The Koide formula gives the masses of the electron, muon, and tau by the formula:
[tex]\sqrt{m_n} = \mu_v + \mu_s \cos(2n\pi/3 + \delta)[/tex]

where the constants are defined as:

[tex] \begin{array}{rcl}
\mu_v &=& 17.716 \sqrt{MeV},\\
\mu_s &=& \mu_v \sqrt{2},\\
\delta &=& 0.22222204717
\end{array}[/tex]

The above formula has one degree of freedom removed by the square root of 2. This is the formula that Koide discovered in the early 1980s. The angle [tex]\delta[/tex] is surprisingly close to 2/9, and this post is devoted to the application of this angle to the baryon resonances.
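
Plugging the constants back in recovers the charged lepton masses; a minimal sketch (illustrative code, using the rounded constants above):

[code]
// Evaluates the Koide parameterization above; units are sqrt(MeV).
public class KoideMasses {
    public static void main(String[] args) {
        double muV = 17.716, muS = muV * Math.sqrt(2.0), delta = 0.22222204717;
        String[] name = {"electron", "muon", "tauon"};
        for (int n = 1; n <= 3; n++) {
            double root = muV + muS * Math.cos(2.0 * n * Math.PI / 3.0 + delta);
            System.out.printf("%-8s  m = %9.3f MeV%n", name[n - 1], root * root);
        }
    }
}
[/code]

With these rounded constants the masses come back as roughly 0.51, 105.7, and 1777 MeV.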

The author's model of the leptons supposes that they are composite, with three elementary objects in each (simplifying here a bit), and that these elementary objects are held together by a force similar to the color force. That is, the claim is that the electron, muon and tau are color singlets and the generation structure arises from a similar effect.

The author extended the above formula to the neutral leptons, the neutrinos, by jumping to the conclusion that the above numbers have something to do with quantum numbers. The justification for this is beyond the scope of this post, but the formula published a year ago was:

[tex]\sqrt{m_{\nu n}} = 3^{-11} \left(\mu_v + \mu_s\cos(2n\pi/3 + \delta + \pi/12)\right)[/tex]
with the same constants given above.

A short form reason for the pi/12 is that it appears when you convert a 3x3 array of complex multiples of primitive idempotents of the Pauli algebra into a 3x3 array of complex numbers in a way that preserves matrix addition and multiplication. In short, the equation that relates the 0.5 with the pi/12 is:
[tex]\begin{array}{rcl}
P_x &=& 0.5(1 + \sigma_x),\\
P_y &=& 0.5(1 + \sigma_y),\\
P_z &=& 0.5(1 + \sigma_z),\\
(0.5\exp(-i\pi/12))^4 P_xP_yP_zP_x &=& (0.5 \exp(-i\pi/12)) P_x,
\end{array}[/tex]
That is, one can eliminate the nastiness of the product of these three projection operators, and turn them into just another complex multiplication, if you multiply each by some numbers that have to do with the sqrt(2) in the Koide formula and the difference between the charged and neutral lepton delta angles. (Note, I haven't checked the above with care. If you play around with it, you can fix any errors.)
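
Since the constants above are flagged as unchecked, here is a sketch (illustrative code, not from DMAA) that just computes the product numerically and prints the complex factor c in [tex]P_xP_yP_zP_x = c\,P_x[/tex], from which the correct constants can be read off:

[code]
// Computes P_x P_y P_z P_x for P_a = (1 + sigma_a)/2 and prints the factor c
// with P_x P_y P_z P_x = c P_x. Matrices are {real part, imaginary part} pairs.
public class PauliProduct {
    static double[][][] mul(double[][][] a, double[][][] b) {
        double[][][] c = new double[2][2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++) {
                    c[0][i][j] += a[0][i][k] * b[0][k][j] - a[1][i][k] * b[1][k][j];
                    c[1][i][j] += a[0][i][k] * b[1][k][j] + a[1][i][k] * b[0][k][j];
                }
        return c;
    }
    public static void main(String[] args) {
        double[][][] Px = {{{0.5, 0.5}, {0.5, 0.5}}, {{0, 0}, {0, 0}}};
        double[][][] Py = {{{0.5, 0}, {0, 0.5}}, {{0, -0.5}, {0.5, 0}}};
        double[][][] Pz = {{{1, 0}, {0, 0}}, {{0, 0}, {0, 0}}};
        double[][][] p = mul(mul(mul(Px, Py), Pz), Px);
        double re = p[0][0][0] / 0.5, im = p[1][0][0] / 0.5;  // c = p_00 / (P_x)_00
        System.out.printf("|c| = %.6f, arg c = %.6f rad%n",
                Math.hypot(re, im), Math.atan2(im, re));
    }
}
[/code]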

There are hundreds of baryon resonances / excitations and understanding their masses is an ongoing project. We will use the word "resonance" to mean a set of baryons that all have the same quantum numbers. We will use the word "excitations" to distinguish between baryons that have the same quantum numbers.

Other than lepton number, the leptons all have the same quantum numbers. So our analogy between the leptons and the baryons is between generations of leptons and excitations of baryons. This makes a certain amount of sense in that leptons do not have excitations other than the generation structure. One could also imagine looking at the generation structure of the baryons. For such an analogy, one would want to compare stuff like (ddd,sss,bbb) and (uuu,ccc,ttt). Unfortunately, these more charmed states do not have very good data.

When a baryon has only two or fewer excitations, we suppose that the others are either yet to be detected, or are hidden as we will discuss later. For this program, a worse situation is when an excitation comes in a multiplicity greater than 3. For the baryons, this happens only one time, with the [tex]N_{1/2+}[/tex]. The fourth state carries only one reliability star. In the PDG data, this means that "evidence of existence is poor". Accordingly, we will ignore this state.

With the leptons, we saw that the angle 0.22222204717 had something to do with the difference between the charged and neutral leptons. We suppose it has to do with the Weinberg angle. The charged leptons and the neutral ones differed by pi/12 = 15 degrees. We therefore speculate that the excitations of the baryon resonances will carry this same relation, that is, that they will have angular dependency of the form:
[tex]\cos(2n\pi/3 + \delta + m\pi/12)[/tex].
where m depends on the resonance.

Adding 8 to m is the same as subtracting 1 from n, so we need only consider 8 different values of m, for instance, from 0 to 7. Since the cosine is an even function, we cannot distinguish between positive and negative angles. This causes a reflection in the data. Consequently, the algorithm for finding the angle from the mass data will return angles from 0 to 60 degrees rather than 0 to 120 degrees. As a result, the cases for m > 3 are folded over those same 60 degrees, and we will bin the calculated values into the following 8 bins:

[tex]\begin{array}{rcr}
\delta + 7\pi/12&\equiv& 2.27\\
\delta + 0\pi/12&\equiv&12.73\\
\delta + 6\pi/12&\equiv&17.27\\
\delta + 1\pi/12&\equiv&27.73\\
\delta + 5\pi/12&\equiv&32.27\\
\delta + 2\pi/12&\equiv&42.73\\
\delta + 4\pi/12&\equiv&47.27\\
\delta + 3\pi/12&\equiv&57.73\end{array}[/tex]

For example, in the first line, [tex]\delta + 7\pi/12[/tex] gives 117.73 degrees. Adding 1 to n is the same as subtracting 120 degrees from the angle, so this is the same as -2.27 degrees and is indistinguishable from +2.27 degrees because the cosine is an even function.

Note that the first and last bins are very close to 0 and 60 degrees. An angle of 0 degrees corresponds to a degenerate case with two excitations at the lower mass value while the angle 60 degrees puts the degeneracy at the upper mass value. These degeneracies would correspond to excitations of baryons that only appear with two masses.

The above set of bins has gaps of length 4.54 and 10.46 degrees. The mean square deviation for a random value in a gap of length D is:
[tex]\frac{2}{D} \int_{x=0}^{D/2} x^2 dx = D^2/12[/tex]
The 4.54 degree gaps will be hit 4.54/15 of the time, while the others will be hit 10.46/15 of the time. The average rms is therefore:
[tex]\left(\frac{4.54^3 + 10.46^3}{12 \times 15}\right)^{1/2} = 2.623[/tex]
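
Both the folding and the expected rms are easy to reproduce (illustrative sketch):

[code]
// Reproduces the eight bin centers and the expected rms error above.
public class BaryonBins {
    public static void main(String[] args) {
        double delta = Math.toDegrees(0.22222204717);    // ~12.73 degrees
        for (int m = 0; m < 8; m++) {
            double angle = (delta + 15.0 * m) % 120.0;   // pi/12 = 15 degrees
            if (angle > 60.0) angle = 120.0 - angle;     // cosine folding
            System.out.printf("m=%d  bin at %5.2f degrees%n", m, angle);
        }
        double rms = Math.sqrt((Math.pow(4.54, 3) + Math.pow(10.46, 3)) / (12.0 * 15.0));
        System.out.printf("expected rms = %.3f degrees%n", rms);   // ~2.623
    }
}
[/code]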

I will present the PDG data in the next post.
 
Last edited:
  • #322
CarlB
Science Advisor
Homework Helper
Baryon Excitations part II, PDG data

Here are the results of the calculations:
[tex]\begin{array}{lccccc|l}
Bin/Set & \mu_v & \mu_s & \delta & Error & L_{IJ} & Notes \\ \hline
m=7 & & & 2.27 & & & Hidden \\ \hline
m=0 & & &12.73 & & &\\
e,\mu,\tau & 17.716 &25.05 &12.73 & & & ****, ****, ****\\
N_{1/2-} & 41.89 & 3.92 &12.97 &+0.24 & S_{11} & ****, ****, *\\
\Lambda_{3/2-} & 42.77 & 5.58 &12.67 &-0.06 & D_{03} & ****, ****, *\\\hline
m=6 & & &17.27 & & & \\
\Sigma_{3/2-} & 41.52 & 2.45 &16.08 &-1.19 & D_{13} & ****, ***, **\\
N_{3/2-} & 41.95 & 3.87 &19.22 &+1.95 & D_{13} & ****, ***, **\\ \hline
m=1 & & &27.73 & & & \\
\Sigma_{1/2-} & 42.33 & 2.60 &23.03 &-4.70 & S_{11} & ***, **, *\\ \hline
\end{array}[/tex]

[tex]\begin{array}{lccccc|l}
Bin/Set & \mu_v & \mu_s & \delta & Error & L_{IJ} & Notes \\ \hline
m=5 & & &32.27 & & & \\
\Delta_{1/2-} & 43.51 & 3.36 &31.43 &-0.84 & S_{31} & ****, **, *\\
\Delta_{3/2+} & 39.80 & 5.16 &35.70 &+3.43 & P_{33} & ****, ***, ***\\ \hline
m=2 & & &42.73 & & & \\
\Sigma_{3/2+} & 41.90 & 4.95 &41.57 &-1.16 & P_{13} & ****, **, *\\
N_{1/2+} & 36.69 & 6.34 &42.65 &-0.08 & P_{11} & ****, ****, ***\\
\Sigma_{1/2+} & 39.55 & 5.23 &43.22 &+0.49 & P_{11} & ****, ***, **\\
\Lambda_{1/2-} & 40.21 & 2.81 &43.51 &+0.78 & S_{01} & ****, ****, ***\\ \hline
m=4 & & &47.27 & & & \\
\Lambda_{1/2+} & 38.74 & 5.46 &47.46 &+0.19 & P_{01} & ****, ***, ***\\ \hline
m=3 & & &57.73 & & & Hidden\\ \hline
\end{array}[/tex]

The units of [tex]\mu_v^2, \mu_s^2[/tex] are MeV. The next column is the calculated delta value, then the error. The final column gives notes. The asterisks are from the PDG and describe how certain the three states are. I've split the table into two because the PF LaTeX editor just couldn't quite handle it as one.

The rms error is 1.88 degrees, somewhat below the expected 2.623. The worst fit is for the [tex]\Sigma_{1/2-}[/tex], which coincidentally also happens to carry the worst asterisk rating from the PDG. The second worst fit is the [tex]\Delta_{3/2+}[/tex]. While this set is well supported by experiment, it includes the [tex]\Delta(1600)[/tex] whose mass is only loosely constrained. About this particle, the PDG writes: "The various analyses are not in good agreement." Together, these two bad fits contribute 80% of the error among the 13 sets of 3 masses each.

The [tex]\Sigma_{1/2-}[/tex] excitations, in addition to carrying a high error, are also in a class by themselves. They would seem to fit better in the [tex]\delta + 0\pi/12[/tex] class, with the [tex]N_{1/2-}[/tex], which is also [tex]S_{11}[/tex].

What you're seeing here is ALL the data from the baryons. I suspect that the data will look better when the particle mass ranges are taken into account. My mass calculator does not yet have the software to take errors in the data into account. Before this can be published, it needs to have the errors in the excitations taken into account. Hopefully the bad fits will correspond to loose mass ranges.
 
Last edited:
  • #323
CarlB
Science Advisor
Homework Helper
Whoops. I left off the [tex]\Sigma_{1/2+}[/tex]. [edit]Oh, no I didn't![/edit]

Looking at the data, it seems that the really well supported classes are the even ones. Also, there are two low-lying excitations with two masses where one of the two masses is said, in the PDG, to be possibly doubled. These would be the [tex]N_{3/2-} N(1700), N(2080)^2 = D_{13}[/tex] and the [tex]N_{3/2+}N(1720)^2,N(1900) = P_{13}[/tex].

I've finished the user interface for an online applet that will compute Koide parameters with error bars, but I've not yet put the math code into it. I've got that code in another program so it's just a matter of putting it in there and ironing out any bugs. But I really need to get back to the day job.

Carl
 
Last edited:
  • #324
CarlB
Science Advisor
Homework Helper
Okay, I've got a tool that allows you to compute error bars for these Koide type parameterizations of three masses.

In addition, the mesons are famous for being messy with duplicate masses hard to distinguish. One ends up with six masses instead of three. The hope is that one can split those six mesons into two groups of three that are decent. Accordingly, the tool holds six masses at the same time and automatically steps through the various permutations:
http://www.measurementalgebra.com/KoideCalc.html

Source code is available at the above.

Using new tools is difficult. I've initialized the above program with the data for the electron, muon and tau. So all you have to do to calculate your first error bars is hit the "KOIDE" button. Also, I've set it up to give the angles in degrees rather than radians.
 
Last edited:
  • #325
arivero
Gold Member
In addition, the mesons are famous for being messy with duplicate masses hard to distinguish.

Indeed that is the idea, isn't it? We have six mesons with charge +1, due to combinations of three families of antidown quarks and two families of up quarks. But they have spin 0; then we have the same number of degrees of freedom as in the case of the leptons of charge +1.

Have you spotted now some interesting pattern in the mesons, Carl? I tried some pages ago, up in the thread, and I was not very happy.
 
