All the lepton masses from G, pi, e

  • Thread starter arivero
  • Start date
  • Tags
    Lepton Pi
In summary, the conversation revolved around using various equations and formulae to approximate the values of fundamental constants such as the Planck Mass and the fine structure constant. The discussion also delved into the possibility of using these equations to predict the masses of leptons and other particles. Some participants raised concerns about the validity of using such numerical relations, while others argued that it could be a useful tool for remembering precise values.

Multiple poll: Check all you agree.

  • Logarithms of lepton mass quotients should be pursued.

    Votes: 21 26.6%
  • Alpha calculation from serial expansion should be pursued

    Votes: 19 24.1%
  • We should look for more empirical relationships

    Votes: 24 30.4%
  • Pythagorean triples approach should be pursued.

    Votes: 21 26.6%
  • Quotients from distance radii should be investigated

    Votes: 16 20.3%
  • The estimate of magnetic anomalous moment should be investigated.

    Votes: 24 30.4%
  • The estimate of Weinberg angle should be investigated.

    Votes: 18 22.8%
  • Jay R. Yabon theory should be investigated.

    Votes: 15 19.0%
  • I support the efforts in this thread.

    Votes: 43 54.4%
  • I think the effort in this thread is not worthwhile.

    Votes: 28 35.4%

  • Total voters
    79
  • #316
Response to C. Brennen on neutrino mass calculations

03 14 07


Carl, thanks for the response, but I think you missed the point I was trying to make. Yes, I know how a calculator works, but the crucial point was that you get truncation error when you solve a log equation for an arbitrary base, and that error gets carried through your calculations systematically. My question about using infinite geometric series to represent numbers in any base is a valid one, and I am not sure you addressed it. My question also had to do with your approximating the fine structure constant, an IRRATIONAL number, in base three. How exactly are you doing that? The way I would do it would be to use an expansion of Log(fsh) and take that to high accuracy first. Next, I would express fsh as an infinite series in Log(fsh), which can be done for arbitrary irrational numbers.

As to communication with physicists, I am not here to do anything but learn. Sakurai used density matrices in Chapter 4, maybe? So density operators aren't new, but I don't know enough about your theory to say much else.

The only thing I was really curious about was how you were expressing the fine structure constant, because I believe your approximation for it in base three is a bit too crude.
 
  • #317
One more Thing

03 14 07

Oh, lastly Carl, I never specify which base of Log I am using because that depends upon the easiest way to parse the problem. I want a pure interpretation of a number and a generating series base three. At this point we are speaking past one another because our desire is to express a number in an arbitrary base. Natural log is only good when you have e popping about. Otherwise use Log base arbitrary! heheheh So when solving 3^x=2 you could write:

3^x=2
ln(3^x) = ln 2
then go through the rigmarole with ln, OR you could also say:

3^x=2
Take log base three of both sides to get x = log2/log3. But this is log base 3.

You can do a Taylor series approximation for log base three quite nicely so ...
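
One way to sketch this (my own illustration, not necessarily the poster's exact method) is to build log base three from a power series rather than a calculator's LN key, then check the solution of 3^x = 2:

```python
# A minimal sketch: solve 3^x = 2 with log base three, computing the
# logarithms from the series ln(y) = 2 * artanh((y-1)/(y+1)) instead
# of a calculator key.

def ln_series(y, terms=60):
    """ln(y) via 2 * sum_{k>=0} z**(2k+1)/(2k+1), with z = (y-1)/(y+1)."""
    z = (y - 1) / (y + 1)
    return 2 * sum(z ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

def log3(y):
    """Log base three by change of base: log3(y) = ln(y)/ln(3)."""
    return ln_series(y) / ln_series(3)

x = log3(2)      # the solution of 3^x = 2
print(x)         # ~0.63093
print(3 ** x)    # ~2.0
```

The series converges for all y > 0 and reaches full double precision here, so the remaining error in 3^x - 2 is rounding, not truncation.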
 
  • #318
kneemo said:
One of the problems is that the strictly operator approach opens up a theoretical can of worms. A question that arises is: what happens to gauge symmetry in the strictly operator approach?

A gauge symmetry means that one has a bunch of distinct ways of representing the exact same physical situation. This in itself is a theoretical can of worms. There is only one universe, why should we assume that it may be faithfully represented by more than one mathematical object? The only reason we require gauge symmetries to define boson interactions is because it has worked so far, but as physics has seen so many times before, this is not proof that it will suffice for all future versions of physics.

The gauge principle is one of the self-reinforcing beliefs about reality that has locked physics into not advancing for so many years. Like I said above, when you have a large number of self-reinforcing beliefs, it does not imply that they are all true. Instead, it implies that if you fiddle with one of them, you will also have to fiddle with the others.

When you convert a spinor into a density matrix (operator), you eliminate the arbitrary complex phase. This is an elimination of a gauge freedom, and yet the density operator has the same physically relevant information as the original wave function. I think that this is a clue. What I'm doing with the SU(3) and SU(2) gauge symmetries is analogous to this. They no longer exist per se.

SU(2) symmetries are very easy in Clifford algebra. Any two primitive idempotents that anticommute and square to unity define an SU(2), if I recall correctly. For the standard model, you need to get SU(2) reps four at a time in the particular form of a doublet and two singlets. These appear naturally in Clifford algebras when you combine two primitive idempotents as is discussed at length in DMAA.

In short, given four primitive idempotents with quantum numbers (-1,-1), (-1,+1), (+1,-1), (+1,+1) the nonzero sums compatible with the Pauli exclusion principle are (-2,0),(+2,0) the doublet, and (0,-2) and (0,+2), the two singlets. To put it into Dirac gamma matrices, the four primitive idempotents could be [tex](1\pm \gamma_0\gamma_3)(1\pm i\gamma_1\gamma_2)/4[/tex] with the signs giving the quantum numbers. On the other hand, SU(3) is built into the assumptions of circulant symmetry. I'd type up more on this, and was sort of getting inclined to, but the PDG is too much fun to play with.

The use of the gauge symmetries in QM is to allow a derivation of the forces of physics. But after one "derives" those forces from this "principle", one need not retain the gauge symmetry itself. Instead, one could suppose that some particular gauge was the correct one and make all ones calculations based on that choice.

In other words, the assumption of gauge symmetry is not required to make calculations in QM, but instead to derive the laws. Gauge symmetry is an attribute of the equations, but it does not need to be an attribute of all possible equations unless you assume that the equations you know are all that can be. Since any new theory of QM need only be equivalent to the old theory in terms of comparing the results of calculations that can be verified by current experiments, there is no need to preserve a gauge symmetry per se.

Things like "no preferred reference frame" are not experimental facts, they are only theoretical facts. An experimental fact is a description of an experiment combined with the observed result of that experiment. For example, electrons interfere with each other when sent through a double slit experiment. All realistic theories must agree on this. They do not need to agree on how the calculation is done. For example, one theory might assume that the electron is composite, another that it is elementary.

The question of the compositeness of the electron is a theoretical fact; it's an assumption of the theory, not something that can be proven experimentally. The theory that produces a calculation is just wrapping around a calculational result, and even the calculation itself is just wrapping around that numerical result. The accuracy of the calculation cannot prove the uniqueness of the wrapping.

Regarding the use of density operators (for the continuous degrees of freedom instead of the discrete degrees I play with), you might find this article interesting, which also discusses how one need not have a Hilbert space to do QM:
http://www.arxiv.org/abs/quant-ph/0005026 - Brown & Hiley, "Schrodinger revisited: an algebraic approach"

Carl
 
Last edited by a moderator:
  • #319
mrigmaiden said:
Carl thanks for the response, but I think you missed the point I was trying to make. Yes I know how a calculator works, but the crucial point I was trying to make was that you get truncation error when you solve log equation for arbitrary base and that error gets carried around with your calculations systemmatically.

Mahndisa,

I'm just a very practical working man, an engineer by trade mostly. Mathematics is very attractive and there are mathematical beauties hiding behind everything, but if I spent the time to explore them I would not have time to do the physics. One thing I think is interesting is the lengths of repeating decimals. It should be clear that the repeating decimal for p/q cannot repeat with more than q-1 digits in an arbitrary base. Do you suppose there could be something else here, perhaps something that has to do with Fermat's Little Theorem?
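
That is indeed Fermat's little theorem territory: for q coprime to the base, the period of 1/q is the multiplicative order of the base mod q, which divides q-1 when q is prime. A small sketch (the function name is mine):

```python
from math import gcd

def period(q, base=10):
    """Length of the repeating block of 1/q in the given base, for q > 1
    coprime to the base: the multiplicative order of base mod q."""
    assert gcd(q, base) == 1
    k, r = 1, base % q
    while r != 1:
        r = (r * base) % q
        k += 1
    return k

print(period(7))       # 1/7 = 0.142857... has period 6 = 7 - 1
print(period(13, 3))   # 1/13 in base three: 3**3 = 27 = 1 mod 13, period 3
```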

But life is short. To remain free of the grasp of pure mathematics, one must make the decision to not waste ones time on the beauty of pure mathematics every second of the day.

Glad to see you're feeling good enough to be posting more.
 
Last edited by a moderator:
  • #320
Optimization Science Should be Explored

03 15 07


<b>"To remain free of the grasp of pure mathematics, one must make the decision to not waste ones time on the beauty of pure mathematics every second of the day."</b>


Yes Carl, this is where we have a difference in approach. I don't think that studying the beauty of pure mathematics is ever a waste and I don't see it separate from physics either. Yes Fermat's Little Theorem is part of what I was alluding to. However, still not quite. You don't have to be obsessed with the beauty of math to come up with additive discrete representations of numbers in arbitrary base using geometric series. And although there exist practical limitations to computing power, there are go-arounds for the efficient.

Since you are using density matrix formalism etc, how could you distinguish that formalism from revelling in mathematics? I see no distinction.

What I will say is that more physicists and engineers might wish to study optimization science, and intense error analysis. It can only help.

One of the issues that I faced when I took a Graduate Laboratory Seminar was that I knew a lot of the theory quite well and could build circuits etc., but my methods for error propagation were not as complete. I studied this and now try to avoid the dominant sources of systematic error one might come across in such computations.

As a matter of practicality, a geometric series representation used to represent a number makes the most sense and by using double or float (as you aptly mentioned) you can go out quite far!
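
A minimal sketch of such an additive, geometric-series representation in an arbitrary base (all names here are mine; the truncation error after n digits is bounded by base^-n, up to float rounding):

```python
def digits(x, base=3, n=20):
    """First n fractional digits of x in the given base (0 <= x < 1)."""
    out = []
    for _ in range(n):
        x *= base
        d = int(x)
        out.append(d)
        x -= d
    return out

def value(ds, base=3):
    """Partial sum of the geometric-series representation sum d_k * base**-(k+1)."""
    return sum(d * base ** -(k + 1) for k, d in enumerate(ds))

alpha = 1 / 137.035999          # fine structure constant, CODATA-era value
ds = digits(alpha, 3, 30)
approx = value(ds, 3)
print(abs(alpha - approx))      # truncation error, below ~3**-30
```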
 
  • #321
Baryon excitations part I, theory

Continuing the story, we return to the case of the baryons.

The Koide formula gives the masses of the electron, muon, and tau by the formula:
[tex]\sqrt{m_n} = \mu_v + \mu_s \cos(2n\pi/3 + \delta)[/tex]

where the constants are defined as:

[tex] \begin{array}{rcl}
\mu_v &=& 17.716 \sqrt{MeV},\\
\mu_s &=& \mu_v \sqrt{2},\\
\delta &=& 0.22222204717
\end{array}[/tex]

The above formula has one degree of freedom removed with the square root of 2. This is the formula that Koide discovered in the early 1980s. The angle \delta, is surprisingly close to 2/9, and this post is devoted to the application of this angle to the baryon resonances.
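
The parameterization is easy to check numerically (a sketch, assuming n = 1, 2, 3 label the electron, muon, and tau):

```python
from math import cos, pi, sqrt

mu_v = 17.716                # in sqrt(MeV)
mu_s = mu_v * sqrt(2)
delta = 0.22222204717

def mass(n):
    """Charged-lepton mass in MeV from the Koide parameterization."""
    return (mu_v + mu_s * cos(2 * n * pi / 3 + delta)) ** 2

print(mass(1))   # ~0.511    (electron)
print(mass(2))   # ~105.67   (muon)
print(mass(3))   # ~1776.97  (tau)
```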

The author's model of the leptons supposes that they are composite, with three elementary objects in each (simplifying here a bit), and that these elementary objects are held together with a force similar to the color force. That is, the claim is that the electron, muon and tau are color singlets and the generation structure arises from a similar effect.

The author extended the above formula to the neutral leptons, the neutrinos, by jumping to the conclusion that the above numbers have something to do with quantum numbers. The justification for this is beyond the scope of this post, but the formula published a year ago was:

[tex]\sqrt{m_{\nu n}} = 3^{-11} \left(\mu_v + \mu_s\cos(2n\pi/3 + \delta + \pi/12)\right)[/tex]
with the same constants given above.

A short form reason for the pi/12 is that it appears when you convert a 3x3 array of complex multiples of primitive idempotents of the Pauli algebra into a 3x3 array of complex numbers that preserves matrix addition and multiplication. In short, the equation that relates the 0.5 with the pi/12 is:
[tex]\begin{array}{rcl}
P_x &=& 0.5(1 + \sigma_x),\\
P_y &=& 0.5(1 + \sigma_y),\\
P_z &=& 0.5(1 + \sigma_z),\\
(\sqrt{2}\exp(-i\pi/12))^4 P_xP_yP_zP_x &=& (\sqrt{2} \exp(-i\pi/12)) P_x,
\end{array}[/tex]
That is, one can eliminate the nastiness of the product of these three projection operators, and turn them into just another complex multiplication, if you multiply each by a number that has to do with the sqrt(2) in the Koide formula and the difference between the charged and neutral lepton delta angles. (Note, I haven't checked the above with care. If you play around with it, you can fix any errors.)
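
Since the post invites checking, here is a quick numerical verification (my own sketch using numpy): the product of the three Pauli projectors picks up the complex factor 2^{-3/2} e^{iπ/4}, and rescaling each projector by √2 e^{-iπ/12} turns the product identity into plain complex multiplication:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Px, Py, Pz = 0.5 * (I2 + sx), 0.5 * (I2 + sy), 0.5 * (I2 + sz)

# The raw product carries a complex (Berry-like) phase:
#   Px Py Pz Px = 2**-1.5 * exp(i*pi/4) * Px
prod = Px @ Py @ Pz @ Px
print(np.allclose(prod, 2 ** -1.5 * np.exp(1j * np.pi / 4) * Px))

# Rescaling each projector by k = sqrt(2)*exp(-i*pi/12) absorbs it:
#   k**4 * (Px Py Pz Px) = k * Px
k = np.sqrt(2) * np.exp(-1j * np.pi / 12)
print(np.allclose(k ** 4 * prod, k * Px))
```

Both checks print True, which is where the π/12 in the neutrino formula comes from.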

There are hundreds of baryon resonances / excitations and understanding their masses is an ongoing project. We will use the word "resonance" to mean a set of baryons that all have the same quantum numbers. We will use the word "excitations" to distinguish between baryons that have the same quantum numbers.

Other than lepton number, the leptons all have the same quantum numbers. So our analogy between the leptons and the baryons, is between generations of leptons, and excitations of baryons. This makes a certain amount of sense in that leptons do not have excitations other than the generation structure. One could also imagine looking at the generation structure of the baryons. For such an analogy, one would want to compare stuff like (ddd,sss,bbb) and (uuu,ttt,ccc). Unfortunately, these more charmed states do not have very good data.

When a baryon has only two or fewer excitations, we suppose that the others are either yet to be detected, or are hidden as we will discuss later. For this program, a worse situation is when an excitation comes in a multiplicity greater than 3. For the baryons, this happens only one time, with the [tex]N_{1/2+}[/tex]. The fourth state carries only one reliability star. In the PDG data, this means that "evidence of existence is poor". Accordingly, we will ignore this state.

With the leptons, we saw that the angle 0.22222204717 had something to do with the difference between the charged and neutral leptons. We suppose it has to do with the Weinberg angle. The charged leptons and the neutral ones differed by pi/12 = 15 degrees. We therefore speculate that the excitations of the baryon resonances will carry this same relation, that is, that they will have angular dependency of the form:
[tex]\cos(2n\pi/3 + \delta + m\pi/12)[/tex].
where m depends on the resonance.

Adding 8 to m is the same as subtracting 1 from n, so we need only consider 8 different values of m, for instance, from 0 to 7. Since the cosine is an even function, we cannot distinguish between positive and negative angles. This causes a reflection in the data. Consequently the algorithm for finding the angle from the mass data will return angles from 0 to 60 degrees rather than 0 to 120 degrees. As a result, the cases for m > 3 are folded over those same 60 degrees and we will bin the calculated values into the following 8 bins

[tex]\begin{array}{rcr}
\delta + 7\pi/12&==& 2.27\\
\delta + 0\pi/12&==&12.73\\
\delta + 6\pi/12&==&17.27\\
\delta + 1\pi/12&==&27.73\\
\delta + 5\pi/12&==&32.27\\
\delta + 2\pi/12&==&42.73\\
\delta + 4\pi/12&==&47.27\\
\delta + 3\pi/12&==&57.73\end{array}[/tex]

For example, in the first line, [tex]\delta + 7\pi/12[/tex] gives 117.73 degrees. Adding 1 to n is the same as subtracting 120 degrees from the angle, so this is the same as -2.27 degrees, which is indistinguishable from +2.27 degrees because the cosine is an even function.

Note that the first and last bins are very close to 0 and 60 degrees. An angle of 0 degrees corresponds to a degenerate case with two excitations at the lower mass value while the angle 60 degrees puts the degeneracy at the upper mass value. These degeneracies would correspond to excitations of baryons that only appear with two masses.
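
The folding rule can be sketched in a few lines (folded_angle is a name introduced here):

```python
from math import pi

delta = 0.22222204717 * 180 / pi     # ~12.73 degrees

def folded_angle(m):
    """Fold delta + m*15 degrees into [0, 60]: reduce mod 120 (shifts of n)
    and reflect, since the cosine is even."""
    a = (delta + m * 15.0) % 120.0
    return a if a <= 60.0 else 120.0 - a

for m in range(8):
    print(m, round(folded_angle(m), 2))
# m = 0..7 reproduces the 8 bins:
# 12.73, 27.73, 42.73, 57.73, 47.27, 32.27, 17.27, 2.27
```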

The above set of bins has gaps of length 4.54 and 10.46 degrees. The mean-square deviation for a random value in a gap of length D is:
[tex]\frac{2}{D} \int_{x=0}^{D/2} x^2 dx = D^2/12[/tex]
The 4.54 degree gaps will be hit 4.54/15 of the time, while the 10.46 degree gaps will be hit 10.46/15 of the time. The overall rms is therefore:
[tex]\left(\frac{4.54^3 + 10.46^3}{12\cdot 15}\right)^{1/2} = 2.62[/tex]

I will present the PDG data in the next post.
 
Last edited:
  • #322
Baryon Excitations part II, PDG data

Here's the results of the calculations:
[tex]\begin{array}{lccccc|l}
Bin/Set & \mu_v & \mu_s & \delta&Error & L_{IJ} &Notes \\ \hline
m=7 & & & 2.27 & & & Hidden \\ \hline
m=0 & & &12.73 & & &\\
e,\mu,\tau & 17.716 &25.05 &12.73 & & & ****, ****, ****\\
N_{1/2-} & 41.89 & 3.92 &12.97 &+0.24 & S_{11} & ****, ****, *\\
\Lambda_{3/2-} & 42.77 & 5.58 &12.67 &-0.06 & D_{03} & ****, ****, *\\\hline
m=6 & & &17.27 & & & \\
\Sigma_{3/2-} & 41.52 & 2.45 &16.08 &-1.19 & D_{13} & ****, ***, **\\
N_{3/2-} & 41.95 & 3.87 &19.22 &+1.95 & D_{13} & ****, ***, **\\ \hline
m=1 & & &27.73 & & & \\
\Sigma_{1/2-} & 42.33 & 2.60 &23.03 &-4.70 & S_{11} & ***, **, *\\ \hline
\end{array}[/tex]

[tex]\begin{array}{lccccc|l}
Bin/Set & \mu_v & \mu_s & \delta&Error & L_{IJ} &Notes \\ \hline
m=5 & & &32.27 & & & \\
\Delta_{1/2-} & 43.51 & 3.36 &31.43 &-0.84 & S_{31} & ****, **, *\\
\Delta_{3/2+} & 39.80 & 5.16 &35.70 &+3.43 & P_{33} & ****, ***, ***\\ \hline
m=2 & & &42.73 & & & \\
\Sigma_{3/2+} & 41.90 & 4.95 &41.57 &-1.16 & P_{13} & ****, **, *\\
N_{1/2+} & 36.69 & 6.34 &42.65 &-0.08 & P_{11} & ****, ****, ***\\
\Sigma_{1/2+} & 39.55 & 5.23 &43.22 &+0.49 & P_{11} & ****, ***, **\\
\Lambda_{1/2-} & 40.21 & 2.81 &43.51 &+0.78 & S_{01} & ****, ****, ***\\ \hline
m=4 & & &47.27 & & & \\
\Lambda_{1/2+} & 38.74 & 5.46 &47.46 &+0.19 & P_{01} & ****, ***, ***\\ \hline
m=3 & & &57.73 & & & Hidden\\ \hline
\end{array}[/tex]

The units of [tex]\mu_v^2, \mu_s^2[/tex] are MeV. The next column is the calculated delta value, then the error. The final column gives notes. The asterisks are from the PDG and describe how certain the three states are. I've split the table into two because the PF LaTeX editor just couldn't quite handle it as one.

The rms error is 1.88 degrees, somewhat below the expected 2.623. The worst fit is for the [tex]\Sigma_{1/2-}[/tex], which coincidentally also happens to carry the worst asterisk rating from the PDG. The second worst fit is the [tex]\Delta_{3/2+}[/tex]. While this set is well supported by experiment, it includes the [tex]\Delta(1600)[/tex] whose mass is only loosely constrained. About this particle, the PDG writes: "The various analyses are not in good agreement." Together, these two bad fits contribute 80% of the error among the 13 sets of 3 masses each.

The [tex]\Sigma_{1/2-}[/tex] excitations, in addition to carrying a high error, are also in a class by themselves. They would seem to fit better in the [tex]\delta + 0\pi/12[/tex] class, with the [tex]N_{1/2-}[/tex], which is also [tex]S_{11}[/tex].

What you're seeing here is ALL the data from the baryons. I suspect that the data will look better when the particle mass ranges are taken into account. My mass calculator does not yet have the software to take errors in the data into account. Before this can be published, it needs to have the errors in the excitations taken into account. Hopefully the bad fits will correspond to loose mass ranges.
 
Last edited:
  • #323
Whoops. I left off the [tex]\Sigma_{1/2+}[/tex]. [edit]Oh, no I didn't![/edit]

Looking at the data, it seems that the really well supported classes are the even ones. Also, there are two low-lying excitations with two masses where one of the two masses is said, in the PDG, to be possibly doubled. These would be the [tex]N_{3/2-} N(1700), N(2080)^2 = D_{13}[/tex] and the [tex]N_{3/2+}N(1720)^2,N(1900) = P_{13}[/tex].

I've finished the user interface for an online applet that will compute Koide parameters with error bars, but I've not yet put the math code into it. I've got that code in another program so it's just a matter of putting it in there and ironing out any bugs. But I really need to get back to the day job.

Carl
 
Last edited:
  • #324
Okay, I've got a tool that allows you to compute error bars for these Koide type parameterizations of three masses.

In addition, the mesons are famous for being messy with duplicate masses hard to distinguish. One ends up with six masses instead of three. The hope is that one can split those six mesons into two groups of three that are decent. Accordingly, the tool holds six masses at the same time and automatically steps through the various permutations:
http://www.measurementalgebra.com/KoideCalc.html

Source code is available at the above.

Using new tools is difficult. I've initialized the above program with the data for the electron, muon and tau. So all you have to do to calculate your first error bars is hit the "KOIDE" button. Also, I've set it up to give the angles in degrees rather than radians.
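
For reference, the fit itself is straightforward (my own reconstruction, not the applet's source code): the three parameters fall out of a discrete Fourier transform of the square roots of the masses.

```python
from math import atan2, cos, hypot, pi, sin, sqrt

def koide_params(m1, m2, m3):
    """Fit sqrt(m_n) = mu_v + mu_s*cos(2*pi*n/3 + delta), n = 1, 2, 3.
    Returns (mu_v, mu_s, delta); a reconstruction, not CarlB's code."""
    s = [sqrt(m1), sqrt(m2), sqrt(m3)]
    mu_v = sum(s) / 3
    # project the residuals onto cos and sin of 2*pi*n/3
    a = (2 / 3) * sum((sn - mu_v) * cos(2 * pi * n / 3)
                      for n, sn in zip((1, 2, 3), s))
    b = (2 / 3) * sum((sn - mu_v) * sin(2 * pi * n / 3)
                      for n, sn in zip((1, 2, 3), s))
    return mu_v, hypot(a, b), atan2(-b, a)

# electron, muon, tau (MeV): recovers mu_v ~ 17.716,
# mu_s/mu_v ~ sqrt(2), delta ~ 0.2222 rad (~12.73 deg)
mu_v, mu_s, delta = koide_params(0.51099892, 105.658369, 1776.99)
print(mu_v, mu_s / mu_v, delta)
```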
 
Last edited:
  • #325
CarlB said:
In addition, the mesons are famous for being messy with duplicate masses hard to distinguish.

Indeed that is the idea, isn't it? We have six mesons with charge +1, due to combinations of three families of antidown quarks and two families of up quarks. But they have spin 0; then we have the same number of degrees of freedom as in the case of leptons of charge +1.

Have you spotted now some interesting pattern in the mesons, Carl? I tried some pages ago, up in the thread, and I was not very happy.
 
  • #326
Dr Rivero,

Right now I'm busily working on the theoretical side rather than the phenomenological side. I wrote up the software so that one could divide up a set of six mesons into two groups each with decent Koide numbers. But I don't think that alone would be very convincing. To do it right, I think you have to split six states into two groups of three in such a way that the other properties of the states make sense.

There are four S=C=0 mesons for which only three excitations exist, the pi, the omega, the f(1) and the rho(3). Their mass data are:

[tex]\begin{array}{cccclll}
\pi & \pi &\pi(1300) &\pi(1800) &= 134.9766, &1300(100), &1812(14)\\
\omega & \omega(782) &\omega(1420) &\omega(1650) &= 782.65, &1425(25), &1670(30)\\
f(1) & 1285 &1420 &1510 &= 1281.8(0.6), &1426.3(0.9), &1518(5)\\
\rho(3) & \rho(1690) &\rho(1990) &\rho(2250) &= 1688.8(2.1), &1982(14), &2250(?100?)
\end{array}[/tex]

The rho is ugly in that two of its states are single star, and they don't give limits on the mass of the last one; I've arbitrarily put ±100 MeV. Typing the data into the Koide calculator, the resulting angles are (in degrees, min, typ, max):

[tex]\begin{array}{c|ccc}
\pi & 45.5 & 48.5 & 51.3\\
\omega & 43.8 & 46.6 & 49.5\\
f(1) & 36.9 & 38.0 & 39.1\\
\rho(3)& 26.1 & 32.7 & 41.8
\end{array}[/tex]

All but the f(1) are consistent with delta angles, that is, 47.27 and 32.27 degrees. The f(1), which unfortunately has the most accurate mass measurements, is between 32.27 and 42.73. The errors are +1.23, -0.67, -4.27, -0.43. The rms error is 2.27, less than what chance would suggest, but not by a lot.

When you read the description of the f(1) resonances, they are at least pretty weird. Their lightest entry is already fairly heavy; maybe they are something more complicated. The PDG commentary on the states is interesting. See page 3:
http://pdg.lbl.gov/2006/listings/m027.pdf
 
Last edited:
  • #327
I think that it would be more useful if you had some physical theory to tune these numbers and equations into. Because without one, what use will the numbers and equations be anyway?
 
  • #328
verdigris, I can think of some uses: even without a theory, it is an attack on the ideology of GUT unification at high scales, because in such a setup the masses run via the renormalisation group and you should not be able to find any simple relationship at low energy. And we find some.

The problem was that the best model builders fell in love with this business of Very High Energy unification, and most of the standard material is written from such approaches. If you set aside this point of view, Carl's (and others'?) effort is not far from the hopes of group-theory-based unification. His adopted Fermion Cube holds spinors in the same way that it was done in the late seventies (or early eighties). The unification for one family needs to be extended in a way that produces three generations and no more, and this is the usual problem in model building.

(Note for instance the idea of using spinors, or fermionic creation operators, to build the representation. Using six of them you get 32 fermionic degrees of freedom, with the right charges of one generation, but if you add another two then you get not 96=32*3 but 128=32*4. Thus you need a very exotic symmetry breaking scheme for the extra two generators, or alternatively a different way to produce the generation-wise symmetry).
 
Last edited:
  • #329
General Mass Formula:

We may have neglected our own mass formula, which we already had at the start of this thread. New is a possible link to real physics, which I will talk more about in coming posts.

[tex]M(a,b)\ =\ a\pi-b/\pi\ =\ 2\sqrt{ab}\ \sinh\left( \ln\left(\pi \mbox{\Large $\sqrt{\frac{a}{b}}$}\ \right)\right)[/tex]

The inputs are always two small integers and the output is the log ratio of two masses. Almost all the elementary combinations are close to actual log ratios of the (base states) of existing particles:

Code:
.
M(0,2) : log(mτ/mp)   --->   0.636619 : 0.638635   =   0.316%
M(1,1) : log(mτ/mμ)   --->   2.823282 : 2.822461   =   0.029%
M(1,2) : log(mτ/mπ)   --->   2.504972 : 2.544107   =   1.562%
M(1,3) : log(mp/mμ)   --->   2.186662 : 2.183828   =   0.129%
M(1,4) : log(mp/mπ)   --->   1.868353 : 1.905472   =   1.986%
M(2,2) : log(mπ/me)   --->   5.646565 : 5.609955   =   0.652%
M(2,3) : log(mμ/me)   --->   5.328255 : 5.331598   =   0.062%
M(3,4) : log(mτ/me)   --->   8.151538 : 8.154063   =   0.030%
M(4,2) : log(mw/me)   --->   11.92975 : 11.96646   =   0.307%
M(3,6) : log(mp/me)   --->   7.514918 : 7.515427   =   0.0068%

me  =  electron
mμ  =  muon lepton
mτ  =  tau lepton
mπ  =  pion (+/-) 
mp  =  proton
mw  =  W-boson
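
The table rows and the sinh identity are easy to verify (a sketch using the PDG-era mass values quoted later in the thread):

```python
from math import log, pi, sinh, sqrt

def M(a, b):
    """Hans's M(a, b) = a*pi - b/pi; the sinh form is algebraically identical."""
    return a * pi - b / pi

# the two forms in the post agree (for a, b > 0):
a, b = 3, 6
assert abs(M(a, b) - 2 * sqrt(a * b) * sinh(log(pi * sqrt(a / b)))) < 1e-12

# a couple of rows of the table:
me, mmu, mtau, mp = 0.51099892, 105.658369, 1776.99, 938.27203
print(M(3, 6), log(mp / me))     # ~7.514918 vs ~7.515427
print(M(1, 1), log(mtau / mmu))  # ~2.823282 vs ~2.822461
```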

The possible relation with real physics would be this:

[tex]M(a,b)\ =\ \sum_{k\ =-\infty}^\infty\ \mbox{\huge
J}_k(2\sqrt{ab})\ \left(\pi \mbox{\Large $\sqrt{\frac{a}{b}}$}\
\right)^k[/tex]

[tex]e^{ieA \sin(\omega t)}\ =\ \sum_{k\ =-\infty}^\infty\ \mbox{\huge
J}_k(eA)\ \ e^{ik\omega t}[/tex]

The latter expression describes the non-perturbative interaction factor with a sinusoidal field.

Regards, Hans
 
Last edited:
  • #330
verdigris said:
I think that it would be more useful if you had some physical theory to tune these numbers and equations into.

There is a physical theory behind this. It's based on Schwinger's measurement algebra. The equations are rather distant from what you can get with the usual physics and so is the theory. I've written most of a book dedicated to explaining the principles:
http://brannenworks.com/dmaa.pdf

I know that no one has read the above book with any degree of care because if they did, they'd have pestered me with questions, complaints, pointed out typos, and suggested improvements. People are too busy to spend a lot of time reading an amateur's textbook.

What I do get are comments from people who don't understand my theory, haven't carefully read my many documents, do not understand much about Clifford algebras, but who want to use the equations for promoting their own physics theories.

I don't blame them for this. It is very similar to what I did with Koide's original equation. After a couple of hours of playing with it, I saw that it had an easy derivation in terms of the primitive idempotents of Clifford algebras and ran with it. The new interpretation put his equation into eigenvalue form and I posted it here. But I've completely ignored Koide's own work in explaining the equation.

There are something like 30,000 physicists on the planet. I think that the papers of only about 1000 of them are carefully read and studied. The other 29,000 write stuff that is ignored.
 
  • #331
A mass relation for all six principal charge 1 particles:

[tex]
\begin{array} {|ccc|c|c|c|c|c|c|c|}
\hline
& & & & & & & & & \\
& & \ \ &\ \ \frac{0}{\pi}\ \ &\ -\frac{1}{\pi}\ &\ -\frac{2}{\pi}\ &\ -\frac{3}{\pi}\ &\ -\frac{4}{\pi}\ &\ -\frac{5}{\pi}\ &\ -\frac{6}{\pi}\ \\
& & & & & & & & & \\
\hline
&0\ \pi & &2VeV&\cdot&\cdot&\cdot& & &\cdot\\
\hline
&1\ \pi & &\cdot&\cdot&\cdot&\cdot& W &\cdot&\cdot\\
\hline
&2\ \pi & & p &\cdot&\tau &\cdot&\cdot&\cdot&\cdot\\
\hline
&3\ \pi & &\cdot&\cdot&\cdot& \mu & \pi^\pm &\cdot&\cdot\\
\hline
&4\ \pi & &\cdot&\cdot&\cdot& &\cdot&\cdot&\cdot\\
\hline
&5\ \pi & & & &\cdot&\cdot&\cdot&\cdot& e \\
\hline
\end{array}
[/tex]We can put all six principal charge 1 particles in a simple 2D grid.
All grid positions with "." are forbidden via a simple rule that says:

"No two pairs of particles may have the same mass ratio."

The log mass ratio calculation is: [itex] Y\pi -X/\pi[/itex], where X and Y are the
2D grid's axes. The origin of the grid is 2VeV (Vacuum expectation Value). Some examples:

1) Electron-Proton mass ratio: log(1836.1526726) (natural log)
7.515427 = experimental
7.514918 = calculated = [itex] 3\pi -6/\pi[/itex]
accuracy: 0.0000677

2) Electron mass ratio with 2VeV (Vacuum exp. Value): log(963699)
13.77853 = experimental
13.79810 = calculated = [itex] 5\pi -6/\pi[/itex]
accuracy: 0.00142

3) Electron-Muon mass ratio: log(206.7682838)
5.331598 = experimental
5.328255 = calculated = [itex] 2\pi -3/\pi[/itex]
accuracy: 0.000627

4) Proton-Pion mass ratio: log(6.72258237)
1.905472 = experimental
1.868353 = calculated = [itex] \pi -4/\pi[/itex]
accuracy: 0.01986

5) Electron W-boson mass ratio: log(157387)
11.96646 = experimental
11.92975 = calculated = [itex] 4\pi -2/\pi[/itex]
accuracy: 0.00307

Try it yourself!

Regards, Hans.

[tex]
\begin{array} {|clc|c|rc|}
\hline
& & & & & \\
& electron & & e & 0.51099892(40)& MeV \\
& muon\ lepton & & \mu & 105.658369(9) & MeV \\
& tau\ lepton & &\tau & 1776.99(29) & MeV \\
& pion\ \pm & & \pi & 139.57018(35) & MeV \\
& proton & & p & 938.27203(8) & MeV \\
& W boson & & W & 80398(25) & MeV \\
& & & & & \\
\hline
\end{array}
[/tex]
 
Last edited:
  • #332
Hans de Vries said:
Try it yourself!

I've violated our tradition of ignoring ideas that don't immediately strike us as useful for our own silly ideas and tried it myself.

Your accuracy numbers are exaggerated. You're using the natural logs of the masses. And your formula is linear in those logs. So there's no reason to divide by the logarithm.

You are not fitting to a linear formula like y = ax + b where you would be testing for, for example, b = 0. In such a case it would make sense to divide the error by y because you would want a relative error.

Instead, you are fitting a sort of Diophantine equation and the errors are compared to integers. Another way of putting this is that the way you are calculating errors, the larger the ratio, the more accurate your ratios will be. But the steps between different choices of [tex]m-n/\pi[/tex] are constant, so the expected error does not depend on the ratio.

Here, let me make an error calculation the way you are doing it. The year is 2007, which turns out to have approximately a worst case error for fitting to stuff with a small value for [tex]n/\pi[/tex]. We find that:

[tex]639*\pi - 1/\pi = 2007.16[/tex]

The way you are calculating errors, this would have an accuracy of

[tex]0.16/2007 = 0.0000794[/tex]

Do you really want to claim this for a fit to 2007? By the way, a better approximation for 2007 is:

[tex]639*\pi - 1.5/\pi = 2007.00024[/tex]

Let me put it this way: if you were claiming that the ratios were integers, then the error would obviously be the difference between a ratio and the nearest integer. The worst you could do would be 1/2, and this is the number you should divide your errors by, not, for example, 2007 or whatever the nearest integer happens to be.

When one describes the points on the real line of the form [tex]n\pi - m/\pi[/tex] for small values of n and m, one finds that they are mostly separated by [tex]1/\pi = 0.3183[/tex]. When one assigns random numbers to the nearest one of these, the worst one can do is half the interval, or [tex]1/(2\pi) = 0.1591[/tex] so this is the number you need to divide the differences by, not the log of the mass ratio.

With these changes, the mass ratios you've listed have errors as follows:

0.3%
12.3%
2.1%
23.3%
23.1%

I think that this is impressive enough as it is. What I would like to see is the results of a computer program that can find these sorts of fits, and see various randomized data thrown at it.
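Carl's normalization is easy to sketch in code: find the nearest grid point nπ − m/π (with m restricted to small integers, as in Hans's list), and divide the miss by half the typical spacing, 1/(2π). The search ranges here are my own guesses, not part of either poster's method:

```python
import math

def nearest_grid_error(x, n_max=700, m_max=6):
    """Distance from x to the nearest n*pi - m/pi, normalized by 1/(2*pi),
    i.e. by half the typical spacing between neighboring grid points."""
    best = min(abs(x - (n * math.pi - m / math.pi))
               for n in range(n_max + 1) for m in range(m_max + 1))
    return best * 2 * math.pi  # 1.0 = worst case, mid-gap

# The three mass-ratio logs visible above
for x in (5.331598, 1.905472, 11.96646):
    print(f"{x}: {100 * nearest_grid_error(x):.1f}%")

# Carl's 2007 example lands almost exactly mid-gap: ~100% error
print(f"2007: {100 * nearest_grid_error(2007):.1f}%")
```

The three ratios come out near 2.1%, 23.3% and 23.1%, matching Carl's renormalized list, while 2007 comes out essentially 100%.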

Now the other thing I wanted to point out is that the mass formula you have here is not at all incompatible with the Koide formula or the formulas that I've described. Furthermore, the Koide formula ends up putting an angle of 0.22222204717(48) radians into an exponential in the mass matrix for the charged leptons. This suggests that taking natural logs of masses is likely to be a useful thing to do.

Carl
 
  • #333
CarlB said:
Your accuracy numbers are exaggerated. You're using the natural logs of the masses. And your formula is linear in those logs. So there's no reason to divide by the logarithm.

Carl,

These accuracies correctly describe the number of prediction bits
according to information theory. Don't forget that a dynamic range is
needed in addition to the precision. In terms of floating point numbers:
you need an "exponent" as well as a "mantissa".

Let us simply do the calculation: Take example 1:

Hans de Vries said:
1) Electron-Proton mass ratio: log(1836.1526726) (natural log)
7.515427 = experimental
7.514918 = calculated = [itex]3\pi-6/\pi[/itex]
accuracy: 0.0000677

The number of bits predicted is -log2(0.0000677) = 13.850 ~ 14 bits.

If I were to express the number exp(7.514918) = 1835.2181, which has an
accuracy of 0.000509, as a floating point number, then I would need:

-log2(0.000509) = 10.939 ~ 11 bits for the mantissa,

but I also need another 4 bits for the exponent which would have the
value 10 for 2^10 = 1024 to express the range between 1024 and 2048
in which the value 1835.2181 falls.

To be entirely exact: for the exponent I need log2(10) = 3.321 bits.

Now 10.939 + 3.321 bits ~ 14 bits or the same number of bits as in the
case of the logarithm!

The total number of predicted bits in the 5 relations is 50. There are
six independent relations which provide a total of 61 bits. Good for
18.4 correct decimal digits.

Now we are also inputting information here which we need to subtract:
the grid positions. The information we provide for the six
relations = 6 x ( log2(5) + log2(6) ) = 29.441 bits.

Subtract these from the 61 bits and we are left with 31.6 bits which is,
by far, the best result I've presented on this entire thread until now...
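Hans's bit arithmetic for example 1 can be reproduced directly (a sketch of the counting argument only, not a judgment on it):

```python
import math

def bits(err):
    """Number of correctly predicted bits for a relative error err."""
    return -math.log2(err)

# Accuracy of the log-form prediction: 7.514918 vs 7.515427
log_bits = bits(0.0000677)           # ~13.85 bits

# Same prediction as a floating point number: mantissa + exponent
mantissa_bits = bits(0.000509)       # ~10.94 bits for exp(7.514918) = 1835.2181
exponent_bits = math.log2(10)        # ~3.32 bits: 2**10 = 1024 <= 1835 < 2048

print(round(log_bits, 2), round(mantissa_bits + exponent_bits, 2))
```

Both routes give roughly 14 bits, which is Hans's point about the logarithm already containing the dynamic range.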

Now here is something else very interesting. The exclusion rule: Note
that almost all grid positions are forbidden by it and a random placement
would very likely break the rule. Thus, the 29 bits number would be
much lower if the rule is correct.


Regards, Hans
 
Last edited:
  • #334
this thread needs only one number, 42. this will probably be the page number when it gets locked.
 
  • #335
whatta said:
this thread needs only one number, 42.

Dear Whatta,

This thread was initiated to allow, (but also to contain) posts with
the purpose of the archival of numerical coincidences. These
post are allowed (here on this thread) within the following restrictions:

1) The reported numerical coincidences should be independent of
the units used (meters, kg..). Only coincidences with dimensionless
numbers are allowed.

2) The numerical coincidences must be independent of the number
system used, like, decimal numbers, binary, hexadecimal.

3) The reported numerical coincidence should have sufficient
"predictability". That is, it should produce significantly more result
bits than the number of bits used as input.

Kind Regards, Hans de Vries
 
Last edited:
  • #336
Hans de Vries said:
These accuracies correctly describe the number of prediction bits according to information theory.

I would like to think that information theory is appropriate for the calculation of a fit, but I suspect that even a well-accepted calculation like the g-2 value for the electron would fail an information theory test. That is, the number of bits required to describe the thousands of Feynman diagrams, or the rules to generate those Feynman diagrams along with coupling constants plus etc., etc., etc., will be greater than the number required to simply code g-2 to experimental error.

But if you insist on using information theory, you need to count the number of bits needed to describe your formula, along with the number of bits used to describe the exponent and the number of bits needed to describe the mantissa. This will blow up the accuracy. If the formulas really were that good they wouldn't be ignored so much. You haven't faulted my calculation for the approximation of [tex]e^{2007}[/tex] in the form [tex]e^{n\pi - m/\pi}[/tex] because it was correct.

The calculation I provided has the useful feature that for masses distributed uniformly over relatively small ratio spaces, for example, if the mass ratio is from [tex]e^{12.5} < R < e^{13.0}[/tex], it will give approximately the correct probability for getting a hit, in that the error will be approximately uniformly distributed from 0 to 100%. This feels to me like the right way of looking at it.

There are some other things I forgot to mention. The vertical column would make more sense to me if it was turned upside down relative to the pi values. The way it's set up, increasing values of the n cause decreases in masses and that is counterintuitive. So the electron would have the 0\pi rather than 5\pi.

Second, I'm not sure what "principle" means with respect to charge 1 particles. Why not include the [tex]\Delta^+[/tex]? Did you try to fit other charge +1 particles and they didn't fit, or what? I guess there are upwards of a thousand meson / baryon resonances and excitations with charge +1.

Carl
 
  • #337
CarlB said:
But if you insist on using information theory, you need to count the number of bits needed to describe your formula.

I see I was too conservative on the input bits side.

There are only 30 grid positions which are the same for each result.
6 of the 30 are hits. I should have discounted the input space only once
and not 6 times. This brings the prediction back to 61-log2(30)~56 bits

The expression itself, discounting the small integers, is about 10 bit, which
is three times a basic operation like +-x/ (two bit each) plus the use of
a single elementary constant (pi) which we presume to be in small group
together with the small integers. (3-4 bit)

This leaves us with ~45 bits of prediction, which is three to four times more
than the alpha result.

CarlB said:
There are some other things I forgot to mention. The vertical column would make more sense to me if it was turned upside down relative to the pi values. The way it's set up, increasing values of the n cause decreases in masses and that is counterintuitive. So the electron would have the 0\pi rather than 5\pi.

It was like this until I decided to put the Vacuum expectation Value at the
origin of the grid.

CarlB said:
Second, I'm not sure what "principle" means with respect to charge 1 particles. Why not include the [tex]\Delta^+[/tex]? Did you try to fit other charge +1 particles and they didn't fit or what? I guess there are most of a thousand meson / baryon resonances and excitations with charge +1.

Many, many indeed, and there are only 30 grid positions...

It's more that the particles select themselves by fitting. The resonances
and excitations don't fit but the base states do. Hadrons with masses
dominated by a single (fractional charge) quark probably won't fit either,
but I have yet to try.

Heck, this formula is one of the subjects of the very first post in this
thread and I never even bothered to try a hadron mass because I was
convinced that these could only have incredibly complicated
QCD-determined mass values.

But then, looking at the infamous 'proton spin crisis', something can be
incredibly complex inside, and then, all the small spin contributions from
the quarks, the gluons, the sea-quarks and all the various angular
momenta add up to a very simple number determined by geometry only.


Regards, Hans
 
  • #338
Hans, I've been playing around with the numbers and now I see that I was too hasty to judge your work as numerology. (I didn't say so, but that was what I was thinking.) In fact, now that I understand the method better, I think I can contribute to this exciting branch of phenomenology. Before, I just had trouble seeing what the heck an exponential could be doing in the mass spectrum.

First, I guess I should mention some theory that drove me in this particular direction. The fact is that there is a lot of periodicity in the elementary particle masses having to do with 3s. 3 is sort of midway between pi and e. The next important number smaller than e is 5/2. Also, 5/2 is the average of the two smallest prime numbers, which suggests that p-adic field theory could be important here, just like the string theorists say. And I like to think of QFT as a probability related theory so equations such as

[tex]5/2 = 1 + \sum_{n=0}^\infty 3^{-n}[/tex]

naturally led me to explore the use of 5/2 in the elementary particles. Enough for the theory (maybe it still needs some work); so here's my formula:

[tex]m_{N,M} = (5/2)^{3(N - M/(4\pi^2))}.[/tex]

This formula works very well for the charged leptons, with the electron, muon and tau taking N=0,2,3, and M= 0, 2, and 1, respectively. I need not point out how suggestive these small constants are! The experimental and calculated exponents are:

[tex]\begin{array}{rcc|l|l|l|}
&N&M& 3(N-M/(4\pi^2)) & \log_{5/2}(m_1/m_2) & accuracy \\ \hline
\mu/e &2& 2& = 5.848018& 5.818676 & 0.00501\\
\tau/e &3& 1& = 8.924009& 8.898993 & 0.00280\\
\tau/\mu&1&-1& = 3.075991& 3.0803163& 0.00141\\ \hline
\end{array}[/tex]

The above makes a grid with 4x3 = 12 boxes, three of which are filled. This is suggestive, especially when you look at those tight accuracy figures and the very convincing division by pi. But I don't think that this is nearly enough to write a paper on.
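The exponents can be checked in a few lines, using the minus-sign form of the formula above, which is the sign that reproduces the quoted numbers:

```python
import math

LOG_52 = math.log(5 / 2)          # natural log of the base 5/2
FOUR_PI2 = 4 * math.pi ** 2

def exponent(n, m):
    """The exponent 3*(N - M/(4*pi^2)) from the formula above."""
    return 3 * (n - m / FOUR_PI2)

def log52_ratio(m1, m2):
    """Log base 5/2 of a mass ratio."""
    return math.log(m1 / m2) / LOG_52

e, mu, tau = 0.51099892, 105.658369, 1776.99   # MeV
print(exponent(2, 2), log52_ratio(mu, e))      # mu/e
print(exponent(3, 1), log52_ratio(tau, e))     # tau/e
print(exponent(1, -1), log52_ratio(tau, mu))   # tau/mu
```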

The real test is to see if we can put the other charged particles into the same formula. Admittedly, one might suppose that the color force, being a sort of charge, would contribute something to the mass of a quark or baryon, who really knows? In phenomenology, it makes sense to boldly go where no sane man has gone before and simply see which particles naturally fit together.

[tex]\begin{array}{|ccc|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&&&&&&&&&\\
&&M=&0&1&2&3&4&5&6&7&8&9&10&11&12\\
\hline
&N=0&&&&e&&&&&&&&&&\\
\hline
&N=1&&3d&&&&&&&&&&3u&&\\
\hline
&N=2&&\pi^+&&&&\mu&&&&&&&&\\
\hline
&N=3&&&&&\tau&\Omega&&&\Xi&\Delta&\Sigma&\Lambda&&p\\
\hline
\end{array}[/tex]

In the above, I'm not so sure about the up and down quark. The masses aren't known very accurately. I've made the assumption that the masses that one needs to use are at the low end of the PDG figures. And since the quarks also have the color force (as well as E&M), I've multiplied their masses by 3.

Only one meson fell into the fit, but it is the most fundamental one. I guess this is evidence that the mesons really are a mess. But I could fit all of the basic (i.e. no charm, bottom or top) low lying baryons. Furthermore, no two particles ended up in the same slot, which is cool.

Some of the fits that I checked are remarkably good. For example, the Lambda/e accuracy is 0.00013 and the 3u/Omega is 0.00041. That last ratio really is stunning accuracy given that the mass of the up is listed as 1.5 to 4.0 MeV in the PDG! I'm not sure about how this happened, I'm just crunching numbers on my calculator.

Anyway, the formula fits all the principle baryons, all the charged leptons, the lightest quarks, and the lightest meson.

Hans de Vries said:
There are only 30 grid positions which are the same for each result. 6 of the 30 are hits. I should have discounted the input space only once and not 6 times. This brings the prediction back to 61-log2(30)~56 bits

I'm kind of a dummy and I don't really see how your calculation here makes any sense. But since I've got 12 hits in 52 grid positions, it does look like this should be okay. In addition, all the fermions are bunched together on one side of the diagram so it kind of makes me wonder if it wouldn't be useful to define a "baryon bit" and use three quantum numbers to define the mass exponent.

Hans de Vries said:
The expression itself, discounting the small integers, is about 10 bit, which is three times a basic operation like +-x/ (two bit each) plus the use of a single elementary constant (pi) which we presume to be in small group together with the small integers. (3-4 bit) This leaves us with ~45 bits prediction which is three to four times more as the alpha result.

Since I've got 12 masses listed I bet that I should be really positive. Since I was employed for 15 years designing digital logic, and one of my specialties was coding theory, it really bothers me that I don't understand your bit calculations. Maybe I should hit the books -- I learned theory back in the stone age when we mostly used our fingers. Heck, I still remember punch card codes.

I find it hard to believe that you could send someone your expression in just 10 bits. Of course when mapping expressions to bit sequences, it makes sense to code the important expressions in smaller number of bits.

My HP calculator is fairly efficient general purpose scientific calculator. It has over 32 keys so pressing anything that can be coded in one key stroke costs 6 bits. Looking up a data value takes two keystrokes or 12 bits. Pressing the <enter> key costs 6 bits.

Hans de Vries said:
It was like this until I decided to put the Vacuum expectation Value at the origin of the grid.

Yes, the factor of 2 was brilliant! I'd have never had the idea to triple the quark masses if you hadn't led the way.

enjoy,
Carl
 
Last edited:
  • #339
was that post sarcastic
 
  • #340
CarlB said:
...which suggests that p-adic field theory could be important here...

Yes, Carl. A nice paper on this point is

On the universality of string theory
K. Schlesinger
http://arxiv.org/abs/hep-th/0008217

which introduces the concept of a tower of quantizations. All of String theory fits into the prime 3 on the tower, from the point of view of the true M Theory. In other words, the unified theory can address all scales on an equal footing. The classical landscape is Mathematics Itself.

How do you like that, whatta?
 
  • #341
Kea said:
On the universality of string theory
K. Schlesinger
http://arxiv.org/abs/hep-th/0008217

which introduces the concept of a tower of quantizations. All of String theory fits into the prime 3 on the tower, from the point of view of the true M Theory. In other words, the unified theory can address all scales on an equal footing. The classical landscape is Mathematics Itself.

T. Pengpan and P. Ramond showed in http://arxiv.org/abs/hep-th/9808190 that the 11D supergravity triplet of SO(9) representations sits at the base of an infinite tower of irreps of SO(9), describing an infinite family of massless states of higher spin. They muse that such higher-spin states describe degrees of freedom of M-theory.
 
Last edited by a moderator:
  • #342
whatta said:
was that post sarcastic

Somehow I forgot to give the calculations for the absolute errors and the masses of the predicted particles. Of course these aren't as pretty as the selected ratios, but are probably more of an indication of how good the fit is:

[tex]Calc = 0.5743319321193890 MeV * {(5/2)^{Exp}}[/tex]

[tex]\begin{array}{cccc}
Exponent&particle&Mass&Calc/Mass\\
3(0 - 2/(4\pi^2))&e&0.51099892&0.9778\\
3(1 - 10/(4\pi^2))&3u&4.5&0.9940\\
3(1 - 0/(4\pi^2))&3d&9.0&0.9971\\
3(2 - 4/(4\pi^2))&\mu&105.658369&1.0044\\
3(2 - 0/(4\pi^2))&\pi^+&139.57018 &1.0046\\
3(3 - 12/(4\pi^2))&p&938.27203&1.0126\\
3(3 - 10/(4\pi^2))&\Lambda&1115.683&0.9788\\
3(3 - 9/(4\pi^2))&\Sigma&1189.37&0.9843\\
3(3 - 8/(4\pi^2))&\Delta&1232.0&1.0188\\
3(3 - 7/(4\pi^2))&\Xi&1321.31&1.01845\\
3(3 - 4/(4\pi^2))&\Omega&1672.45&0.9915\\
3(3 - 3/(4\pi^2))&\tau&1776.99&1.0005
\end{array}[/tex]

Getting back to the charged leptons, their masses (and ratios) are exact numbers and one expects that to store them requires an infinite number of binary bits. Our experimental measurements, on the other hand, are inexact, so one can certainly expect that data they contain can be easily compressed.

The charged lepton mass numbers run over a very wide ratio and so the natural way to compress them is by a power series. Koide's formula uses a square root, which is also a compression method and so at first I suspected it as well.

But if Koide were looking for a compression formula rather than physics, he would have done well to begin with the 7th root of masses rather than the square root. Then the masses of the charged leptons are fairly close to [tex]1^7, 2^7[/tex], and [tex]3^7[/tex], and one could write a generation formula in the form

[tex]m_n = (n+f(n) )^7[/tex]

for n=1,2,3. In fact, I'm quite certain that I could find such a formula, and then bend it around to pick up the baryon masses which form such a convenient linear series. But I think my point here is made.

In the face of how easy it is to find compression algorithms for sparse data, where the Koide formula is more convincing is that it is consistent with exactly three generations and no more. The short form for the Koide formula is:

[tex]\sqrt{m_n} = 1 + \sqrt{2}\cos(2n\pi/3 + 2/9 + \epsilon)[/tex]

where [tex]\epsilon = 0.22222204717(48) - 2/9[/tex] and I've left off an overall scaling factor. Since [tex]2(n+3m)\pi/3 = 2n\pi/3 + 2m\pi[/tex], the formula gives exactly three masses so there are only three generations implied. These are the electron, muon, and tau for n the generation number, 1, 2, 3. The [tex]\sqrt{2}[/tex] is what Koide found in 1981, the [tex]\epsilon[/tex] is what I found a year ago.

So the Koide formula is exact to experimental error, and it's not really in a form that is obviously convenient for simply hiding a compression algorithm.
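A quick numerical check of this short form, restoring the overall scale of 17.716 √MeV that Carl quotes later in the thread (so the squared outputs land on the charged-lepton masses):

```python
import math

SCALE = 17.716                       # sqrt(MeV), Carl's overall scale factor
DELTA = 0.22222204717                # the angle 2/9 + epsilon

def koide_sqrt_mass(n):
    """sqrt(mass) for generation n = 1, 2, 3 (electron, muon, tau)."""
    return SCALE * (1 + math.sqrt(2) * math.cos(2 * n * math.pi / 3 + DELTA))

masses = [koide_sqrt_mass(n) ** 2 for n in (1, 2, 3)]
print(masses)   # close to 0.511, 105.66, 1776.99 MeV

# Only three distinct masses: n and n+3 give the same angle mod 2*pi
assert abs(koide_sqrt_mass(4) - koide_sqrt_mass(1)) < 1e-9
```

The small residuals against the measured masses come from the rounded scale factor, not from the angle.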
 
Last edited:
  • #343
"It is shown here that the rest energies and magnetic moments of the basic elementary particles are given directly by the corresponding Planck sublevels."

http://uk.arxiv.org/pdf/physics/0611100

Enjoy.
 
  • #344
This was released by Hans on sci.physics.foundations:

A non-perturbative derivation of the exact value of the SU(2) coupling value g from the standard Electroweak Lagrangian itself.
Hans de Vries, March 30, 2007
http://chip-architect.com/physics/Electroweak_coupling_g.pdf

I wasted two days playing with the charged particle formulas and ended up quite disgusted and angry. Having learned my lesson, I'm leaving this one for others to analyze.

In short, it relates a smaller number of constants but to a higher accuracy and with simpler formulas, which is about what one would expect.
 
  • #345
Hi, Carl

I must say you did impress me with your feverish activity in the last few days.
From experience I know that these extreme bursts of mental activity carry a
risk of ending in the type of exhaustion you're describing here. :blushing:
Relax, Rome wasn't built in a day, they say. I was still considering a
response before starting on this subject.

CarlB said:
This was released by Hans on sci.physics.foundations:

A non-perturbative derivation of the exact value of the SU(2) coupling value g from the standard Electroweak Lagrangian itself.

Hans de Vries, March 30, 2007
http://chip-architect.com/physics/Electroweak_coupling_g.pdf
In short, it relates a smaller number of constants but to a higher accuracy and with simpler formulas, which is about what one would expect.

Before jumping onto this one I should maybe first point out the "somewhat
vague" mathematical relation between the new paper and the numerical
coincidences in these mass ratios:

[tex]
\mbox{\huge $ e^{\left( m\pi-\frac{n}{\pi} \right) } $}\ \ =\
\mbox{mass ratio numerical coincidences}
[/tex]

Where m and n should be small integer values. This can be written as a
power expansion like this:

[tex]
\mbox{\huge $ e^{\left(m\pi-\frac{n}{\pi}\right)} $}\ \ =\ \sum_{k\
=-\infty}^\infty\ \mbox{\huge J}_k(2\sqrt{nm})\ \left( \pi
\mbox{ $\sqrt{\frac{m}{n}}$}\ \right)^k
[/tex]
This now relates to the core of the new paper:

[tex]
\mbox{\huge $ e^{iQ \sin(\omega t)}$}\ \ =\ \sum_{k\
=-\infty}^\infty\ \mbox{\huge J}_k(Q)\ \mbox{\huge $ e^{ik\omega t} $}
[/tex]

The latter is the phase a charged particle acquires in a sinusoidal
electromagnetic (electroweak) field. This is a superposition where
the Bessel coefficients can be interpreted as amplitudes:

J0(x) = amplitude to absorb 0 quanta
J1(x) = amplitude to absorb 1 quantum
J2(x) = amplitude to absorb 2 quanta
J3(x) = amplitude to absorb 3 quanta
...

These Bessel coefficients have the unique property that they
are Unitary for both amplitudes as well as probabilities
for any value of Q:

[tex]
\sum_{k\ =-\infty}^\infty \mbox{\huge J}_k(Q)\ = 1, \quad \ \
\sum_{k\ =-\infty}^\infty \left|\mbox{\huge J}_k(Q)\right|^2\ = 1
[/tex]

Regards, Hans

PS: Try the link below. There's a lot on these Bessel coefficients
in connection with frequency modulation on the internet:

http://images.google.nl/images?hl=e...gle+Search&ie=UTF-8&oe=UTF-8&um=1&sa=N&tab=wi
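Both identities are easy to check numerically. The sketch below computes integer-order Bessel values from the standard integral representation J_k(Q) = (1/π)∫₀^π cos(kτ − Q sin τ) dτ (pure Python, no special-function library assumed) and verifies the Jacobi-Anger expansion and the two unitarity sums:

```python
import cmath
import math

def bessel_j(k, q, steps=2000):
    """Integer-order J_k(q) via (1/pi) * integral_0^pi cos(k*t - q*sin(t)) dt,
    evaluated with the composite trapezoid rule."""
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(k * math.pi))   # endpoint terms t=0 and t=pi
    for i in range(1, steps):
        t = i * h
        total += math.cos(k * t - q * math.sin(t))
    return total * h / math.pi

Q, theta = 1.7, 0.6
ks = range(-25, 26)   # J_k(1.7) is negligible for |k| > 25

# Jacobi-Anger: e^{iQ sin(theta)} = sum_k J_k(Q) e^{ik theta}
lhs = cmath.exp(1j * Q * math.sin(theta))
rhs = sum(bessel_j(k, Q) * cmath.exp(1j * k * theta) for k in ks)
print(abs(lhs - rhs))                           # tiny

# Unitarity for both amplitudes and probabilities
print(sum(bessel_j(k, Q) for k in ks))          # ~1
print(sum(bessel_j(k, Q) ** 2 for k in ks))     # ~1
```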
 
Last edited:
  • #346
I guess I should probably update three things.

First of all, I've been using the constant 17.716 sqrt(MeV). Squaring to get to the units everyone else uses, this is 313.85 MeV.

Nambu uses 35 MeV in his empirical mass formulas. The relation to my 313.85 MeV is that 35 x 9 = 315 MeV. Of course 9 is a power of 3 and powers of 3 are important in my theoretical stuff, the majority of which is not published. Some links for the Nambu theory are:

http://www.google.com/search?hl=en&q=nambu+mass+formula

Also

http://www.arxiv.org/abs/hep-ph/0311031

refers to it as Y. Nambu, Prog. in Theor. Phys., 7, 595 (1952). I've not yet read much on the theory. I'd like to thank Dr. Koide for noting that my mass formulas reminded him of the Nambu stuff.

The Nambu formula has probably been discussed around here but I haven't found it. I'll drop by the local university and read the articles on it sometime in the next week or so. We should discuss the Nambu formulas here or maybe on another thread. Dr. Koide also mentioned the Matsumoto formula, which I've not yet looked up.

Since the mass I'm using comes from the electron and muon masses, I can calculate the "Nambu mass" to much higher accuracy. The number starts out as 34.87 MeV.

Second, on the analogy between the force that composes the electron, muon and tau, and the excitations of the elementary particles: At first I was thinking that the analogy should be strongest when the three quarks making up the baryon were identical, as in the Delta++. But the spin of a Delta++ has to be 3/2 which is different from that of the electron.

In the theory I'm playing with, the 3 preons inside an electron are assumed to be in an S state and can transform from one to another by a sort of gluon. To get that kind of wave function, one should instead look at the baryons that are made up of three different quarks.

Among the low lying baryons, there are two that are composed of one each of u, d, and s. These are the Lambda and Sigma. The charged lepton Koide formula is:

[tex]\sqrt{m_n} = 17.716 \sqrt(MeV) (1 + \sqrt{2}\cos(2n\pi/3 + 0.22222204717(48) ))[/tex]

and the neutrinos by a similar formula (multiplied by [tex]3^{-11}[/tex]), but with the angle [tex]\pi/12[/tex] added to the angle inside the cosine. These are the m=0 and m=1 mass formulas listed above, though the neutrinos are not included above.

To get the analogy between the charged leptons and the baryons as close as possible, one naturally looks for a set of three "uds" baryons that have the same angle as the charged lepton mass formula. Such a triple does exist, it is the [tex]\Lambda_{3/2-} D03[/tex]. The triple consists of the [tex]\Lambda(1520), \Lambda(1690), \Lambda(2325)[/tex]. Putting these into the Koide formula gives the form:

[tex]\sqrt{m_n} = 42.769 + 5.5856 \cos(2n\pi/3 + 0.22186)\;\;\;\sqrt{MeV}[/tex]

The angle is close to the 0.22222204717, though it cannot be distinguished from 2/9. The other two constants are related to the 17.716 constant approximately as
[tex]\mu_v = 42.769 = (1+\sqrt{2}) \;\;\;17.716[/tex]
[tex]\mu_c = 5.5856 = \sqrt{2}\;\;\; 17.716\times 2/9[/tex]

Making the assumption that these are exact allows one to "predict" the associated resonances as:

[tex]\Lambda(1520) = 1520.408[/tex]
[tex]\Lambda(1690) = 1690.673[/tex]
[tex]\Lambda(2325) = 2323.355[/tex]

These numbers are well within the PDG estimates. This is the only uds excitation that falls in the m=0 class. The other Lambda and Sigma excitations have some interesting numbers as well, but are not as nicely suggestive.
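Taking the "exact" values literally, the predicted Λ masses can be reproduced as follows (a sketch of the arithmetic only; the constants (1+√2)·17.716, √2·17.716·2/9 and the angle 2/9 are exactly the exactness assumptions stated above):

```python
import math

SQRT_MEV = 17.716                       # Carl's charged-lepton scale, sqrt(MeV)
MU_V = (1 + math.sqrt(2)) * SQRT_MEV    # assumed-exact "internal energy" part
MU_C = math.sqrt(2) * SQRT_MEV * 2 / 9  # assumed-exact color part
DELTA = 2 / 9                           # assumed-exact Koide angle

def lambda_mass(n):
    """Predicted mass (MeV) of the n-th member of the Lambda(1520) triple."""
    sqrt_m = MU_V + MU_C * math.cos(2 * n * math.pi / 3 + DELTA)
    return sqrt_m ** 2

for n in (1, 2, 3):
    print(f"Lambda_{n}: {lambda_mass(n):.3f} MeV")
```

The outputs land within a few hundredths of an MeV of the quoted 1520.408, 1690.673 and 2323.355, the residue coming from the rounded 17.716.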

The suggestion is that [tex]\mu_v[/tex] comes from the internal energy of the particles. Looking at a quark as a system, its internal (square root) energy is the [tex]1+\sqrt{2}[/tex] number in the charged lepton formulas when you ignore the cosine.

The idea here is that if you ignored the color effects and the energy of the stuff that glues them together, all quarks would weigh the same amount. The "1" is the length of the mass vector that the preons differ in, while the sqrt(2) is the length of the mass vector that they share. This sqrt(2) gets modified by the cosine according to how well they cancel their fields. (And the generations arise from glue effects.)

The [tex]\mu_c[/tex] comes from the color force. The color force between quarks is only 2/9 of the force between the preons. One can provide various unconvincing arguments for why this should be 2/9. Suffice it to say that 2/9 shows up fairly frequently in these formulas.

The third thing I need to mention is that I made an error in a calculation for the delta angles from the baryon excitations. I was making calculations by calculator; this was before I coded it up in Java. There were two excitations that gave particularly bad errors in their delta calculation. The primary change is that these errors decreased considerably and the fit is much better than advertised.

The [tex]\Sigma_{1/2-}[/tex] delta error was -4.7 degrees in the m=1 class, now it is 20.44 and is in the m=6 class with an error of +3.17. The [tex]\Delta_{3/2+}[/tex] error was 3.43. Now the best angle is 34.10 and the error is 1.83 degrees. There are still wide error bands on the calculated angles, but the RMS error is close to halved as these two outliers contributed 80% of the old RMS error.

Eventually I'll write this up in a LaTex article and check the numbers carefully. Right now, I'm amusing myself by alternately pushing from the theoretical and phenomenological sides. Also I should mention that I found and fixed an unrelated minor Java programming error in the Koide calculator.

When I finally get around to writing up the LaTex article, I will try to figure out how Alejandro and Andre wrote the "Gim" symbol in this paper:

http://www.arxiv.org/abs/hep-ph/0505220

and redefine it as a vector, so that mass = |Gim|^2.

There are obvious reasons for expecting powers of e in physics. Powers of 3 are more rare. One way of getting a power of 3 is by exponentiating ln(3). Lubos Motl's blog recently brought the subject of how ln(3) shows up in black hole calculations here:
http://motls.blogspot.com/2007/04/straightforward-quasinormal-calculation.html
 
Last edited:
  • #347
The latest Standard Model prediction for the tau magnetic moment is:

1.00117721(5)

The theoretical value is six orders of magnitude more accurate
than the experimental one, due of course to the short lifetime.


The tau lepton anomalous magnetic moment
S. Eidelman, M. Giacomini, F.V. Ignatov, M. Passera
http://arxiv.org/abs/hep-ph/0702026


Theory of the tau lepton anomalous magnetic moment
S. Eidelman, M. Passera
http://arxiv.org/abs/hep-ph/0701260


Regards, Hans
 
  • #348
Another numerical coincidence of the vertex correction (magnetic anomaly)
in a quite elementary mass ratio. This time the square of the pion mass delta:

[tex]\left|\ \frac{\pi^\pm}{\pi^0} - 1\ \right|^2\ =\ 0.00115821 (26)[/tex]

So we have as numerical coincidences:

0.001159652________ Electron Magnetic Anomaly
0.001159567________ Mass independent Magnetic Anomaly
0.001158692_(27)___ Muon / Z boson mass ratio
0.00115821__(26)___ Pion mass delta squared


The latter two relations agree to about 1.8 sigma. Not as good, but similar,
is this one concerning the proton-neutron mass delta:

[tex]\left|\ \frac{m_p}{m_n} - 1\ \right|\ \ =\ 0.0013765212 (6)[/tex]


0.00131419__(41)___ Muon / W boson mass ratio
0.0013765212_(6)___ Proton-neutron mass delta

Regards, Hans

http://arxiv.org/PS_cache/hep-ph/pdf/0503/0503104v1.pdf
 
Last edited:
  • #349
A pretty amazing coincidence, isn't it? The relation:

[tex]\mbox{\Huge $\frac{m_{\pi^\pm}}{m_{\pi^0}}\ =\ 1+ \left(\frac{m_\mu}{m_Z}\right)^\frac{1}{2}\ =\ 1.0340344(55)$}[/tex]

following from the previous post is as exact as:

1 : 1.0000067 (42)

Where more than half the error is experimental uncertainty.

And secondly: the delta isn't just any value. Its square, which is:

[tex]\mbox{\Huge $\frac{m_\mu}{m_Z}\ =\ 0.001158692(27) $}[/tex]

is to a very high degree equal to the magnetic anomaly, most notably
the mass independent value (without the vacuum polarization terms),
which is 0.001159567.

I've used the latter two of the following pion mass data:

[tex]m_{\pi^\pm} = 139.57018 \pm 0.00035\ MeV[/tex]
[tex]m_{\pi^0} = 134.9766 \pm 0.0006\ MeV[/tex]
[tex]m_{\pi^\pm}-m_{\pi^0} = 4.5936 \pm 0.0005\ MeV[/tex]

Where the difference is experimentally known better than the two
absolute values. So the error I use is the 0.0005 MeV. The
combined error of the charged and neutral pions is larger than
the error in the numerical coincidence!

Regards, Hans.
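The coincidence is quick to reproduce from the pion mass data quoted above (the Z mass, 91187.6 MeV, is the PDG value; it is my insertion, since it is not quoted directly in the thread):

```python
# Masses in MeV
m_pi_neutral = 134.9766
delta_pi = 4.5936            # charged-neutral pion difference, the best-known number
m_mu = 105.658369
m_z = 91187.6                # Z boson mass (PDG value, not quoted in the thread)

# Square of the relative pion mass delta vs. the muon/Z mass ratio
delta_sq = (delta_pi / m_pi_neutral) ** 2
mu_z = m_mu / m_z
print(delta_sq, mu_z)        # both close to the magnetic anomaly ~0.0011596
```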
 
Last edited:
  • #350
Baryons, Mesons, Gluons and QCD Confinement

Dear friends over at Physics Forums:

I have for more than two years been researching the possibility that
baryons may in fact be non-Abelian magnetic sources.

The results of this research are now formally and rigorously presented in
a paper at:

http://home.nycap.rr.com/jry/Papers/Baryon%20Paper.pdf

Among other things, I believe this paper fundamentally solves the
problem of quark and gluon confinement within baryons, and the origin of
mesons as the mediators of nuclear interactions. It may also resolve
the question of fermion generation replication.

I think you guys may enjoy playing with the mass formula which I first develop in (3.7). It bears a resemblance to the Koide formula.

I would very much appreciate your constructive comments.

Best to all.

Jay
_____________________________
Jay R. Yablon
Email: jyablon@nycap.rr.com
Web site: http://home.nycap.rr.com/jry/FermionMass.htm
sci.physics.foundations co-moderator
 
Last edited by a moderator:
