Oh good, I hate losing unconventional talents. I see that you have been working very hard behind the scenes. Good luck and be strong.

Thanks ftr, I'm still there.
Thanks, indeed, with many new insights.
They use a two-parameter Weibull distribution. The parameters are a shape parameter k and a (mass) scale parameter l. They find (equation 3.6), "surprisingly", that the two distributions have the same shape parameter to three decimal places, so they differ only by mass scale. Is this circumstantial evidence that a similar mechanism (e.g. @arivero's waterfall) is behind both sets of yukawas?

Hmm, the main property of the Weibull distribution is that you can integrate it, so perhaps they are just seeing some exponential fitting. As for the coincidence of shape... how are they "fitting" the distribution anyway? Maximum likelihood? For a sample of six points?
Python 3.6.5 (default, Mar 31 2018, 19:45:04) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy.stats as s
>>> import numpy as np
>>> def printStats(data, fit):
...     nnlf = s.weibull_min.nnlf(fit, np.array(data))
...     ks = s.stats.kstest(np.array(data), 'weibull_min', fit)
...     print("Fit:", fit)
...     print("negloglikelihood", nnlf)
...     print(ks)
...
>>> data=[2.3,4.8,95,1275,4180,173210]
>>> printStats(data,s.weibull_min.fit(data, floc=0))
Fit: (0.26861598701150763, 0, 2288.475995797873)
negloglikelihood 51.591787735494115
KstestResult(statistic=0.15963622669415056, pvalue=0.9979920390593924)
>>> data=[0.511,106,1777]
>>> printStats(data,s.weibull_min.fit(data, floc=0))
Fit: (0.37366611506161873, 0, 229.48782534013557)
negloglikelihood 19.233771988350043
KstestResult(statistic=0.23629696537671507, pvalue=0.996122995979272)
>>>
>>> printStats(data,s.weibull_min.fit(data, floc=0,f0=0.26861598701150763))
Fit: (0.26861598701150763, 0, 163.62855309410182)
negloglikelihood 19.44374499168725
KstestResult(statistic=0.25597858377056465, pvalue=0.9893658166203932)
Fit: (0.2698428583536703, 0, 1156.8564935786583)
negloglikelihood 71.49265190220518
KstestResult(statistic=0.14728900912921583, pvalue=0.9897758037009418)
>>> data=[2.3,4.8,95,1275,4180,173210]
>>> printStats(data,s.weibull_min.fit(data))
Fit: (0.37359275206555403, 2.2999999999999994, 39837.607589227395)
negloglikelihood 30.744667740180212
KstestResult(statistic=0.48342279946216715, pvalue=0.08187510735420012)
The paradigm of Tye et al is something like: we consider a landscape of string vacua in which vacua are indexed by fluxes (and other properties), and we suppose that the flux values are sampled from a uniform distribution. But the yukawas depend on the fluxes in an "anti-natural" way (Lubos's word), such that uniformly distributed fluxes translate into Weibull-distributed yukawas (a distribution divergently peaked at zero). "Related distributions" at Wikipedia shows how a uniformly distributed variable can be mapped to an exponentially distributed variable, and then to a Weibull distribution; perhaps they are just seeing some exponential fitting.
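That chain of mappings can be checked directly: if U is uniform on (0,1), then -ln U is exponentially distributed, and l·(-ln U)^(1/k) is Weibull with shape k and scale l. A minimal sketch; the shape and scale values here are just the quark-fit numbers from the session above, used for illustration:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
k, lam = 0.2686, 2288.0        # illustrative shape/scale, roughly the quark fit

u = rng.uniform(size=100_000)  # uniform on (0, 1)
e = -np.log(u)                 # exponential with unit rate
x = lam * e ** (1.0 / k)       # Weibull with shape k, scale lam

# the transformed sample should pass a KS test against weibull_min
result = kstest(x, 'weibull_min', args=(k, 0, lam))
```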
I am disappointed that the fit algorithm in scipy fails to produce the same shape... I wonder how they are doing the fit, whether in R or with some manual code of different precision. The use of chi square points to some ad-hoc code; after all, the point of the Weibull distribution is that it has an exact and very simple cdf. It is still mysterious why the lepton "waterfall", consisting of just one triplet, and the quark waterfall, consisting of four triplets, would have the same Weibull shape, but this might be clarified with further study.
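For reference, that cdf is F(x) = 1 - exp(-(x/l)^k), so no binning or chi-square machinery is needed. A small sketch comparing the closed form against scipy, with the fit values copied from the session above:

```python
import numpy as np
from scipy.stats import weibull_min

k, lam = 0.26861598701150763, 2288.475995797873   # quark fit from the session
x = np.array([2.3, 4.8, 95, 1275, 4180, 173210])

# closed-form Weibull cdf: F(x) = 1 - exp(-(x/lam)**k)
cdf_closed = 1.0 - np.exp(-(x / lam) ** k)
cdf_scipy = weibull_min.cdf(x, k, loc=0, scale=lam)
```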
That was my suspicion too, as I can at least get the same k if I do the fit with quarks... but then it is very puzzling that they claim chi^2 = 1 for leptons in 3.6. Again, I have no idea how they calculate the chi coefficient. I now suspect that they simply decided a priori that the shape should be the same. In the introduction to part 3, they say "Once dynamics introduces a new scale... it will fix l, while k is unchanged"; and in 3.2 they say colored and colorless particles fit this paradigm. So I think they just did some kind of joint fit, deliberately assuming (or aiming for) a common k value.
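A joint fit with a shared shape parameter is easy to sketch: minimize the summed negative log-likelihood over (k, l_quarks, l_leptons). This is only my guess at what such a fit might look like, not their actual procedure:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

quarks = np.array([2.3, 4.8, 95, 1275, 4180, 173210])
leptons = np.array([0.511, 106, 1777])

def joint_nll(params):
    # shared shape k, separate scales for quarks and leptons, loc fixed at 0
    k, lq, ll = params
    if k <= 0 or lq <= 0 or ll <= 0:
        return np.inf
    return (weibull_min.nnlf((k, 0, lq), quarks)
            + weibull_min.nnlf((k, 0, ll), leptons))

res = minimize(joint_nll, x0=[0.3, 2000.0, 200.0], method="Nelder-Mead")
k, lq, ll = res.x   # common shape, quark scale, lepton scale
```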
>>> import scipy.stats as s
>>> data=[0.511,106,1777]
>>> fit=(0.37366611506161873, 0, 229.48782534013557)
>>> from skgof import ks_test, cvm_test, ad_test
>>> w=s.weibull_min(*fit)
>>> ad_test(data,w)
GofResult(statistic=0.25987976933243573, pvalue=0.9716940635456661)
>>> fit=(0.26861598701150763, 0, 163.62855309410182)
>>> w=s.weibull_min(*fit)
>>> ad_test(data,w)
GofResult(statistic=0.22716618686611634, pvalue=0.9893423546344761)
FWIW, my hypothesis is that Koide-type relationships and the mass hierarchy in general arise because (1) the CKM matrix is logically prior to the mass matrix, and (2) the mass matrix represents a dynamic balancing of the mass of each particle of one type against each of the particles it could transition to via the W boson, adjusted for transition probabilities, in a simultaneous equation that covers and balances all transitions at once.

A paper by Goldman and Stephenson today promotes the idea that the standard model mass matrices can be obtained by "democratic" yukawa couplings that all have the same value, plus small perturbations.
The reason is as follows. Suppose we have a 3x3 matrix in which all matrix entries are the same (e.g. they could all be equal to 1). You can diagonalize this matrix, by multiplying by a "tribimaximal" matrix. The resulting matrix will be diag(m,0,0) for some m. But for quarks and charged leptons, we have that the third generation is much more massive than the first two. So in all cases, the mass matrix can be approximated by a matrix of the form diag(m,0,0).
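This is easy to verify numerically. Below, U is the orthogonal matrix whose columns are the (tribimaximal-style) eigenvectors of the democratic matrix; conjugating by it diagonalizes the all-ones matrix to diag(3,0,0), the 3 being the m for unit entries:

```python
import numpy as np

D = np.ones((3, 3))   # "democratic" matrix: all entries equal (here 1)

# columns: normalized eigenvectors (1,1,1)/sqrt(3), (1,-1,0)/sqrt(2), (1,1,-2)/sqrt(6)
U = np.column_stack([
    np.ones(3) / np.sqrt(3),
    np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
    np.array([1.0, 1.0, -2.0]) / np.sqrt(6),
])

M = U.T @ D @ U       # should come out as diag(3, 0, 0)
```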
Goldman and Stephenson perform an inverse tribimaximal transformation on the quark mass matrices in order to show just how close to democratic they are (eqns 6 and 7), and they show that, for a particular parametrization, the deviations from democracy are small (equation 11)... the largest of these perturbations is still just 0.02, so if a model can be found, it can be analyzed perturbatively. They proposed in a previous paper that these perturbations might arise from interactions with dark-matter sterile neutrinos, but they don't provide a model. The other potentially significant thing they observe is that some of the perturbation parameters need to be complex, so they propose that this is where CP violation comes from (section IV B).
They call their idea Higgs Universality, since the idea is that to a first approximation, the coupling of all fermions to the Higgs is the same.
They don't present a model. However, I will point out that recent work by Koide and Nishiura (mentioned, e.g., at #141 in this thread) to some extent is such a model. Koide and Nishiura have a universal ansatz for the mass matrices, which involves contributions from the democratic matrix, the unit matrix, and a matrix diag(√e,√μ,√τ). Ironically, however, for the charged leptons, the contribution from the democratic matrix is zero. This is ironic, not only because Goldman and Stephenson assert (calculations promised for a future paper) that the charged lepton masses can also be obtained by a small perturbation of a democratic matrix; but Koide himself obtained them that way, in earlier work!
If I look at the history of Koide's attempts to explain his own formula, I see three kinds of model. First, the preon model where he first obtained it. Second, the democratic model. Third, the perturbed democratic model with Nishiura. It is my understanding that @arivero's sbootstrap was partly inspired by the preon model, perhaps because some of the preons can be paired up in a fashion reminiscent of quark-diquark supersymmetry. (This should be compared with Risto Raitio's approach to supersymmetric preons.) It would be intriguing if one could close the circle of Koide's models, and obtain the "perturbed democratic model" by having democratically interacting preons mix with their own composites - the latter providing the "√e,√μ,√τ" perturbation.
Speaking of supersymmetry, the study of the supermathematics of Grassmann, Berezin, etc, has given me a new perspective on where the problematic phase of 2/9, discovered by @CarlB, could come from (see e.g. #173 in this thread). Phases that are rational multiples of π are much more natural. I had previously noticed that the well-known expansion of π/4 as 1 - 1/3 + 1/5 - ... contains a 2/3 in its first two terms, so if the analogous expansion for π/12 were somehow truncated there, one could obtain 2/9. The only problem was that I couldn't think of a good reason for such a truncation. One just had to construct a model with a π/12 phase and hope, perhaps, that it approximated Carl's ansatz well enough.
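Spelled out: the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ..., divided through by 3, gives pi/12 = 1/3 - 1/9 + 1/15 - ..., and its first two terms sum to exactly 2/9 (about 0.222, versus pi/12 ≈ 0.262):

```python
import math

# pi/12 = (1/3) * (1 - 1/3 + 1/5 - ...) = 1/3 - 1/9 + 1/15 - ...
terms = [(-1) ** n / (3 * (2 * n + 1)) for n in range(2)]
truncated = sum(terms)   # 1/3 - 1/9 = 2/9

# the full (slowly converging) series does approach pi/12
full = sum((-1) ** n / (3 * (2 * n + 1)) for n in range(10**5))
```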
However - that expansion can be obtained as a Taylor series in x, for x=1. Meanwhile, for a grassmann number θ, θ^2 (and all higher powers) equals zero, because of anticommutativity: ab=-ba, so θ.θ = -θ.θ = 0. So, what if you took a Taylor series for x=1, and superanalytically continued it to x=θ...? All powers of x equal to x^2 or higher, will drop out. Unfortunately, 1/3 or 1/9 doesn't naturally show up as the coefficient of x, but rather as the coefficient of x^3, and I haven't thought of a sensible way to associate it with x^1.
Here is a nifty new little paper:
Phenomenological formula for CKM matrix and physical interpretation
Kohzo Nishida
(Submitted on 3 Aug 2017)
We propose a phenomenological formula relating the Cabibbo-Kobayashi-Maskawa matrix V_CKM and the quark masses, in the form (√m_d, √m_s, √m_b) ∝ (√m_u, √m_c, √m_t) V_CKM. The results of the proposed formula are in agreement with the experimental data. Under the constraint of the formula, we show that the invariant amplitude of the charged current weak interactions is maximized.
Comments: 6 pages, no figures
Subjects: High Energy Physics - Phenomenology (hep-ph)
Cite as: arXiv:1708.01110 [hep-ph]
(or arXiv:1708.01110v1 [hep-ph] for this version)
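A crude magnitude-only sanity check of the formula, using approximate quark masses in MeV and approximate |V_CKM| entries (round illustrative values, not the paper's inputs, and phases are ignored): the direction of (√m_u, √m_c, √m_t)·|V_CKM| should be close to that of (√m_d, √m_s, √m_b).

```python
import numpy as np

# approximate quark masses in MeV (round central values, illustrative only)
up = np.sqrt(np.array([2.2, 1270.0, 172500.0]))   # sqrt(m_u), sqrt(m_c), sqrt(m_t)
down = np.sqrt(np.array([4.7, 95.0, 4180.0]))     # sqrt(m_d), sqrt(m_s), sqrt(m_b)

# approximate magnitudes of the CKM matrix elements
V = np.array([
    [0.974, 0.225, 0.004],
    [0.221, 0.987, 0.041],
    [0.009, 0.040, 0.999],
])

rhs = up @ V   # row vector (sqrt(m_u), sqrt(m_c), sqrt(m_t)) times V_CKM
cos = rhs @ down / (np.linalg.norm(rhs) * np.linalg.norm(down))
```

With these rough inputs the two vectors come out nearly parallel, consistent with the claimed proportionality, though the third-generation entries dominate the comparison.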
Has anyone checked this with the square root neutrino masses, one of which is negative? If not, I'm inclined to do it myself.

I just realized a fairly large problem with how I was thinking of this. For the 2x2 case, his formula provides two complex equations (i.e. real equations which imply that the imaginary part is zero), which is 4 real restrictions. That happens to match the number of real degrees of freedom in a 2x2 unitary matrix, so it determines the answer.
Yes. It's a bit like MOND. It may be a phenomenological relationship not grounded in theory, but any theory has to reproduce it, because it compactly describes the evidence. One might therefore take the attitude that the counterintuitive nature of Koide's formula - counterintuitive with respect to a field theorist's common sense - is a further clue about what needs to be investigated. One should directly investigate what would have to be true for a theory to exhibit just this kind of unlikely or impossible-seeming infrared relationship.
Nonetheless, the LHC results appear to be telling us that the world works in a different way.

What an extraordinarily delicate way to express that sentiment.