# The ever-increasing proton lifetime

1. Jan 20, 2018

### Staff: Mentor

A quick search for "proton decay baryogenesis" finds various results.
https://arxiv.org/abs/hep-ph/0005095
https://arxiv.org/abs/1207.5771
http://www.physics.mcgill.ca/~guymoore/research/baryogenesis.html
It even has its own section at Wikipedia: https://en.wikipedia.org/wiki/Proton_decay#Baryogenesis

2. Jan 23, 2018

### swampwiz

Occam's Razor.

3. Jan 23, 2018

### Staff Emeritus

Occam's razor is not a very good tool for judging experimental results. And experimentally, we know that decays which are possible occur at some rate.

4. Jan 23, 2018

### Urs Schreiber

I understand the general idea that baryogenesis, by definition, involves baryon non-conservation. What I was hoping to see was a more concrete analysis, with perhaps numerical exclusion bounds, of what experimental limits on proton decay imply for models of baryogenesis, similar to the detailed discussion one sees for GUT models, where experimental results have effectively ruled out some of those models.

But I gather that models of baryogenesis just aren't detailed enough in themselves to admit any of this? I gather there is just Sakharov's conditions justifying the general possibility of baryogenesis, without any quantitative ideas of the process. Is that right?

I suppose quantitative understanding of baryogenesis via the chiral anomaly $\operatorname{div} J_{\mathrm{quark}} \propto \mathrm{tr}(F \wedge F)$ all depends on having some idea of $\mathrm{tr}(F \wedge F)$ in the early universe, or at least of its local fluctuations? Maybe there are some (probably very indirect) experimental bounds on what that could have been?

Anyway, it's these concrete implications of experimental bounds on proton decay for models of baryogenesis that I never saw discussed, also not in the references that you googled, or else I missed them. Probably they just don't exist. That's fine, I just wanted to know.

5. Jan 23, 2018

### snorkack

Why would Occam's razor favour baryogenesis?
What is wrong with the baryon number being an initial parameter of the universe? Why are models with primordial baryon number outside the mainstream, and why would Occam's razor favour models with zero primordial baryon number, PLUS a baryogenesis imbalance that somehow produces just the observed number of baryons, PLUS a predicted proton lifetime that never seems to be observed at the expected rate?

6. Jan 23, 2018

### ohwilleke

I'm with you on this. A positive, finite baryon number of the universe as one of its initial conditions is no more problematic than a positive, finite mass-energy of the universe at its inception, and I have yet to see anyone argue for a Big Bang mass-energy of the universe that is either zero or infinite.

A positive baryon number of the universe as an initial condition is the only possibility that is consistent with the SM.

A vague desire for an initial condition of the universe to be different because it would look pretty is not a very compelling reason to go head to head with the overwhelming evidence that there are no observed cases of baryon number violation by any means, including proton decay, or of lepton number violation by any means, up to very, very stringent limits. It also goes up against the theoretical reality that the only baryon number violating process in the SM, the sphaleron, cannot account for the baryon number asymmetry of the universe.

Also, keep in mind that the energy levels to which the SM has been experimentally tested are higher than those present in any natural phenomena in the universe for something like the last 13.5 billion years. We haven't experimentally tested the SM at the energy scales of the Big Bang and the brief period immediately thereafter (and never will be able to), but any baryon number violating process has to be confined to a very short window of time. Any process that takes even hundreds of millions of years to produce the observed matter-antimatter asymmetry of the universe is too slow to be consistent with the experimental confirmation of the SM at the energy scales we have tested. And, now that the Higgs boson mass has been determined, we know that the SM is theoretically consistent up to the GUT scale.

7. Jan 24, 2018

### Urs Schreiber

Maybe it doesn't matter for your argument, but ultra-high energy cosmic ray particles way beyond LHC energies are rare, but routinely seen by the Pierre Auger observatory.

8. Jan 24, 2018

### Urs Schreiber

I am wondering if this is true. The chiral anomaly in the standard model says that baryon current conservation is violated by instantons, via $\partial_\mu J^\mu_B \propto \mathrm{tr}(F \wedge F)$. While it is true that this violation is not seen in perturbation theory, it is present non-perturbatively in the standard model.

Thus it seems that just as with GUTs, already in the plain SM the question is not "why would it violate baryon number conservation at a measurable rate" but "why would it not?".

I have been trying to discuss this with "mfb" above. For some reason much less seems to be known about this for the plain SM baryon number violation, than for hypothetical GUT extensions.

9. Jan 24, 2018

### ohwilleke

Fair point. But, it doesn't really affect the argument as these show no sign of SM violations.

10. Jan 24, 2018

### ohwilleke

I have seen several papers that calculated this and found the numbers to be insufficient. I'll try to find a cite to one or two of them.

11. Jan 24, 2018

### Urs Schreiber

Right. It's sometimes used to argue that the SM vacuum, should it be metastable, is stable at least to these much higher energies than accelerators can probe. Which may be comforting to know. :-)

12. Jan 24, 2018

### Urs Schreiber

I'd be really interested. Thanks.

13. Jan 24, 2018

### ohwilleke

Here are some papers that establish the general proposition that "a positive baryon number of the universe as an initial condition is the only possibility that is consistent with the SM", although not in as many words. They rely on prior scholarship for it as they explore BSM scenarios, and some of them reference older papers containing the actual analysis and calculations in the Standard Model case.

Here's a quote from an October 17, 2016 preprint:

A July 11, 2011 preprint, which has since been published in Physical Review D, opens with the same conclusion:

A December 14, 2017 preprint simply states at the outset:

And here's a quote from a November 28, 2017 preprint:

Some sources may be found in the citations in the introduction to this September 26, 2017 preprint (updated December 30, 2017), which begins:

Last edited: Jan 24, 2018
14. Jan 24, 2018

### ohwilleke

For what it is worth, near-complete matter-antimatter asymmetry follows almost trivially in any case where particles are well mixed in a compact space during and immediately after the Big Bang and the initial baryon and lepton numbers of the universe are not zero. And almost every matter-antimatter balanced decay process from a Big Bang of the type described by the SM is going to produce this kind of well-mixed set of particles in a compact space.

Heuristically, matter particles can be taken without loss of generality to be the more numerous type and antimatter particles the less numerous type. Matter and antimatter particles annihilate each other on a one-to-one basis, leaving only matter particles plus a tiny share of particles that were not fully mixed and stayed unmixed for billions of years. Ergo, in every scenario except the one in which B and L are exactly zero at the beginning of the Big Bang, you expect an extreme baryon asymmetry, which is what we observe. The more homogeneous the expanding universe is, the longer this mutual annihilation has to run until only matter or only antimatter is left, since you don't have large pockets that would be annihilation-free.
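The counting in this heuristic is simple enough to sketch in a few lines. The particle counts below are purely illustrative, not measured values:

```python
# Toy model of the annihilation heuristic: whatever small excess of
# matter over antimatter exists initially survives the one-to-one
# annihilation. These counts are made up for illustration only.
matter = 1_000_000_030      # hypothetical initial matter particles
antimatter = 1_000_000_000  # hypothetical initial antimatter particles

annihilated = min(matter, antimatter)  # pairs destroyed one-to-one
leftover_matter = matter - annihilated
leftover_antimatter = antimatter - annihilated

print(leftover_matter, leftover_antimatter)  # 30 0
```

In a well-mixed universe the annihilation runs essentially to completion, which is why any non-zero initial B leads to near-total dominance of one type.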

A rigorous derivation of this result would be a considerably more laborious enterprise and would call for more formal assumptions, but it would follow the same basic logic. You could have a small but non-zero value of B or L or both and still end up without a baryon asymmetry, but the further you get from zero, the less likely the system is to end up that way. And if you simply track the SM back to the beginning without new physics, you get a minimum absolute value of B and L that is very far indeed from zero, by many orders of magnitude, so the marginal cases don't matter.

If you engage in the rather deplorable Bayesian notion that the many-worlds folks engage in, that there is a distribution of possible B and L values at the beginning of the universe from which the initial conditions of the universe are chosen on a probabilistic basis, then there is only one infinitesimal point at which B=0 and L=0, while all other points lead to a baryon asymmetry, so BAU is almost infinitely likely relative to any other initial condition.

Last edited: Jan 24, 2018
15. Jan 24, 2018

### Staff: Mentor

That is not how probability works.

If you roll a die, are numbers different from 1 "almost infinitely likely" relative to the single number 1?

16. Jan 25, 2018

### ohwilleke

You will note that I prefaced this statement with "if you engage in the rather deplorable Bayesian notion that the many-worlds folks engage in, that there is a distribution of possible B and L values at the beginning of the universe from which the initial conditions of the universe are chosen on a probabilistic basis," specifically disavowing this technique. I agree it is dubious, because it formulates a prior distribution without any basis for doing so, when the only reason to use Bayesian statistics in the first place is that you have some basis for preferring one prior distribution over another. But because this kind of reasoning is frequently used by practicing cosmologists, I nonetheless raised it.

If your die has an infinite number of sides, then yes, it is true. The only known in-principle limit on the possible values of B or L is that they lie on the line of integers, which is, for B and for L independently, an infinite set that can take any positive or negative integer value.
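The "infinite-sided die" point can be made concrete with a sketch: under a hypothetical uniform prior on the integers in [-N, N], the probability of drawing exactly zero is 1/(2N+1), which vanishes as N grows. (The choice of a uniform prior here is itself exactly the kind of unjustified assumption being disavowed above.)

```python
# Probability of drawing B = 0 under an assumed uniform prior on the
# integers in [-N, N]; it shrinks toward zero as the range widens.
for n in (10, 10**3, 10**6):
    p_zero = 1 / (2 * n + 1)
    print(f"N = {n:>7}: P(B=0) = {p_zero:.2e}")
```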

Looking at the actual values, which are really, really large, is instructive of what kind of probability distribution might make sense, if you are inclined to give the admittedly dubious Bayesian many-worlds approach to the fundamental constants of the universe, including the initial values of B and L, any credit. The size of these values suggests very extreme tails.

Astronomers have been able to estimate the baryon number of the universe to one or two significant digits, and they have a good estimate of the number of Standard Model leptons in the universe (to one significant digit), but they have a much less accurate estimate of the lepton number of the universe, since they don't know the relative numbers of neutrinos and antineutrinos.

As one science education website explains:

Thus, the baryon number of the universe is roughly 1.2×10^80, to about one significant digit.

To determine B-L for the universe, the number of protons and the number of charged leptons cancel, leaving the number of neutrons (about 5×10^78, given the ratio of protons to neutrons in the universe) minus the number of neutrinos plus the number of antineutrinos. The combined number of neutrinos and antineutrinos is about 1.2×10^89, but we don't have nearly as good an estimate of their relative numbers.

Neutrinos outnumber neutrons in the universe by a factor of about 2.4×10^10 (i.e. about 24 billion to 1), so the conserved quantity B-L in the universe (considering only Standard Model fermions) is almost exactly equal to the number of antineutrinos minus the number of neutrinos in the universe.

As of March 2013, the best available observational evidence suggests that antineutrinos overwhelmingly outnumber neutrinos in the universe. As the abstract of a paper published in the March 15, 2013 edition of the New Journal of Physics by Dominik J Schwarz and Maik Stuke, entitled "Does the CMB prefer a leptonic Universe?" (open access preprint here), explains:

Specifically, they found that the neutrino-antineutrino asymmetry supported by each of the several kinds of CMB data was in the range from 38 extra antineutrinos per 100 neutrinos to 2 extra neutrinos per 100 neutrinos, a scenario that prefers an excess of antineutrinos but is not inconsistent with zero at the one-standard-deviation level. The mean value of 118 antineutrinos per 100 neutrinos would mean that B-L is hopelessly out of whack relative to zero.
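Putting the rough figures above together (all order-of-magnitude estimates from this post, with the 118:100 mean asymmetry taken as an assumed input rather than an established value):

```python
# Back-of-the-envelope B - L arithmetic. Protons and charged leptons
# cancel, so roughly: B - L ~ N_neutron - N_neutrino + N_antineutrino.
n_neutrons = 5e78    # neutrons in the observable universe (rough)
n_nu_total = 1.2e89  # neutrinos + antineutrinos combined (rough)

# Assume the quoted mean asymmetry: 118 antineutrinos per 100 neutrinos.
nu = n_nu_total * 100 / 218
nubar = n_nu_total * 118 / 218

b_minus_l = n_neutrons - nu + nubar
print(f"B - L ~ {b_minus_l:.1e}")  # ~ 9.9e+87
```

Even a few-percent neutrino-antineutrino asymmetry swamps the neutron contribution by roughly nine orders of magnitude, which is the sense in which B-L would be hopelessly far from zero.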

A 2011 paper, considering newly measured PMNS matrix mixing angles (especially theta13) and WMAP data, had predicted a quite modest relative neutrino-antineutrino asymmetry (if any), but it doesn't take much of an asymmetry at all to make B-L positive and for the neutrino contribution to this conserved quantity to swamp the baryon number contribution.

It is also worth recalling that while the Standard Model does have one process that violates B and L conservation, even that process conserves B-L. So, unless the ratio of antineutrinos to neutrinos in the universe is just right, it is impossible under any Standard Model process to have initial conditions of B=0 and L=0. To the extent that the ratio of antineutrinos to neutrinos is not exactly one to one, right down to 89 orders of magnitude, the beauty of B=0 and L=0 that you get in a pure-energy initial condition is unattainable anyway, even considering sphaleron processes at the very dawn of the Big Bang.

In the Standard Model, baryon number (the number of quarks minus the number of antiquarks, divided by three) is conserved, as is lepton number (the number of charged leptons and neutrinos minus the number of charged antileptons and antineutrinos), except in sphaleron processes. Per Wikipedia:

The trouble is that if you start with B=0 and L=0, as you would expect in a Big Bang comprised initially of pure energy, it is hard to see how you end up with the observed values of B and L in the universe, which are so far from zero.

The mainstream view among physicists, although some theorists dissent from this analysis (there was a link for this to a paper from Helsinki University, but it went bad and I can't figure out how to fix it), is that Standard Model sphaleron processes in the twenty minutes during which Big Bang Nucleosynthesis is believed to have taken place, or the preceding ten seconds between the Big Bang and the onset of Big Bang Nucleosynthesis, can't account for the massive asymmetry between baryonic matter and baryonic antimatter observed in the universe without beyond the Standard Model physics.

For example, even if you have BSM processes that allow neutrinoless double beta decay and proton decay, that isn't good enough. Those processes have to produce the huge B and L numbers seen today for the entire universe fast, in just half an hour or so.

In any case, the line between Big Bang Nucleosynthesis, which is worked out using Standard Model physics with some very basic assumptions and matches the observational evidence except for a modest Lithium-7 problem, and the ten seconds before it, is really where the science of cosmology gives way to speculation. Physics is incredibly powerful, but our understanding of the first ten seconds out of 13-and-change billion years still has mostly questions and few answers. Still, the credibility the SM earns from BBN means that by the time you reach BBN, any BSM physics must be pretty modest to non-existent. You need to fit almost all of your BSM physics that violates baryon number, lepton number, and B-L conservation into the first ten seconds, or however long the pre-BBN period actually was. That means you need a phase transition, and it has to be an incredibly fast and efficient process working in one direction on the B side and in one direction on the L side. And unless you get a perfect balance of neutrinos and antineutrinos in the universe today, you still can't have B=0 and L=0; B=0 with L not equal to 0 is pretty pointless in terms of beauty.

Likewise, after that point, sphaleron processes should be so rare that they can't explain the baryon asymmetry of the universe (BAU), which is one of the great unsolved problems in physics, unless you accept the totally arbitrary assumption that the initial conditions of the universe had equal amounts of matter and antimatter, which has no solid basis except that it seems pretty and unique.

Also, even if you have an alternative to sphaleron processes, most of the popular supersymmetric and other BSM models which have B and L violations, like the sphaleron process, conserve B-L at least, so if the neutrino-antineutrino balance isn't perfect it doesn't work anyway.

(By the way, even if your DM has the appropriate matter-versus-antimatter character to partially balance out a B-L issue with ordinary matter, you still have a problem if your DM is in the keV mass range or heavier: the number of neutrinos is so huge that any meaningful imbalance there can't be mitigated with DM, because there are orders of magnitude fewer DM particles than neutrinos. The only kind of dark matter numerous enough to address a B-L imbalance from an uneven neutrino-antineutrino split is axion-like dark matter, since only it has enough particles to rival the number of neutrinos in the universe. And if you use axion-like DM to address a B-L imbalance, you also need all of it created prior to BBN if the processes affecting ordinary matter conserve B-L.)

Another reference that could have been included earlier: a 2006 paper discussing the experimental evidence for baryon number non-conservation (citing S. Eidelman et al. (Particle Data Group), Phys. Lett. B592 (2004) 1) states: "No baryon number violating processes have yet been observed." Some of the processes that B-L conservation might make possible in some beyond the Standard Model theories, such as proton decay, are also not observed.

Last edited: Jan 25, 2018