# The ever-increasing proton lifetime

#### mfb

Mentor
Sure, I know, that's why it seems curious that one never sees discussion of proton decay issues in the context of baryogenesis. At least I never saw it discussed in that context. It seems you are saying that you heard people speak about it, but that there is nothing tangible in print that one could point to. (But I'll leave it at that now, unless you have more to say.)
A quick search for "proton decay baryogenesis" finds various results.
https://arxiv.org/abs/hep-ph/0005095
https://arxiv.org/abs/1207.5771
http://www.physics.mcgill.ca/~guymoore/research/baryogenesis.html
It even has its own section at Wikipedia: https://en.wikipedia.org/wiki/Proton_decay#Baryogenesis

#### swampwiz

Staff Emeritus
Occam's Razor.
Is not a very good tool to judge experimental results. And experimentally we know that decays that are possible occur at some rate.

#### Urs Schreiber

Gold Member
A quick search
I understand the general idea that baryogenesis, by definition, involves baryon non-conservation. What I was hoping to see was a more concrete analysis, maybe with numerical exclusion bounds, of what experimental bounds on proton decay imply for models of baryogenesis, similar to the detailed discussion one sees for GUT models, where experimental results have effectively ruled out some of these models.

But I gather that models of baryogenesis just aren't detailed enough in themselves to admit any of this? I gather there are just Sakharov's conditions justifying the general possibility of baryogenesis, without any quantitative idea of the process. Is that right?

I suppose quantitative understanding of baryogenesis via the chiral anomaly $\partial_\mu J^\mu_{\mathrm{quark}} \propto \mathrm{tr}(F \wedge F)$ all depends on having some idea of $\mathrm{tr}(F \wedge F)$ in the early universe, or maybe at least of its local fluctuations or something? Maybe there are some indirect (probably very indirect?) experimental bounds on what that could have been?

Anyway, it's these concrete implications of experimental bounds on proton decay for models of baryogenesis that I never saw discussed, also not in the references that you googled, or else I missed them. Probably they just don't exist. That's fine, I just wanted to know.

#### snorkack

Occam's Razor.
Why would Occam's razor favour baryogenesis?
What is wrong with baryon number being an initial parameter of the universe? Why are models with primordial baryon number against the mainstream, and why does Occam's razor favour models with zero primordial baryon number PLUS a baryogenesis imbalance that somehow produces just the observed number of baryons, PLUS predicted proton decay that is never observed at the expected rate?

#### ohwilleke

Gold Member
Why would Occam's razor favour baryogenesis?
What is wrong with baryon number being an initial parameter of the universe? Why are models with primordial baryon number against the mainstream, and why does Occam's razor favour models with zero primordial baryon number PLUS a baryogenesis imbalance that somehow produces just the observed number of baryons, PLUS predicted proton decay that is never observed at the expected rate?
I'm with you on this. A positive finite baryon number of the universe as one of its initial conditions is no more problematic than a positive finite mass-energy of the universe at its inception, and I have yet to see anyone argue for a Big Bang mass-energy of the universe that is either zero or infinite.

A positive baryon number of the universe as an initial condition is the only possibility that is consistent with the SM.

A vague desire for an initial condition of the universe to be different because it would look prettier is not a very compelling reason to go head to head with the overwhelming evidence that no case of baryon number violation by any means, including proton decay, or of lepton number violation by any means, has ever been observed, up to very, very stringent limits. It also runs up against the theoretical reality that the only baryon number violating process in the SM, the sphaleron, cannot account for the baryon number asymmetry in the universe.

Also, keep in mind that the energy levels to which the SM has been experimentally tested are higher than those present in any natural phenomenon in the universe for something like the last 13.5 billion years. We haven't experimentally tested the SM at the energy scales of the Big Bang and the brief period immediately thereafter (and never will be able to), but any baryon number violating process has to be confined to a very short period of time. Any process that takes even hundreds of millions of years to produce the observed matter-antimatter asymmetry of the universe is too slow to be consistent with the experimental confirmation of the SM at the energy scales we have tested. And now that the Higgs boson mass has been determined, we know that the SM is theoretically consistent up to the GUT scale.

#### Urs Schreiber

Gold Member
the energy levels to which the SM has been experimentally tested are higher than those present in any natural phenomena
Maybe it doesn't matter for your argument, but ultra-high energy cosmic ray particles way beyond LHC energies are rare, but routinely seen by the Pierre Auger observatory.

#### Urs Schreiber

Gold Member
A positive baryon number of the universe as an initial condition is the only possibility that is consistent with the SM.
I am wondering if this is true. The chiral anomaly in the standard model says that baryon current conservation is violated by instantons, via $\partial_\mu J^\mu_{B} \propto tr(F \wedge F)$. While it is true that this violation is not seen in perturbation theory, it is present non-perturbatively in the standard model.
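For reference, the proportionality above can be spelled out. The standard electroweak form of the baryon-current anomaly in the SM is (a textbook result, with $N_f = 3$ the number of generations, stated here for orientation rather than taken from the references in this thread):

$$
\partial_\mu J^\mu_B \;=\; \partial_\mu J^\mu_L \;=\; \frac{N_f}{32\pi^2}\left( g^2\, W^a_{\mu\nu}\widetilde{W}^{a\,\mu\nu} \;-\; g'^2\, B_{\mu\nu}\widetilde{B}^{\mu\nu} \right),
$$

so $B-L$ is anomaly-free while $B+L$ is violated by instanton and sphaleron transitions, consistent with the statements quoted elsewhere in the thread.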

Thus it seems that just as with GUTs, already in the plain SM the question is not "why would it violate baryon number conservation at a measurable rate" but "why would it not?".

I have been trying to discuss this with "mfb" above. For some reason, much less seems to be known about this for plain SM baryon number violation than for hypothetical GUT extensions.

#### ohwilleke

Gold Member
Maybe it doesn't matter for your argument, but ultra-high energy cosmic ray particles way beyond LHC energies are rare, but routinely seen by the Pierre Auger observatory.
Fair point. But, it doesn't really affect the argument as these show no sign of SM violations.

#### ohwilleke

Gold Member
I am wondering if this is true. The chiral anomaly in the standard model says that baryon current conservation is violated by instantons, via $\partial_\mu J^\mu_{B} \propto tr(F \wedge F)$. While it is true that this violation is not seen in perturbation theory, it is present non-perturbatively in the standard model.
I have seen several papers that calculated this and found the numbers to be insufficient. I'll try to find a cite to one or two of them.

#### Urs Schreiber

Gold Member
doesn't really affect the argument as these show no sign of SM violations.
Right. It's sometimes used to argue that the SM vacuum, should it be metastable, is stable at least to these much higher energies than accelerators can probe. Which may be comforting to know. :-)

#### Urs Schreiber

Gold Member
I have seen several papers that calculated this and found the numbers to be insufficient. I'll try to find a cite to one or two of them.
I'd be really interested. Thanks.

#### ohwilleke

Gold Member
I'd be really interested. Thanks.
Here are some papers that establish the general proposition that "a positive baryon number of the universe as an initial condition is the only possibility that is consistent with the SM," although not in so many words; they rely on prior scholarship for it as they explore BSM scenarios, and some reference older papers containing the actual analysis and calculations in the Standard Model case.

Here's a quote from an October 17, 2016 preprint:

It is by now a standard statement, repeated at the beginning of (nearly) any talk on baryogenesis, that while the Standard Model (SM) includes nonzero effects for all three of Sakharov's ingredients – baryon number violation, CP violation and deviation from equilibrium – their product falls short of the observed baryonic asymmetry of the Universe, $n_B/n_\gamma \sim 6 \times 10^{-10}$, by many orders of magnitude.

As a result, the mainstream of baryogenesis studies focus mostly on “beyond the Standard Model” (BSM) scenarios, in which new sources of CP violation are introduced, e.g. in the extended Higgs or neutrino sector. Leptogenesis scenarios are based on superheavy neutrino decays, occurring at very high scales and satisfying both large CP and out-of-equilibrium requirements, with lepton asymmetry then transformed into baryon asymmetry at the electroweak scale.

However, as all BSM scenarios remain at this time purely hypothetical, without support from the LHC and other experiments so far, perhaps it is warranted to revisit the SM-based scenarios.

While most of this paper will be focused on the CP violation during baryon-number producing sphaleron transitions, let us here comment on the third necessary ingredient, a deviation from thermal equilibrium. Standard cosmology assumes that reheating and entropy production of the Universe take place at a scale much higher than the electroweak scale. In addition, the standard model with the Higgs mass at 125 GeV has an electroweak transition only of a smooth crossover type. If the assumption is correct, and there would be no new particles at the electroweak scale found, the transition would be very smooth, without significant out-of-equilibrium effects.
A July 11, 2011 pre-print, which has since been published in Physical Review D, opens with the same conclusion:

The question of how the observed baryonic asymmetry of the Universe was produced is among the most difficult open questions of physics and cosmology. The observed effect is usually expressed as the ratio of the baryon density to that of the photons, $n_B/n_\gamma \sim 10^{-10}$. Sakharov [1] formulated three famous necessary conditions: (i) baryon number violation and (ii) CP violation, with (iii) obligatory deviations from thermal equilibrium. Although all of them are present in the Standard Model (SM) and standard Big Bang cosmology, the baryon asymmetry which is produced by the known CKM matrix is completely insufficient to solve this puzzle.
A December 14, 2017 pre-print simply states at the outset:

Electroweak baryogenesis provides a minimal and compelling scenario for testing the idea that the matter / antimatter asymmetry of the Universe arose at the electroweak phase transition [1–6]. The Standard Model lacks the necessary ingredients for electroweak baryogenesis, and in general new physics is required if this scenario is to be successful. . . .

[1] V. A. Kuzmin, V. A. Rubakov, and M. E. Shaposhnikov, Phys. Lett. B155, 36 (1985).
[2] M. Shaposhnikov, JETP Lett. 44, 465 (1986).
[3] M. E. Shaposhnikov, Nucl. Phys. B299, 797 (1988).
[4] M. E. Shaposhnikov, Nucl. Phys. B287, 757 (1987).
[5] A. G. Cohen, D. B. Kaplan, and A. E. Nelson, Nucl.Phys. B349, 727 (1991).
[6] A. G. Cohen, D. B. Kaplan, and A. E. Nelson, Phys.Lett. B245, 561 (1990).
And here's a quote from a November 28, 2017 pre-print:

What these models have in common is that in order to generate light Majorana masses for the active neutrinos, L needs to be broken. This symmetry, along with baryon number B symmetry, is accidentally conserved in the SM at the perturbative level. Weak nonperturbative instanton and sphaleron effects through the chiral Adler-Bell-Jackiw anomaly do in fact violate baryon and lepton number but only in the combination (B+L). The ‘orthogonal’ combination (B −L) remains conserved and thus lepton number violation (LNV), or more generally (B−L) violation, along with the generation of Majorana neutrino masses requires the presence of New Physics beyond the SM (BSM).

In this context, a clear hint for physics beyond the SM is the observation of a baryon asymmetry of our Universe, quantified in terms of the baryon-to-photon number density $\eta_B^{\mathrm{obs}} = (6.20 \pm 0.15) \times 10^{-10}$. In order to generate a baryon asymmetry the three Sakharov conditions have to be fulfilled, namely (1) B violation, (2) C and CP violation and (3) out-of-equilibrium dynamics.

Different approaches exist which exhibit these conditions, one popular scenario is baryogenesis via leptogenesis (LG). In the standard “vanilla” scenario, a right handed heavy neutrino decays out of equilibrium via a lepton number violating decay and introduces a new source of CP violation. As long as this happens before the EW phase transition, the lepton asymmetry is translated into a baryon asymmetry.

While the violation of lepton number is a crucial ingredient e.g. in the leptogenesis scenario, in order to satisfy the third Sakharov condition, the LNV interactions must not be too efficient. Otherwise they remove the lepton number asymmetry and, due to the presence of sphaleron transitions in the SM, also the baryon number asymmetry before it is frozen in at the EW breaking scale.

The search for LNV processes, with neutrinoless double beta decay (0νββ) as the most prominent example, therefore provides a potential pathway to probe or rather falsify certain baryogenesis scenarios, if the lepton number washout in the early universe can be correlated with the LNV process rate. In this paper, we take a model-independent approach and study SM invariant operators of mass dimension 5, 7, 9 and 11 that violate lepton number by two units, ∆L = 2. We correlate their contribution to 0νββ, either at tree level or induced by radiative effects, with the lepton number washout in the early universe. Assuming the observation of 0νββ decay, where we take the expected sensitivity of $T_{1/2}^{0\nu\beta\beta} \approx 10^{27}$ y of future 0νββ experiments, we determine the temperature range where the corresponding lepton number washout is effective. After the discovery of the sphaleron transitions, the constraint on LNV operators from the requirement to protect the observed baryon asymmetry was soon realized, with the Weinberg operator as the most prominent example. More generic nonrenormalizable operators were discussed in the literature, while the argument can also be extended to baryon number violating ∆B = 2 operators inducing neutron-antineutron oscillations. More recently, we have shown that searches for resonant LNV processes at the LHC can be used to infer strong lepton number washout, and we have demonstrated the principle of correlating the washout rate with non-standard 0νββ contributions. . . .

Our results show that the scale Λ of many of the ∆L = 2 operators, and the corresponding temperature range of strong washout, are O(TeV), assuming an observation of 0νββ in future or planned experiments with a sensitivity of $T_{1/2} \approx 10^{27}$ y. In this case, there is underlying new physics at work, potentially within the reach of the LHC and future colliders. Together with the B + L-violating sphalerons, the presence of LNV can erase a pre-existing baryon and lepton asymmetry generated at high temperatures. As a result, the observation of 0νββ decay will strongly constrain models of high-scale (≳ TeV) scenarios of baryogenesis and leptogenesis.
Some sources may be found in the citations of the introduction to this September 26, 2017 pre-print (updated December 30, 2017), which begins:

The origin of the BAU has long been a question of great interest in explaining why there is more baryon than anti-baryon in nature. Big bang nucleosynthesis (BBN) [1] and cosmic microwave background [2] measurements give the BAU as $\eta \equiv n_B/s \simeq 10^{-10}$, where $n_B$ is the baryon number density and $s$ is the entropy density. In order to address this issue, many different models and mechanisms have been proposed [3–7]. The mechanisms discussed in the literature satisfy the three Sakharov conditions [3], namely, (i) baryon number (B) violation, (ii) charge (C) and charge-parity (CP) violations, and (iii) a departure from thermal equilibrium. For reviews of different types of models and mechanisms, see, for example, [8–10]. Recently, a variety of methods for the calculation of the BAU have also been developed [11–13]. . . .

[1] J. P. Kneller and G. Steigman, New Journal of Physics 6, 117 (2004).
[2] P. A. R. Ade and Others, Astron. Astrophys. 594, A13 (2016).
[3] A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. 5, 32 (1967).
[4] M. Yoshimura, Phys. Rev. Lett. 41, 281 (1978).
[5] M. Fukugita and T. Yanagida, Physics Letters B 174, 45 (1986).
[6] I. Affleck and M. Dine, Nucl. Phys B 249, 361 (1985).
[7] A. G. Cohen and D. B. Kaplan, Nuclear Physics B 308, 913 (1988).
[8] S. Davidson, E. Nardi, and Y. Nir, Physics Reports 466, 105 (2008).
[9] M. Trodden, Rev. Mod. Phys. 71, 1463 (1999).
[10] K. M. Zurek, Physics Reports 537, 91 (2013).
[11] A. Kobakhidze and A. Manning, Phys. Rev. D 91, 123529 (2015).
[12] D. Zhuridov, Phys. Rev. D 94, 035007 (2016).
[13] N. Blinov and A. Hook, Phys. Rev. D 95, 095014 (2017).


#### ohwilleke

Gold Member
For what it is worth, near-complete matter-antimatter asymmetry follows almost trivially in any case where particles are well mixed in a compact space during and immediately after the Big Bang and the initial baryon and lepton numbers of the universe are not zero. And almost every matter-antimatter balanced decay process of the type observed in the SM, starting from a Big Bang, is going to produce this kind of well-mixed set of particles in a compact space.

Heuristically, matter particles can be taken, without loss of generality, to be the more numerous type and antimatter particles the less numerous type. Matter and antimatter particles annihilate each other on a one-to-one basis, leaving only matter particles plus a tiny share of particles that were never fully mixed and stayed unmixed for billions of years. Ergo, in every scenario except the one in which B and L are exactly zero at the beginning of the Big Bang, you expect the extreme BAU that we observe. The more homogeneous the expanding universe is, the longer the time available for this mutual annihilation to run until only matter or only antimatter is left, since there are no large pockets that would be annihilation-free.
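The heuristic above can be sketched as a toy calculation (illustrative numbers only; $\eta$ is the observed baryon-to-photon ratio quoted elsewhere in the thread, and the initial pair count is arbitrary):

```python
# Toy illustration of pairwise annihilation in a well-mixed volume:
# whatever the dynamics, only the initial matter excess survives.
eta = 6e-10                      # observed baryon-to-photon ratio n_B/n_gamma
pairs = 10_000_000_000           # arbitrary number of initial particle-antiparticle pairs
# Each annihilation yields ~2 photons, so n_B/n_gamma ~ excess / (2 * pairs).
excess = round(eta * 2 * pairs)  # tiny matter surplus implied by eta

matter, antimatter = pairs + excess, pairs
survivors = matter - min(matter, antimatter)  # one-to-one annihilation
print(survivors)  # 12: only the initial excess is left as ordinary matter
```

The point of the sketch is only that the surviving matter is fixed entirely by the initial excess, not by the details of the annihilation dynamics.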

A rigorous derivation of this result would be a considerably more laborious enterprise and would call for more formal assumptions, but it would follow the same basic logic. You could have a small but non-zero value of B or L or both and still end up without BAU, but the further you get from zero, the less likely the system is to end up that way. And if you simply track the SM back to the beginning without new physics, you get minimum absolute values of B and L that are very far indeed from zero, by many orders of magnitude, so the marginal cases don't matter.

If you engage in the rather deplorable Bayesian notion that the many-worlds folks engage in, that there is a distribution of possible B and L values at the beginning of the universe from which the initial conditions of the universe are chosen on a probabilistic basis, then there is only one infinitesimal point with B=0 and L=0, while all other points lead to BAU, so BAU is almost infinitely likely relative to any other initial condition.


#### mfb

Mentor
there is only one infinitesimal point with B=0 and L=0, while all other points lead to BAU, so BAU is almost infinitely likely relative to any other initial condition.
That is not how probability works.

If you roll a die, are numbers different from 1 "almost infinitely likely" relative to the single number 1?

#### ohwilleke

Gold Member
That is not how probability works.
You will note that I prefaced this statement with "If you engage in the rather deplorable Bayesian notion that the many-worlds folks engage in, that there is a distribution of possible B and L values at the beginning of the universe from which the initial conditions of the universe are chosen on a probabilistic basis," specifically disavowing this technique. I agree it is dubious, because it formulates a prior distribution without any basis for doing so, when the only reason to use Bayesian statistics in the first place is that you have some basis for preferring one prior distribution over another. But because this kind of reasoning is frequently used by practicing cosmologists, I nonetheless raised it.

If you roll a die, are numbers different from 1 "almost infinitely likely" relative to the single number 1?
If your die has an infinite number of sides, then yes, it is true. The only known in-principle limit on the possible values of B or L is that they lie on the line of integers, which is, for B and for L independently, an infinite set spanning every positive and negative integer value.

Looking at the actual values, which are really, really large, is instructive as to what kind of probability distribution might make sense, if you are inclined to give the admittedly dubious Bayesian many-worlds approach to the fundamental constants of the universe, including the initial values of B and L, any credit. The size of these values suggests very extreme tails.

Astronomers have been able to estimate the baryon number of the universe to one or two significant digits, but while they have a good estimate of the number of Standard Model leptons in the universe (to one significant digit) they have a less accurate estimate of the lepton number of the universe since they don't know the relative number of neutrinos and antineutrinos.

As one science education website explains:

Scientific estimates say that there are about 300 [neutrinos] per cubic centimeter in the universe. . . . Compare that to the density of what makes up normal matter as we know it, protons, electrons and neutrons (which together are called "baryons") – about 10^-7 per cubic centimeter. . . . the size of the observable universe is a sphere about 92 billion light-years across. So the total number of neutrinos in the observable universe is about 1.2 x 10^89! That's quite a lot – about a billion times the total number of baryons in the observable universe.
Thus, the baryon number of the universe is roughly 1.2 × 10^80, to about one significant digit.

To determine B−L for the universe, the number of protons and the number of charged leptons cancel, leaving the number of neutrons (about 5 × 10^78, given the ratio of protons to neutrons in the universe) minus the number of neutrinos plus the number of antineutrinos. The combined number of neutrinos and antineutrinos is 1.2 × 10^89, but we don't have nearly as good an estimate of the relative numbers of neutrinos and antineutrinos.

Neutrinos outnumber neutrons in the universe by a factor of about 2.4 × 10^10 (i.e. about 24 billion to 1), so the conserved quantity B−L in the universe (considering only Standard Model fermions) is almost exactly equal to the number of antineutrinos in the universe minus the number of neutrinos in the universe.
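As a sanity check, the counts above can be reproduced from the quoted densities (300 neutrinos per cm³, roughly 10^-7 baryons per cm³) and an observable universe about 93 billion light-years across; all inputs are the round figures from the text:

```python
import math

LY_CM = 9.461e17            # one light-year in centimetres
radius_cm = 46.5e9 * LY_CM  # comoving radius: half of ~93 billion light-years
n_nu = 300.0                # relic neutrinos + antineutrinos per cm^3
n_b = 1e-7                  # baryons per cm^3 (rough figure from the quote)

volume = 4.0 / 3.0 * math.pi * radius_cm**3
total_nu = n_nu * volume    # ~1e89, matching the 1.2e89 quoted above
total_b = n_b * volume      # roughly 10^80 baryons, to order of magnitude
print(f"{total_nu:.1e}  {total_b:.1e}  {total_nu / total_b:.1e}")
```

The neutrino-to-baryon ratio comes out at about 3 × 10^9, i.e. the "about a billion times" of the quote.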

As of March 2013, the best available observational evidence suggests that antineutrinos overwhelmingly outnumber neutrinos in the universe. As the abstract of a paper published in the March 15, 2013 edition of the New Journal of Physics by Dominik J Schwarz and Maik Stuke, entitled "Does the CMB prefer a leptonic Universe?" (open access preprint here), explains:

Recent observations of the cosmic microwave background at smallest angular scales and updated abundances of primordial elements indicate an increase of the energy density and the helium-4 abundance with respect to standard big bang nucleosynthesis with three neutrino flavour. This calls for a reanalysis of the observational bounds on neutrino chemical potentials, which encode the number asymmetry between cosmic neutrinos and anti-neutrinos and thus measures the lepton asymmetry of the Universe. We compare recent data with a big bang nucleosynthesis code, assuming neutrino flavour equilibration via neutrino oscillations before the onset of big bang nucleosynthesis. We find a preference for negative neutrino chemical potentials, which would imply an excess of anti-neutrinos and thus a negative lepton number of the Universe. This lepton asymmetry could exceed the baryon asymmetry by orders of magnitude.
Specifically, they found that the neutrino-antineutrino asymmetry supported by each of the several kinds of CMB data was in the range from 38 extra antineutrinos per 100 neutrinos to 2 extra neutrinos per 100 neutrinos, a range that prefers an excess of antineutrinos but is not inconsistent with zero at the one-standard-deviation level. The central value of 118 antineutrinos per 100 neutrinos would mean that B−L is hopelessly out of whack relative to zero.
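Taking that central value at face value, the implied size of B−L follows from arithmetic on the figures already given in this thread (1.2 × 10^89 total neutrinos plus antineutrinos, about 5 × 10^78 neutrons):

```python
total = 1.2e89       # relic neutrinos + antineutrinos (estimate above)
anti_per_100 = 118   # central value from Schwarz & Stuke
neutrons = 5e78      # rough neutron count from the text

nu = total * 100 / (100 + anti_per_100)               # neutrinos
antinu = total * anti_per_100 / (100 + anti_per_100)  # antineutrinos
# Protons cancel against charged leptons, so only neutrons and the
# neutrino-antineutrino difference contribute to B - L.
b_minus_l = neutrons - nu + antinu
print(f"{b_minus_l:.1e}")  # ~1e88, utterly dominated by the neutrino sector
```

The neutron term is eleven orders of magnitude too small to matter, which is the "swamped by the neutrino contribution" point made below.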

A 2011 paper considering newly measured PMNS matrix mixing angles (especially theta13), and WMAP data had predicted a quite modest relative neutrino-antineutrino asymmetry (if any), but it doesn't take much of an asymmetry at all to make B-L positive and for the neutrino contribution to this conserved quantity to swamp the baryon number contribution.

It is also worth recalling that while the Standard Model does have one process that violates B and L conservation, even that process conserves B−L. So, unless the ratio of antineutrinos to neutrinos in the universe is just right, it is impossible under any Standard Model process to have had initial conditions of B=0 and L=0. To the extent that the ratio of antineutrinos to neutrinos is not exactly 1 to 1, down to 89 orders of magnitude, the beauty of B=0 and L=0 that you get in a pure-energy initial condition is unattainable anyway, even allowing for sphaleron processes at the very dawn of the Big Bang.

In the Standard Model, baryon number (the number of quarks minus the number of antiquarks, divided by three) is conserved as is lepton number (the number of charged leptons and neutrinos minus the number of charged antileptons and antineutrinos), except in sphaleron process. Per Wikipedia:

A sphaleron (Greek: σφαλερός "weak, dangerous") is a static (time-independent) solution to the electroweak field equations of the Standard Model of particle physics, and it is involved in processes that violate baryon and lepton number. Such processes cannot be represented by Feynman diagrams, and are therefore called non-perturbative. Geometrically, a sphaleron is simply a saddle point of the electroweak potential energy (in the infinite-dimensional field space), much like the saddle point of the surface $z(x,y) = x^2 - y^2$ in three-dimensional analytic geometry.

In the standard model, processes violating baryon number convert three baryons to three antileptons, and related processes. This violates conservation of baryon number and lepton number, but the difference B−L is conserved. In fact, a sphaleron may convert baryons to anti-leptons and anti-baryons to leptons, and hence a quark may be converted to 2 anti-quarks and an anti-lepton, and an anti-quark may be converted to 2 quarks and a lepton. A sphaleron is similar to the midpoint ($\tau = 0$) of the instanton, so it is non-perturbative. This means that under normal conditions sphalerons are unobservably rare. However, they would have been more common at the higher temperatures of the early universe.
The trouble is that if you start with B=0 and L=0, as you would expect to in a Big Bang comprised initially of pure energy, it is hard to determine how you end up with the observed values of B and L in the universe which are so far from zero.

The mainstream view among physicists, although some theorists dissent from this analysis (there was a link for this to a paper from Helsinki U., but it went bad and I can't figure out how to fix it), is that Standard Model sphaleron processes in the twenty minutes during which Big Bang Nucleosynthesis is believed to have taken place, or in the preceding ten seconds between the Big Bang and the onset of Big Bang Nucleosynthesis, can't account for the massive asymmetry between baryons made of matter and baryonic anti-matter that is observed in the universe (also here) without beyond-the-Standard-Model physics (also here).

For example, even if you have BSM processes that allow neutrinoless double beta decay and proton decay, that isn't good enough. Those processes have to produce the huge B and L numbers seen today for the entire universe fast, in just half an hour or so.

In any case, the line between Big Bang Nucleosynthesis, which is worked out using Standard Model physics with some very basic assumptions and which matches the real evidence except for a modest Lithium-7 problem, and the ten seconds before it, is really where the science of cosmology gives way to speculation. Physics is incredibly powerful, but our understanding of the first ten seconds out of 13-and-change billion years still has mostly questions and few answers. Still, the credibility we get from applying the SM to BBN means that by the time you reach BBN, the BSM physics has to be pretty modest to non-existent. You need to fit almost all of your BSM physics that violates B number, L number and B−L number conservation into the first ten seconds, or however long the pre-BBN period actually was, which means you need a phase transition, and it has to be an incredibly fast and efficient process working in one direction on the B side and in one direction on the L side. And unless you get a perfect balance of neutrinos and antineutrinos in the universe today, you still can't have B=0 and L=0; B=0 with L not equal to 0 is pretty pointless in terms of beauty.

Likewise, after that point, sphaleron processes should be so rare that they can't explain the baryon asymmetry of the universe (BAU), which is one of the great unsolved problems in physics, unless you accept the totally arbitrary assumption that the initial conditions of the universe had equal amounts of matter and antimatter, which has no real solid basis except that it seems pretty and unique.

Also, even if you have an alternative to sphaleron processes, most of the popular supersymmetric and other BSM models which have B and L violations, like the sphaleron process, conserve B-L at least, so if the neutrino-antineutrino balance isn't perfect it doesn't work anyway.

(By the way, if your DM has the appropriate matter-versus-antimatter character to partially balance out a B−L issue with ordinary matter, you still have a problem if your DM is in the keV mass range or more, because the number of neutrinos is so huge that any meaningful imbalance there can't be mitigated: there are orders of magnitude fewer DM particles than neutrinos. The only kind of dark matter numerous enough to address a B−L imbalance arising from neutrinos and antineutrinos not balancing to many significant digits is axion-like dark matter, because only it has enough particles to rival the number of neutrinos in the universe. And if you use axion-like DM to address a B−L imbalance, you also need all of it to be created prior to BBN if the processes affecting ordinary matter conserve B−L.)

Another reference that could have been included earlier: As of a 2006 paper discussing the experimental evidence for baryon number non-conservation (and citing S. Eidelman et al. (Particle Data Group), Phys. Lett. B592 (2004) 1): "No baryon number violating processes have yet been observed." Some of the processes that B-L conservation might make possible in some beyond the standard model theories, such as proton decay, are also not observed.

