# Neutrino theory regarding rest masses

1. Jun 24, 2015

### Buzz Bloom

In another thread a point was raised that current theory (or perhaps experimental results) establishes a definite (or approximate) relationship between the average and the variance of the rest masses for the three flavors of neutrinos. I have tried to educate myself from material I can find on the internet, but I find myself confused by what I read. I would much appreciate any help.

(The other thread is:

The following is from an article, with a corresponding quote, that seems to discuss this question, but I feel my understanding remains marginal at best.

http://arxiv.org/abs/1308.5870
A consistent picture emerges and including a prior for the cluster constraints and BAOs we find that: for an active neutrino model with 3 degenerate neutrinos, ∑mν=(0.320±0.081)eV, whereas for a sterile neutrino, in addition to 3 neutrinos with a standard hierarchy and ∑mν=0.06eV, meffν,sterile=(0.450±0.124)eV and ΔNeff=0.45±0.23.
I find this language confusing. What are the conceptual differences between:
1) an active neutrino model with 3 degenerate neutrinos, ∑mν=(0.320±0.081)eV, AND
2) 3 neutrinos with a standard hierarchy and ∑mν=0.06eV, meffν,sterile=(0.450±0.124)eV and ΔNeff=0.45±0.23?

I interpret (1) to mean that the sum of the rest masses for the three flavors of neutrinos is 320 meV, and that the experimental error range for this sum is +/- 81 meV. I don't understand (2) at all. Can someone offer an explanation?

Assuming I am correct about (1) and ignoring the error range, I interpret that the possible difference between the largest and smallest rest mass could be almost as large as 320 meV, say 319 meV, and as small as a very small number, say perhaps 1 meV.

If I am incorrect in my interpretations, I hope someone will post an explanation about my errors.

Does anyone know of any other similar experiments, or theory, that would substantially narrow the range of possible differences between the largest and smallest rest mass?

Last edited: Jun 24, 2015
2. Jun 24, 2015

### mathman

Have you tried Google "neutrino"?

3. Jun 24, 2015

### Buzz Bloom

Hi Mathman:

Yes. Several times with various other technical words in my search as well. After reading your post, I just did one more search which produced 870 lines. Scanning these lines, none seemed to have anything new beyond what I found previously.

Thanks for your post,
Buzz

4. Jun 24, 2015

### fzero

5. Jun 25, 2015

### ChrisVer

page 2 in the paper describes how the two analyses are different.

6. Jun 25, 2015

### Buzz Bloom

Hi fzero and ChrisVer:

Thanks for the citation. I do also have an interest in cosmology, but the citation is particularly welcome for its different result from the citation
http://arxiv.org/abs/1308.5870
I gave in post #1.

I am looking at page 2 of the article cited by fzero:
http://arxiv.org/abs/1404.1740 : Neutrino cosmology and Planck by Julien Lesgourgues and Sergio Pastor, New Journal of Physics 16 (2014) 065002.

I don't see anything there about comparing the two analyses, which give the following different results:
1) ∑mν=(0.320±0.081)eV
2) 0.23 eV at the 95% confidence level
I do calculate that the (2) result seems to be marginally inside the error range of (1):
.320 - .081 = .239.

ChrisVer, can you post a quote from fzero's citation that relates to showing "how the two analyses are different"?

Thanks for your posts,
Buzz

Last edited by a moderator: May 7, 2017
7. Jun 25, 2015

### fzero

I was mistakenly under the impression that 1404.1740 would discuss precisely how the sterile neutrino is included in the analysis via a contribution to the energy density, Friedmann equation, etc., as it does in fact discuss for the standard active neutrino species. It is my understanding that the neutrino cosmology itself is well established, and that the difference in the three-neutrino models from paper to paper is related to how the authors attempt to combine several different datasets. I don't understand the statistical analysis well enough to comment further.

To the best of my understanding, the sterile neutrino models treat 3 of the neutrinos exactly the same way as in the standard analysis. So these active neutrinos decouple at some temperature $T_\text{dec}\sim 1~\text{MeV} \gg m_\nu$. After decoupling the neutrinos act like relativistic particles with temperature $T_\nu$. Shortly after neutrinos decouple, the photon itself decouples. From entropy considerations, the photon and neutrino temperatures are related and we end up with a relationship between the energy density fractions

$$\frac{\rho_\nu}{\rho_\gamma} = \frac{7}{8} \left( \frac{4}{11} \right)^{4/3} N_\text{eff},$$

whose derivation is described with more detail in that 1404.1740 paper or a typical cosmology text. The effective number of neutrinos $N_\text{eff}$ turns out to be slightly greater than 3 because there are still some small interactions between neutrinos and electrons at the time of photon decoupling.
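For anyone who wants to see the numbers, here is a quick Python sketch (my own illustration, not from the paper; $N_\text{eff}=3.046$ is the commonly quoted standard-model value) evaluating that ratio:

```python
# Neutrino-to-photon energy density ratio after e+e- annihilation:
#   rho_nu / rho_gamma = (7/8) * (4/11)^(4/3) * N_eff
def density_ratio(n_eff):
    return (7 / 8) * (4 / 11) ** (4 / 3) * n_eff

# N_eff = 3.046 is the commonly quoted standard-model value (assumed here).
print(round(density_ratio(3.046), 3))  # ~0.692
```

So the relic neutrinos carry roughly 69% of the photon energy density while they are still relativistic.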

As $T_\nu$ approaches $m_\nu$ the relativistic limit ceases to be a good approximation, so the energy density must be calculated numerically without the approximation. At present, $T_\nu\sim 10^{-4}~\text{eV}$ so at least the heaviest neutrinos are nonrelativistic today based on the mass splittings inferred from neutrino oscillations.

The fourth neutrino is sterile, which means that it only interacts with ordinary matter via gravity and some small Yukawa couplings. Hence it decouples at a temperature much higher than $T_\text{dec}$. Below the electroweak scale, the sterile neutrino mixes with the active neutrinos via a mass term, so the density of sterile neutrinos is related to that of the active neutrinos via the mixing angle. The analysis is going to be fairly model dependent and I haven't found a reference that clearly spells out the state of the art.

Anyhow, back to your original question, i.e. what is the difference between

1) an active neutrino model with 3 degenerate neutrinos, ∑mν=(0.320±0.081)eV, AND
2) 3 neutrinos with a standard hierarchy and ∑mν=0.06eV, meffν,sterile=(0.450±0.124)eV and ΔNeff=0.45±0.23?

So in scenario 1, the analysis proceeds by assuming a common mass $m_\nu$ for the neutrinos, since the cosmological data is not precise enough to be sensitive to the details of the mass splitting between neutrinos. $\sum m_\nu$ is the free parameter added to the cosmological model.

In scenario 2, $\sum m_\nu=0.06~\text{eV}$ is assumed for the 3 active neutrinos, while the mass of the sterile neutrino and the value of $N_\text{eff}$ are taken as the free parameters.

8. Jun 25, 2015

### ChrisVer

I was referring to your post, not fzero's.

9. Jun 26, 2015

### Buzz Bloom

Hi fzero:

I much appreciate your post, although there are some points that I don't understand as well as I would like to. I have read about the cosmological decoupling before, so I am comfortable with your explanation about that. I am, however, still confused about scenario (2).
I found the following definition at https://en.wikipedia.org/wiki/Sterile_neutrino .
Sterile neutrinos (or inert neutrinos) are hypothetical particles (neutral leptons – neutrinos) that do not interact via any of the fundamental interactions of the Standard Model except gravity. The term sterile neutrino is used to distinguish them from the known active neutrinos in the Standard Model, which are charged under the weak interaction.​
a) Is there a consensus among the physicist community about sterile neutrinos: that they are a hypothetical particle (that might possibly be an explanation for the nature of dark matter)? Is there any other respected concept of what they are?
b) What is the definition of Neff?
c) What is the theoretical explanation that a relationship exists between the masses of active neutrinos and sterile neutrinos? The existence of such a relationship seems to be implied by: "the mass of the sterile neutrino and the value of Neff are taken as the free parameters."

From my post #6
d) What does "95% confidence level" mean? Is there an implied error range that 95% corresponds to? If not, how can it be judged whether this result is experimentally compatible with the Battye and Moss result ∑mν=(0.320±0.081)eV?
e) An error range (like +/- 0.081) is usually understood to be some number of standard deviations, or a specific probability that the actual physical value, as it is ultimately measured, will turn out to be within the error range. I guess that there must be a convention within the community of physics researchers about what this error range means in such terms. Please post what this is. (I know that in the community of social psychologists, for example, this probability convention may typically be 80%.)

Buzz

Last edited: Jun 26, 2015
10. Jun 26, 2015

### Buzz Bloom

Hi ChrisVer:

I apologize for my misunderstanding. My only excuse is that it's another of my all too frequent senior moments.

After looking at page 2 of the Battye and Moss article, I now see that the ∑mν=0.06eV value represents a lower bound on the sum, while ∑mν=(0.320±0.081)eV is a measurement of an actual value for the sum.

Thanks for your help clarifying this for me,
Buzz

11. Jun 26, 2015

### Buzz Bloom

Hi fzero and ChrisVer:

Based on the discussion in this thread, I now conclude that my interpretation in post #1 is correct:
∑mν=320 meV (ignoring the error range) means that the possible difference between the largest and smallest rest mass could be almost as large as 320 meV, say for example 319 meV, and as small as a very small number, say for example 1 meV.
I also conclude that this means that the largest of the three masses cannot be less than 1/3 × 320 meV ≈ 106.7 meV. Is there any theory about which of the three flavors is expected to have the largest rest mass?
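To make that arithmetic concrete, here is a small sketch (my own toy mass patterns, chosen purely for illustration) checking that the largest of three masses can never fall below their mean, while the spread can be almost anything consistent with the sum:

```python
# Toy check: for three nonnegative masses summing to 320 meV, the largest
# is at least the mean, 320/3 ~ 106.7 meV, while the spread
# (largest - smallest) can range from ~0 up to nearly 320 meV.
total = 320.0  # meV, central value with the error range ignored

def largest_lower_bound(total, n=3):
    # The maximum of n numbers can never be below their mean.
    return total / n

degenerate = [total / 3] * 3            # spread ~ 0
hierarchical = [0.5, 0.5, total - 1.0]  # spread = 318.5 meV

assert max(degenerate) >= largest_lower_bound(total)
assert max(hierarchical) >= largest_lower_bound(total)
print(round(largest_lower_bound(total), 1))  # 106.7
```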

12. Jun 26, 2015

### fzero

Sterile neutrinos are hypothetical, as there's no direct evidence for them and no indirect evidence that couldn't be explained by some other hypothetical scenario. From the point of view of particle physics they are most closely analogous to the Standard Model right-handed leptons, like the RH electron, $e_R$. $e_R$ is an $SU(2)$-singlet, but has an electric charge, so it doesn't participate in the weak interaction directly, but does have EM interactions. A sterile neutrino would be like $e_R$, but with no electric charge. It can participate in Yukawa couplings to the Higgs in order to give mass to the neutrinos, etc.

I tried to outline a bit in the last post, but I would direct you to a cosmology text for a more detailed explanation. Basically you need to compute the energy density of neutrinos, and you'd generally expect it to be proportional to the number of species. If neutrino decoupling were perfectly instantaneous, we could just use Friedmann's equation to describe the evolution from the temperature at decoupling $T_{\nu,\text{dec}}$ to lower temperatures. However, at some slightly later time after neutrino decoupling, the temperature of the universe drops below the electron mass, so $e^\pm$ annihilation to photons is favored, leading to decoupling of the photon. It turns out that there are still some residual interactions between the electrons and neutrinos, so some of the energy that would have gone to photons is instead transferred to neutrinos. Someone made a choice long ago to parameterize this by adding $\Delta N$ to $3$ to get an effective number of neutrinos $N_\text{eff}$.

As I mentioned above, if a sterile RH neutrino is added to the list of particles, we can generate a Dirac mass term for neutrinos, analogous to the one for electrons. Since the sterile neutrino is neutral under the electroweak interaction, we could also have a Majorana mass term for it. So generally we can write the mass terms using a mass matrix in the form (for convenience we show only one active neutrino $\nu_a$)

$$\begin{pmatrix} \bar{\nu}_a & \bar{\nu}_s \end{pmatrix} \begin{pmatrix} 0 & M \\ M & B \end{pmatrix} \begin{pmatrix} \nu_a \\ \nu_s \end{pmatrix},$$

where $M$ is the Dirac mass and $B$ is the Majorana mass. If $B=0$, the eigenvalues are $\pm M$, so the mass-squared values of the mass eigenstates are $M^2$. If $B\neq 0$, there is a mass-splitting between the eigenstates. If $B$ is large enough, one eigenstate will be very massive compared to the lighter state.
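The eigenvalue behaviour described here is easy to check numerically. A minimal sketch (arbitrary units; the values of $M$ and $B$ are illustrative, not fitted to anything):

```python
import math

# Eigenvalues of the 2x2 mass matrix [[0, M], [M, B]] from the post:
#   lambda = (B +/- sqrt(B^2 + 4 M^2)) / 2
def mass_eigenvalues(M, B):
    disc = math.sqrt(B * B + 4 * M * M)
    return ((B - disc) / 2, (B + disc) / 2)

# B = 0: eigenvalues +/- M, so both mass-squared values equal M^2.
lo, hi = mass_eigenvalues(1.0, 0.0)
print(lo, hi)  # -1.0 1.0

# B >> M (seesaw-like): one heavy state ~ B, one light state ~ -M^2/B.
lo, hi = mass_eigenvalues(1.0, 100.0)
print(round(lo, 4), round(hi, 2))  # -0.01 100.01
```

This is just the standard 2x2 diagonalization; it shows how a large Majorana mass $B$ splits the spectrum into one very heavy and one very light state.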

I think fixing $\sum m_\nu$ for the active species is done mainly so that the numerical analysis is much easier. Presumably the addition of another free parameter to the physics computations represents an exponential increase in complexity, while the statistical analysis also becomes much more complicated.

The question really doesn't have an answer, since we don't expect the flavor eigenstates to be the same as the mass eigenstates. This was already evident for the sterile neutrino system above, but in the context of active neutrinos alone, there is a neutrino mixing matrix, called the PMNS matrix, that relates the flavor and mass eigenstates. This is why in the literature you see the observed mass splittings parameterized as $\Delta m_{23}^2$, etc, instead of $\nu_\tau-\nu_\mu$ splittings. The latter splittings don't really make sense because of the mixing.

13. Jun 26, 2015

### ChrisVer

Confidence level is associated with confidence intervals and sampling theory, so it doesn't only concern neutrinos but is a general statistical quantity.
http://stattrek.com/statistics/dictionary.aspx?definition=confidence_level

14. Jun 26, 2015

### Buzz Bloom

Hi fzero:

Your post #12 is very helpful to my understanding. The following is my attempt to play back my new understanding related to my (a), (b), and (c) questions, to see if I got it right.

a) The reality of sterile neutrinos is experimentally unconfirmed, but there is an elaborate theory about many of their properties.

b) In interpreting cosmological evidence, it is convenient to include the theoretical consequences of sterile neutrinos someday being confirmed to be real. Regarding decoupling in particular, Neff is the calculated total rest mass of the three neutrino flavors assuming sterile neutrinos are real.

c) When an experiment attempts to indirectly determine the mass of any particular flavor of neutrino, the result must be probabilistic. Assuming the experiments include enough samples to calculate precise enough values, they would yield three distinct values for the mass of a particular flavor of neutrino (with an error range for each). The values for the relative frequency of these mass values for the population of samples would depend on the nature of the particular experiment.​

With respect to (c), I am much less confident regarding the following conjectures:

As an example, the in-process KATRIN experiment, based on measuring the distribution of the energies of the electrons emitted during tritium beta decay, might get such a result. Different experiments (perhaps based on the beta decay of other atoms), if sufficiently precise, would get the same three values for the three distinct masses, but with (perhaps) different relative frequencies of occurrence.

An experiment attempting to measure the mass of νμ or ντ (if sufficiently precise) would (perhaps) get the same three values for the three distinct masses as those in a νe experiment, but with different relative frequencies of occurrence. (I have not the slightest idea how such an experiment might be set up so that the neutrinos associated with the creation of muons or taus could be measured in a way analogous to how the KATRIN experiment measures electrons. It would presumably measure the energy of the muons or taus produced.)
Assuming the above conjectures are correct, is there any reason to expect that the variety of relative frequencies from electron-emitting experiments would have a pattern distinctly different from the corresponding patterns of muon- and tau-emitting experiments?

Thank you again for your discussion,
Buzz

Last edited: Jun 26, 2015
15. Jun 26, 2015

### Buzz Bloom

Hi ChrisVer:

I looked at the site you posted and found the definitions there helpful. Although the math appears to be the same as it was when I took courses in probability and statistics as an undergraduate in the 1950s, the language (jargon) has changed quite a bit.

Here was a particularly helpful example:
A 95% confidence level implies that 95% of the confidence intervals would include the true population parameter.​

I now see that my assumption was correct: the 95% confidence level is calculated based on some probability distribution, which of course has a mean and percentiles. However, the particular relationship between a confidence level and an error range may well depend on the particular probability distribution involved. In spite of this, a specific confidence level would imply that an estimate for the error range can be calculated, although the nature of the distribution might make this very difficult. It's just too bad it wasn't calculated and reported, since its absence makes it impossible to relate this result to others that do have error ranges. It is also impossible to include this result in calculating a weighted average of the mean, using the error ranges to determine the appropriate weights. This would produce a smaller error range than any of the results included in the average.

Thanks for the post,
Buzz

16. Jun 26, 2015

### fzero

No, the cosmological analysis I described was originally developed for massless, active neutrinos, i.e. before neutrino masses or sterile neutrinos were seriously considered. Furthermore, decoupling (neutrino or photon) occurs at energies of order the electron mass, so around $10^5~\text{eV}$. Even if neutrino masses were a few eV, this is safely in the range where the neutrinos can be considered to be ultrarelativistic, i.e. massless. So around these energies, the equation of state is $\rho_\nu = (\text{const})\, T_\nu^4$. The deviation of $N_\text{eff}$ from 3 has nothing to do with neutrino masses or sterile neutrinos, but has to do with the reaction rate for $e^+e^-\rightarrow \gamma\gamma$ in a bath of photons and neutrinos and similar concerns.

Now, if we add neutrino masses to the calculation, at decoupling the effect is only a correction of order one part in $10^{5}$ or so, which is beyond the accuracy of the observational data, so I'm sure it is ignored. Where the masses come in is when we follow the evolution of the universe to lower temperatures. As $T_\nu$ approaches $m_\nu$, the equation of state above becomes less and less valid, so one must resort to a more detailed description that seems to require numerical techniques.

If we had added another active neutrino, we'd expect $N_\text{eff}$ to be $4 + \Delta N$, where $\Delta N$ is whatever you get by computing the corrections due to residual interactions including all 4 neutrino species. If the added neutrino is instead sterile, it would be expected to decouple from electrons at a much higher temperature than the active neutrinos. However, since the sterile neutrino can oscillate into active neutrinos, it should have a contribution to $N_\text{eff}$ that is suppressed relative to the result for a 4th active neutrino. The result from the original paper that $\Delta N_\text{eff} \sim 0.45<1$ is consistent with this expectation.

This is fairly accurate, but I will take the indulgence to make it more precise. Suppose we had a source that emitted a beam of electron neutrinos. If we knew precisely the PMNS matrix, we could write the electron neutrino state as a linear superposition of mass eigenstates. Over time, the coefficients in the superposition evolve in a way described by Schrodinger's equation. At a random point in time, we could operate with the inverse of the PMNS matrix and we would find that the state is now also in a superposition of the flavor eigenstates. Referring to the collection of neutrinos in the beam, we interpret this to mean that some of the electron neutrinos have oscillated into muon and tau neutrinos.

Now suppose that we had a detector that could measure the mass of a neutrino in the beam. If we performed a large number of measurements, we would measure three distinct values of mass. The relative frequency of the measurement of different values of the mass would be related to the modulus squared of the corresponding coefficients in the superposition of mass eigenstates.

What is more typical is that we would have a detector that would measure the flavor of a neutrino from the beam. Then the distribution we'd measure over a large number of measurements would give us information about the coefficients in the superposition of flavor eigenstates. This doesn't allow us to perfectly reconstruct the neutrino mass values, but it does give us valuable information about certain combinations of the masses and PMNS matrix elements.
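For intuition, here is a two-flavor toy version of this picture (my own sketch; the mixing angle and mass splitting below are illustrative numbers, roughly at the atmospheric scale, not values from the thread). It uses the standard two-flavor appearance probability with $L$ in km and $E$ in GeV:

```python
import math

# Two-flavor toy oscillation: appearance probability
#   P = sin^2(2 theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
# derived from the time evolution of the superposition of mass eigenstates.
def p_appear(theta, dm2, L, E):
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2 * L / E) ** 2

theta = 0.6    # mixing angle in radians (assumed, illustrative)
dm2 = 2.4e-3   # eV^2, roughly the atmospheric splitting (illustrative)
for L in (0, 250, 500, 750):  # baseline in km, at E = 1 GeV
    print(L, round(p_appear(theta, dm2, L, 1.0), 3))
```

The probability oscillates with baseline, which is why the source-detector distance matters so much in these experiments.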

As you say, the nature of the experiment matters quite a lot. Generally we cannot construct a beam of purely one flavor of neutrino, so impurities will affect our measurements. Also, since the state is evolving with time, the distance between the source and detector has a particular effect on the measurements.

I'm not familiar with the details of the experiment, but from the original proposal for the experiment, they seem to claim that beta decay is sensitive to an effective electron neutrino mass (from eq 9)

$$m^2(\nu_e) = \sum_i |U_{ei}|^2 m^2_i,$$

where $m_i^2$ are the mass squared eigenvalues and $U_{ij}$ is the PMNS matrix. So I don't think the way the experiment works would measure the actual mass squared eigenvalues.
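As a numerical illustration of that formula (the $|U_{ei}|^2$ fractions and mass eigenvalues below are placeholders I chose for this sketch, not measured PMNS values):

```python
import math

# Effective electron-neutrino mass from the formula quoted above:
#   m^2(nu_e) = sum_i |U_ei|^2 m_i^2
def m_eff(u_ei_sq, masses):
    # Unitarity: the |U_ei|^2 in the electron row must sum to 1.
    assert abs(sum(u_ei_sq) - 1.0) < 1e-9
    return math.sqrt(sum(u2 * m * m for u2, m in zip(u_ei_sq, masses)))

u_ei_sq = [0.68, 0.30, 0.02]    # |U_e1|^2, |U_e2|^2, |U_e3|^2 (assumed)
masses = [0.010, 0.013, 0.050]  # mass eigenvalues in eV (assumed)
print(round(m_eff(u_ei_sq, masses), 4))  # 0.0129
```

Note that very different sets of eigenvalues can give the same $m^2(\nu_e)$, which is exactly why such an experiment does not pin down the individual mass eigenvalues.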

I think any experiment will be highly sensitive to exactly what is measured. Perhaps there is a process that is sensitive to an analogous effective muon neutrino mass $m^2(\nu_\mu)$ given by a similar expression as above. But it is hard for me to generalize.

Most of the differences might be captured in the explanation I gave above about the behavior of the superpositions of eigenstates. But there might also be differences and limitations due to the specific processes that the detectors rely on to make measurements. For example, the beta decay is only sensitive to a certain linear combination of masses, but not to the mass eigenvalues independently. Another type of experiment might try to actually measure the electron or muon emitted when a neutrino collides with a component of the detector, so again we don't isolate a pure mass eigenstate.

17. Jun 27, 2015

### snorkack

Why not? Generally beta decay emits a single flavour, whether electron antineutrino or electron neutrino.
Why not? If a neutrino collides with a component of the detector, then the electron is a stable state whose energy and momentum could, in principle, be measured with arbitrary precision. If we measure the energies and momenta of all visible components involved with enough precision, could we determine the energy and momentum of the neutrino, so as to ascertain its rest mass as having been a specific mass eigenstate, and its flavour as electron neutrino because it formed an electron?

18. Jun 27, 2015

### fzero

Yes, but I wouldn't call that a beam, since the neutrinos are emitted in all directions. Generally a neutrino "beam" is generated by colliding high energy protons at a fixed target, which produce pions and kaons. These can be focused while most of them have not decayed. Some fraction decay to products including electron and muon neutrinos and some further fraction of these neutrinos have momenta along the direction of their parents momenta.

The KATRIN experiment mentioned above focuses the electrons from the beta decay and sends them into a very precise spectrometer.

The mass eigenstates are not the momentum eigenstates. When we measure an electron, we know that, at the interaction, the neutrino was in the state

$$|\nu_e\rangle = \sum_i U_{ei} |\nu_i\rangle,$$

where $|\nu_i\rangle$ are the mass eigenstates. The expectation value of mass squared is

$$\langle \nu_e|m^2|\nu_e\rangle= \sum_i |U_{ei}|^2 m^2_i,$$

which is where that formula for the effective mass mentioned earlier came from.

19. Jun 27, 2015

### ohwilleke

A 95% confidence interval corresponds to approximately +/- 2 standard deviations (more precisely, 1.96). A +/- 1 standard deviation confidence interval (which is the convention in the absence of notation to the contrary) is a 68% confidence interval. In this area of physics, unless noted otherwise, the probability distribution is assumed to be Gaussian, which is to say the normal bell curve distribution. There are tables to convert percentiles to standard deviations, but most people just memorize a couple of key values.

20. Jun 27, 2015