b -> s µµ decays: Current status

In summary, the current status of the b -> s mu+ mu- decays, specifically B^+ -> K^+ mu+ mu- and B^0 -> K*^0 mu+ mu-, shows a potential deviation from the Standard Model predictions. These measurements point towards fewer muon pairs, relative to electron pairs, than the SM predicts, and the angular analysis of B^0 -> K*^0 mu+ mu- also shows an interesting deviation at intermediate q^2 values. Theoretically, the deviations could be explained by new four-fermion contact interactions or by new heavy particles, but these explanations are not yet conclusive. Other measurements, such as the rare decay B_s -> mu+ mu-, could help shed light on the situation. The LHCb seminar on Tuesday may provide more updates on these measurements.
  • #36
What they saw is a result that is consistent with the Standard Model, which is lepton universal.
It should not be "we saw evidence that the standard model is incorrect (= breaking of LFU in b-quark decays), with a significance of 3.1 std", but something different and precise. When the 3.1 std is calculated you don't assume the SM to be incorrect.
Concise is good, but it should not come at the expense of clarity and precision in scientific articles. There is enough confusion around p-values and their meaning in other fields.
 
  • #37
It is correct, precise, and it is absolutely clear to every reader of the intended audience. The authors are not responsible for you not understanding what significance means.
 
  • #38
mfb said:
It is correct, precise, and it is absolutely clear to every reader of the intended audience. The authors are not responsible for you not understanding what significance means.

So, if it is clear, does the alternative hypothesis come with a significance quoted by the collaboration (which happens to be 3.1σ)?
Let's hope the journal will correct it.
 
  • #39
ChrisVer said:
the alternative hypothesis

Which alternative hypothesis? There are literally an infinite number of them.

ChrisVer said:
Let's hope the journal will correct it.

Correct what? They have done nothing wrong.
 
  • #40
Vanadium 50 said:
Which alternative hypothesis? There are literally an infinite number of them.

There is the null hypothesis, which is the SM, and which is lepton universal. The "alternative" hypothesis is that there is lepton universality violation ("breaking of LFU"). The abstract states that the alternative hypothesis comes with a significance of 3.1 std.
 
  • #41
The null hypothesis is that R = 1.0. There are an infinite number of alternative hypotheses - R could be 2, or 0.5, or 3, or some other number. I'm afraid I am 100% with @mfb here: It is correct, precise, and it is absolutely clear to every reader of the intended audience. The authors are not responsible for you not understanding what significance means.
 
  • #42
Vanadium 50 said:
That said, there is zero chance that this is a real effect due to SM miscalculation.

Am I reading this right: if R is not 1.0, then this is 100% incompatible with the SM, and not some calculation that we're doing wrong? Is there a running tally of things this rules out/does not rule out?
 
  • #43
Vanadium 50 said:
The null hypothesis is that R = 1.0. There are an infinite number of alternative hypotheses - R could be 2, or 0.5, or 3, or some other number.

Saying that it could be 2, 0.5, 3 or any other number is equivalent to saying it is [itex]R\ne 1[/itex]. For rejecting the null you don't need to know the value of R, just the incompatibility of your measurement with the value "1". In particular the measurement outcome is a boolean: LFU or LFUV.

So, can you explain how the significance is being calculated from the non-null hypothesis? Would it be from the [itex]p_{s+b}[/itex]? If yes, I will agree with you and mfb that what they say is correct, precise and clear. If it comes from [itex]p_b[/itex] (or what is generally referred to as the p-value given the background-only hypothesis), then it is unclear. To my knowledge the significance is calculated from [itex]p_b[/itex]. There is a whole article in the PDG review you can study:
https://pdg.lbl.gov/2019/reviews/rpp2018-rev-statistics.pdf
about its calculation. In particular, you can have a look between Eq. 39.45 (definition of the p-value) and Eq. 39.46 (definition of significance). The significance is associated with the p-value obtained from the null hypothesis, not from the non-null one.

So the result cannot be "evidence of the non-null hypothesis with a significance of XX", but rather "tension with the null hypothesis at the level of XX std, which constitutes evidence for the non-null hypothesis" or something similar.
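To make the convention concrete, here is a minimal numerical sketch of that mapping (the p-value used is made up for illustration, and scipy is assumed to be available):

[code=python]
# Minimal sketch of the PDG convention (Eqs. 39.45-39.46 of the review
# linked above): the significance Z is the Gaussian quantile of 1 - p,
# where p is the p-value computed under the null (lepton-universal) hypothesis.
from scipy.stats import norm

p_null = 1e-3                 # hypothetical p-value under the null, for illustration
Z = norm.ppf(1 - p_null)      # one-sided significance in standard deviations
print(f"p = {p_null} corresponds to Z = {Z:.2f} sigma")

# Inverting: a quoted 3.1 sigma corresponds to
p_quoted = 1 - norm.cdf(3.1)
print(f"Z = 3.1 sigma corresponds to p = {p_quoted:.2e}")  # ~9.7e-4
[/code]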
 
  • #44
At this point, the thread is well and truly derailed. Mfb said it well: "It is correct, precise, and it is absolutely clear to every reader of the intended audience. The authors are not responsible for you not understanding what significance means."

My advice to you, if you are convinced you are correct, is to publish a Comment on this paper explaining how the authors got it so wrong.
 
  • #45
nolxiii said:
Am I reading this right, if R is not 1.0, then this is 100% incompatible with SM, and not some calculation that we're doing wrong? Is there a running tally of things this rules out/does not rule out?
The SM expectation is not exactly 1.0000 but it's so close that the difference doesn't matter for now. Any deviation from the SM prediction is 100% incompatible with it.
ChrisVer said:
can you explain how the significance is being calculated from the non-null hypothesis?
There is no such thing, and please start a new thread if you want to discuss this further. This thread is the wrong place.
 
  • #46
Statistically, my main issue is cherry picking.

They've found several instances where you have LFV and they combine those to get their significance in sigma. They ignore the many, many other instances where the results are consistent with LFU, even though the justification for excluding those results is non-obvious.

For example, lepton universality violations are not found in tau lepton decays or pion decays, and are not found in anti-B meson and D* meson decays or in Z boson decays. There is no evidence of LFV in Higgs boson decays either.

As one paper notes: "Many new physics models that explain the intriguing anomalies in the b-quark flavour sector are severely constrained by Bs-mixing, for which the Standard Model prediction and experiment agreed well until recently." Luca Di Luzio, Matthew Kirk and Alexander Lenz, "One constraint to kill them all?" (December 18, 2017).

Similarly, see Martin Jung, David M. Straub, "Constraining new physics in b→cℓν transitions" (January 3, 2018) ("We perform a comprehensive model-independent analysis of new physics in b→cℓν, considering vector, scalar, and tensor interactions, including for the first time differential distributions of B→D∗ℓν angular observables. We show that these are valuable in constraining non-standard interactions.")

An anomaly disappeared between Run-1 and Run-2 as documented in Mick Mulder, for the LHCb Collaboration, "The branching fraction and effective lifetime of B0(s)→μ+μ− at LHCb with Run 1 and Run 2 data" (9 May 2017) and was weak in the Belle Collaboration paper, "Lepton-Flavor-Dependent Angular Analysis of B→K∗ℓ+ℓ−" (December 15, 2016).

When you are looking at deviations from a prediction, you should include all experiments that test that prediction.

In a SM-centric view, all leptonic or semi-leptonic W boson decays arising when a quark decays to another kind of quark should be interchangeable parts (subject to mass-energy caps on final states determined by the initial state), since all leptonic or semi-leptonic W boson decays (either at tree level or one step removed at the one-loop level) are deep down the same process. See, e.g., Simone Bifani, et al., "Review of Lepton Universality tests in B decays" (September 17, 2018). So, you should be lumping them all together to determine the significance of the evidence for LFV.

Their justification for not pooling the anomalous results with the non-anomalous ones is weak and largely not stated expressly. At a minimum, the decision about which results go into the LFV bunch used to get the 3.1 sigma and which go into the LFU bunch that isn't used to moderate the 3.1 sigma in any way is highly BSM-model dependent, and the importance of that observation is understated in the analysis (basically just ignored).

The cherry picking also gives rise to look-elsewhere-effect issues. If you've made eight measurements in all, divided among three different processes, the look-elsewhere effect is small. If the relevant universe is all leptonic and semi-leptonic W and Z boson decays, in contrast, there are hundreds of measurements out there, and even after you prune the measurements limited by mass-energy conservation, you still have huge look-elsewhere effects that trim one or two sigma from the significance of your results.
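To put rough numbers on that last claim, here is a minimal sketch using the simple independent-trials approximation (the trial counts N are made up for illustration; none of the cited analyses quotes these numbers):

[code=python]
# Rough sketch of the trial-factor ("look elsewhere") argument, using the
# independent-trials approximation p_global ~ 1 - (1 - p_local)^N.
from scipy.stats import norm

Z_local = 3.1
p_local = 1 - norm.cdf(Z_local)        # ~9.7e-4, one-sided
for N in (1, 10, 100):                 # illustrative trial counts, not real ones
    p_global = 1 - (1 - p_local)**N
    Z_global = norm.ppf(1 - p_global)
    print(f"N = {N:>3}: p_global = {p_global:.3e}, Z_global = {Z_global:.2f} sigma")
# N = 100 already pulls 3.1 sigma down to ~1.3 sigma, illustrating the
# "one or two sigma" haircut described above.
[/code]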
 
  • #47
ohwilleke said:
In a SM-centric view, all leptonic or semi-leptonic W boson decays arising when a quark decays to another kind of quark should be interchangeable parts
The whole point is to look for deviations from the SM. Flavor-blind new physics is possible, but it's not the only option. LFV new physics will necessarily show up in some spots more than others; it's perfectly possible to have new physics in bsmumu while not seeing strong changes in tau decays or whatever. So unless you show how the proposed new models would influence the measurements you cited, I don't see their relevance.

Of course there are many couplings that could be changed. Everyone is aware of that.
 
  • #48
Is there a reason that the following analysis is wrong?

It is much easier to come up with a Standard Model-like scenario in which there are too many electron-positron pairs produced than it is to come up with one where there are too many muon or tau pairs produced, because a limited amount of mass-energy in a decay channel source can make it possible to produce electron-positron pairs but not muon-antimuon or tau-antitau pairs.

The ratios seem to be coming up at more than 80% but less than 90% of the expected Standard Model number of muon pair decays relative to electron-positron decays.

The simplest answer would be that there are two processes.

One produces equal numbers of electron-positron and muon pair decays, together with a positively charged kaon in each case, as expected. The pre-print describes this process:
The B+ hadron contains a beauty antiquark, b, and the K+ a strange antiquark, s, such that at the quark level the decay involves a b → s transition. Quantum field theory allows such a process to be mediated by virtual particles that can have a physical mass larger than the mass difference between the initial- and final-state particles. In the SM description of such processes, these virtual particles include the electroweak-force carriers, the γ, W± and Z bosons, and the top quark. Such decays are highly suppressed and the fraction of B+ hadrons that decay into this final state (the branching fraction, B) is of the order of 10^−6.

A second process with a branching fraction of about 1/6th that of the primary process produces a positively charged kaon together with an electromagnetically neutral particle that has more than about 1.02 MeV of mass (enough to decay to an electron-positron pair), but less than the roughly 211.3 MeV of mass necessary to produce a muon pair when it decays.

It turns out that there is exactly one such known particle, fundamental or composite: the neutral pion, with a mass of about 134.9768(5) MeV. About 98.8% of the time, a πº decays to a pair of photons, and that decay would be ignored since the end product doesn't match the filtering criteria. But about 1.2% of the time it decays to an electron-positron pair together with a photon (the Dalitz decay), and all other possible decays are vanishingly rare by comparison.

So, we need a decay of a B+ meson to a K+ meson and a neutral pion with a branching fraction of about (10^-6)*(1/6)*(1/0.012)= 1.4 * 10^-5.

It turns out that B+ mesons do indeed decay to K+ mesons and neutral pions with a branching fraction of 1.29(5)×10^-5, which is exactly what it needs to be to produce the apparent violation of lepton universality observed in B+ meson decays.
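For the record, redoing that back-of-the-envelope arithmetic explicitly (a minimal sketch; the 10^-6 and 1/6 inputs are the rough figures quoted above, not precise values):

[code=python]
# Back-of-the-envelope check of the branching-fraction arithmetic above.
BF_Kll    = 1e-6      # order of magnitude of B+ -> K+ l+ l- (from the quoted preprint)
fraction  = 1/6       # posited size of the second process relative to the primary one
BF_dalitz = 0.012     # pi0 -> e+ e- gamma branching fraction

BF_needed = BF_Kll * fraction / BF_dalitz
print(f"required BF(B+ -> K+ pi0) ~ {BF_needed:.2e}")          # ~1.4e-5

BF_measured = 1.29e-5  # PDG value quoted above
print(f"measured / required = {BF_measured / BF_needed:.2f}")  # ~0.93
[/code]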

It also appears to me that the theoretical calculation of the K+µ+µ- to K+e+e- ratio discussed in the preprint doesn't consider this decay, although it seems mind-boggling to me that so many physicists working on such a carefully studied process would somehow overlook the impact of the B+ -> K+πº decay channel on their expected outcome, which is the obvious way to reverse-engineer the process.

I acknowledge that I could easily have missed something. But, if it works, doesn't this entirely solve the problem in a mundane, Standard Model manner, at least for B+ meson decays?
 
  • #49
Electrons from pi0 decays would be blatantly obvious in the phase space and localized in q^2. They are also trivial to veto if they are in the acceptance often enough.
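For concreteness, a minimal sketch of the kinematic point (assumption: the 1.1 < q^2 < 6.0 GeV^2 signal window of the published R_K analysis, which is not quoted anywhere in this thread):

[code=python]
# Why pi0 Dalitz electrons are localized in q^2: the invariant mass squared
# of the e+e- pair from pi0 -> e+ e- gamma is bounded above by m_pi0^2.
m_pi0 = 0.1349768                # GeV, the value quoted earlier in the thread
q2_max_dalitz = m_pi0**2         # ~0.018 GeV^2
print(f"max q^2 from pi0 Dalitz decay: {q2_max_dalitz:.4f} GeV^2")

# Assumed signal window from the published LHCb R_K analysis: 1.1-6.0 GeV^2.
q2_lo = 1.1
print(q2_max_dalitz < q2_lo)     # True: Dalitz pairs sit ~60x below the window
[/code]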
 
  • #50
mfb said:
Electrons from pi0 decays would be blatantly obvious in the phase space and localized in q^2. They are also trivial to veto if they are in the acceptance often enough.
Maybe so, but only if someone is looking for the distinction and not focused on other considerations. I haven't seen that analysis, which doesn't mean that it isn't there.

I also haven't seen anything discussing the presence of such a veto in any of the methodology discussions in the relevant papers, although I acknowledge that there are a lot of them and some of them recursively reference older papers for analysis that I haven't rigorously reviewed.

If someone didn't do the veto the first time, could they reanalyze the same data with it to check out that possibility?

Quite frankly, I was stunned that the number matched when I did the math, as I had no idea what the relevant B+ and πº branching fractions would be. If it hadn't unexpectedly popped out like that I would have dismissed it entirely. It is also the only SM explanation that I can think of (other than underestimated error).
 
