B -> s μμ decays: Current status

Something curious is going on with these decays. LHCb gives a seminar talk next Tuesday; a livestream will be available.
Edit: See results discussed starting here

I'll summarize the current status here. I tried to keep most of it at the advanced end of the (I) level, but I don't think that always worked. (B)-level summary: we might have found signs of new, unexpected physical effects, but the situation is still unclear.

##b \to s \mu^+ \mu^-## is a rare process in the Standard Model: it involves a flavor-changing neutral current. New physics could introduce a coupling to these particles and alter the frequency or the dynamics of the process. We cannot see isolated quarks, of course, so the experiments measure the decays of various B mesons to a hadron containing a strange quark plus the two muons; an example is shown in the following image.

decays.png
An important test is the frequency of these decays (the "branching fraction"). To get more precise theoretical predictions, the branching fraction is often compared to that of the equivalent decay with electrons instead of muons. In the Standard Model these branching fractions should be nearly identical; only the slightly different phase space leads to a (well-predicted and tiny) difference.
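
For concreteness, the quantity compared is the ratio of branching fractions; in the standard notation, e.g. for the kaon mode,
$$R_K \equiv \frac{\mathcal{B}(B^+ \to K^+ \mu^+ \mu^-)}{\mathcal{B}(B^+ \to K^+ e^+ e^-)},$$
which the SM predicts to be very close to 1.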

##B^0 \to K^0 \mu \mu## and ##B^+ \to K^{*+} \mu \mu##: too low by 2.0 sigma and 2.2 sigma, respectively (the other measurement has been superseded; see the following entry)
##B^+ \to K^+ \ell\ell##: muons are too rare by 2.6 sigma
##B_s^0 \to \phi \ell\ell##: muons are too rare by 3 sigma

Individually, all these measurements look like statistical fluctuations. But they all point in the same direction, towards fewer muons.

##B^0 \to K^{*}(\to K^+ \pi^-) \mu \mu## is a 3-body decay, but due to the quick decay of the ##K^*##, 4 particles are produced in total, which makes it interesting to look at the angular distributions. Typically they are studied as a function of the invariant mass of the muon pair. Going into all the details would be beyond the scope of this thread (you can read the papers); in summary, many parameters are measured. LHCb did the most precise measurement of them so far. One of them, called P'5, shows an interesting deviation at intermediate ##q^2## values, see the figure below. How should we interpret this?
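
Throughout, ##q^2## denotes the squared invariant mass of the dilepton pair, ##q^2 \equiv (p_{\ell^+} + p_{\ell^-})^2##.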

Pprime5.png


Wilson coefficients are a set of parameters describing generic four-particle interactions mediated by heavy (virtual) particles. Here is an introduction. LHCb did a fit to these parameters based on the angular analysis of ##B^0 \to K^{*}(\to K^+ \pi^-) \mu \mu##; ##C_9## showed a shift of 3.4 standard deviations relative to the SM value. A new spin-1 particle could lead to such a deviation.
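
For orientation, the relevant terms of the standard effective Hamiltonian (in the usual conventions) are
$$\mathcal{H}_{\rm eff} \supset -\frac{4 G_F}{\sqrt{2}} V_{tb} V_{ts}^* \frac{e^2}{16\pi^2} \left[ C_9 \,(\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \ell) + C_{10} \,(\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \gamma_5 \ell) \right] + \text{h.c.},$$
so ##C_9## multiplies a vector coupling to the leptons and ##C_{10}## an axial-vector one; new heavy particles would show up as shifts of these coefficients away from their SM values.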

This triggered a lot of interest, so other collaborations measured the same decay as well or updated their results for the recent Moriond conference.
Belle result - a similar deviation in P'5, 2.6 sigma
ATLAS result - a similar deviation in P'5, 2 sigma due to the larger uncertainty.
CMS result - no visible deviation, but also with a significant uncertainty

Individually, all these measurements look like statistical fluctuations. But they all point in the same direction, towards a region of parameter space that is closely linked to ##C_9##.
Theorists combined all these results (and a few more with larger uncertainties) in global fits to the Wilson coefficients: Status of the B->K*µ+µ- anomaly after Moriond 2017.
The result? The best-fit value for ##C_9## differs from the Standard Model expectation by 4.9 sigma, supported by both the lower number of muons in the decays and the P'5 measurements. Another option is a deviation in both ##C_9## and ##C_{10}## with opposite signs. This also fits the experimental results well (with a slightly lower significance) and would correspond to new heavy particles coupling only to left-handed leptons.
LHCb did a similar analysis just based on their own measurements, and got results consistent with the global fit.
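
Schematically (a sketch of the standard procedure, not the exact likelihood used in these papers), such a one-parameter fit minimizes
$$\chi^2(C_9) = \sum_i \frac{\left( O_i^{\rm exp} - O_i^{\rm th}(C_9) \right)^2}{\sigma_i^2},$$
and the quoted significance of the shift follows from ##Z \simeq \sqrt{\chi^2(C_9^{\rm SM}) - \chi^2(C_9^{\rm best\,fit})}## for one fitted parameter.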

Theoretical flavor physics is complicated. It could be that some QCD-related effects were neglected and will turn out to be larger than expected, explaining at least part of the deviations seen in P'5. It is unlikely that they explain the observed deviations in the decay probabilities, however.

What is next?
There will certainly be more work on the theory side to see how the observed deviations can be explained - either by SM effects or by some plausible new physics model. Personally, I am mostly waiting for updated measurements, either showing this was all a weird statistical fluctuation or establishing the deviations beyond reasonable doubt. If the deviations are inconsistent with plausible new physics models, then we have a problem with our understanding of the SM. If they are consistent with new physics models, then these models will make predictions for other measurements. One important example is the rare decay ##B_s \to \mu \mu## - the same particles contribute to it. So far, the uncertainty on its decay frequency is too large to contribute notably to global fits, but that will change soon.
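
The connection is direct: in the same operator basis, and neglecting scalar operators and right-handed currents, the purely leptonic rate scales as
$$\mathcal{B}(B_s \to \mu^+ \mu^-) \propto |C_{10}|^2 ,$$
so a fit that prefers a shifted ##C_{10}## makes a definite prediction for this branching fraction.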

There is the LHCb seminar on Tuesday, and we can expect new results. So far the collaboration has shown the results for ##B^0 \to K^{*} \mu \mu## based on Run 1 data (2011-2012); the dataset collected in 2015-2016 should have a similar size. They might double their statistics. I expect more big updates in the next two years based on the 2017 and then 2018 datasets, and ATLAS and CMS can improve their measurements as well.

Edit: See results discussed starting here. More branching fractions with missing muons.

There is another related measurement: the ratio of ##B^0 \to D^* \tau \nu## to ##B^0 \to D^* \mu \nu## (and the equivalent with ##D^0## instead of ##D^*##). Because the ##\tau## has a large mass, this ratio is smaller than 1: the SM prediction is 0.25. Belle, BaBar and LHCb have measured it. All experimental values are higher, with a combined significance of 3.9 sigma. Again fewer muons...

As various 3-5 sigma excesses in the past have shown, new physics is always the most unlikely explanation unless all possible alternatives have been ruled out. It is probably not new physics. But at least it is a promising place to look. And we'll know more next week.
 
mfb said:
There is the LHCb seminar on Tuesday, and we can expect new results. So far the collaboration has shown the results for ##B^0 \to K^{*} \mu \mu## based on Run 1 data (2011-2012); the dataset collected in 2015-2016 should have a similar size. They might double their statistics. I expect more big updates in the next two years based on the 2017 and then 2018 datasets, and ATLAS and CMS can improve their measurements as well.

I should say that the result being presented on Tuesday is Run 1 only. We have not shown this particular measurement before.
 
Why would those effects appear in loops and not in tree-level decays? (This originates from your mention of the ##B \to D^* \tau \nu## vs. ##B \to D^* \mu \nu## excess...)
 
ChrisVer said:
Why would those effects appear in loops and not in tree-level decays? (This originates from your mention of the ##B \to D^* \tau \nu## vs. ##B \to D^* \mu \nu## excess...)
If it is a coupling to ##b##,##s##,##\mu##,##\mu##: There is no SM tree-level diagram.

In general, processes without SM tree-level mechanism are more promising places to search for new physics as the SM amplitudes are smaller.

@dukwon: A completely new measurement sounds interesting.
 
There are previous measurements from the B factories, I should add. Their error bars are much larger though.
 
Well, here are the slides: https://indico.cern.ch/event/580620/attachments/1442409/2226501/cern_2017_04_18.pdf

The quantity being measured is $$R(K^*) \equiv \frac{\mathcal{B}(B^0 \to K^{*0} \mu^+ \mu^-)}{\mathcal{B}(B^0 \to K^{*0} e^+ e^-)}$$
Results are on slides 32 and 33:
##R(K^*)=0.660^{+0.110}_{-0.070}\pm0.024## in ##q^2 \in [0.045,1.1]\text{ GeV}^2/c^4## (2.2~2.4σ below SM)
and
##R(K^*)=0.685^{+0.113}_{-0.069}\pm0.047## in ##q^2 \in [1.1,6.0]\text{ GeV}^2/c^4## (2.4~2.5σ below SM)

Attached is a plot comparing the results to SM predictions:
lhcb-public_RKstar_sm.png


This result agrees with the B-factory measurements, but their errors were ~30%.
 
Ratio of ##B^0 \to K^* \mu \mu## to ##B^0 \to K^* ee##.
2.2-2.4 sigma below the different SM predictions in the lowest bin of dilepton invariant mass.
2.4-2.5 sigma below the different SM predictions in the second lowest bin of dilepton invariant mass.

Again missing muons...

Edit: dukwon was faster.

5/fb is expected in Run 2; together with the higher energy and better triggers, this could give a 5 times larger dataset. As the analysis is limited by statistics, the uncertainty should shrink by more than a factor of 2. If the central value stays the same, that would give a 5 sigma deviation in both bins.
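
As a rough illustration of that scaling (a toy estimate using the numbers quoted above, not an official projection):

```python
import math

# Toy projection: for a statistics-limited measurement the uncertainty scales like 1/sqrt(N).
current_pull = 2.4      # sigma, roughly the current deviation in one bin (from the text above)
dataset_factor = 5      # assume ~5x more B decays (luminosity, energy, triggers)

uncertainty_scale = 1.0 / math.sqrt(dataset_factor)   # ~0.45, i.e. more than a factor 2 smaller
projected_pull = current_pull / uncertainty_scale     # if the central value stays the same

print(f"uncertainty shrinks to ~{uncertainty_scale:.2f} of today's, "
      f"pull grows to ~{projected_pull:.1f} sigma")
```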

An interesting detail about the electrons:
The LHCb electron energy measurement is mainly based on the tracking system, which means bremsstrahlung emitted before the electron passes the magnet is a problem. LHCb developed a system to add bremsstrahlung photons back to the reconstructed electrons. This is different from ATLAS and CMS, which mainly rely on their calorimeters.
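
Conceptually, the recovery works roughly like this (a deliberately simplified sketch, not LHCb's actual reconstruction code; the data layout and the matching window are made up for illustration):

```python
import math

def recover_bremsstrahlung(electron, photon_clusters, match_window=0.05):
    """Toy bremsstrahlung recovery (illustrative only, not LHCb's actual algorithm).

    Before the magnet the electron flies in a straight line, so a photon radiated there
    ends up where that straight-line extrapolation meets the calorimeter face.
    electron: {"E": track energy in GeV, "ecal_xy": extrapolated pre-magnet hit (x, y) in m}
    photon_clusters: list of {"E": cluster energy, "xy": measured cluster position}
    """
    energy = electron["E"]
    ex, ey = electron["ecal_xy"]
    for cluster in photon_clusters:
        cx, cy = cluster["xy"]
        if math.hypot(cx - ex, cy - ey) < match_window:
            energy += cluster["E"]   # add the radiated energy back to the electron
    return energy

# Example: a 20 GeV electron that radiated a 5 GeV photon before the magnet.
electron = {"E": 15.0, "ecal_xy": (0.40, 0.10)}
clusters = [{"E": 5.0, "xy": (0.41, 0.10)}, {"E": 2.0, "xy": (1.50, -0.80)}]
print(recover_bremsstrahlung(electron, clusters))   # -> 20.0
```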
 
@dukwon collected some theory papers that popped up.

Patterns of New Physics in b→sℓ+ℓ− transitions in the light of recent data
Interpreting Hints for Lepton Flavor Universality Violation
Flavour anomalies after the RK∗ measurement
RK and RK∗ beyond the Standard Model
Towards the discovery of new physics with lepton-universality ratios of b→sℓℓ decays
On Flavourful Easter eggs for New Physics hunger and Lepton Flavour Universality violation

Different flavours (sorry) of the same interpretation: 3.5 to 5 sigma tension with the Standard Model, depending on what exactly you consider. It could be explained by a variation of ##C_9##, or potentially of ##C_9## and ##C_{10}##.

A larger ##C_{10}## would make the decay ##B_s \to \mu \mu## more frequent, but even the recent LHCb measurement is not yet precise enough to contribute notably to fits.

Leptoquarks are a viable model.
A Z' could explain it.
Even with more precise measurements, if the deviation gets more significant, it will be challenging to figure out what exactly is correct.
 
mfb said:
There is another related measurement: the ratio of ##B^0 \to D^* \tau \nu## to ##B^0 \to D^* \mu \nu## (and the equivalent with ##D^0## instead of ##D^*##). Because the ##\tau## has a large mass, this ratio is smaller than 1: the SM prediction is 0.25. Belle, BaBar and LHCb have measured it. All experimental values are higher, with a combined significance of 3.9 sigma. Again fewer muons...
LHCb added another measurement

Guess the direction of the deviation.
Fewer muons than expected. How did you guess that?

A naive average puts the new significance at 4.1 sigma.
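
For readers wondering what a "naive average" means here: independent measurements of the same quantity can be combined with an inverse-variance weighted mean. A minimal sketch with made-up numbers (not the actual R(D*) inputs):

```python
import math

# Inverse-variance weighted average (illustrative values only, NOT the actual R(D*) inputs;
# a real combination also has to handle correlations and asymmetric uncertainties).
measurements = [(0.32, 0.03), (0.30, 0.04), (0.34, 0.05)]   # (value, total uncertainty)
sm_prediction = 0.25

weights = [1.0 / err ** 2 for _, err in measurements]
mean = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
error = 1.0 / math.sqrt(sum(weights))

pull = (mean - sm_prediction) / error
print(f"combined: {mean:.3f} +- {error:.3f}  ({pull:.1f} sigma above the SM prediction)")
```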

It is expected that the full Run 2 dataset (including data up to 2018) will lead to an LHCb measurement more precise than the current world average. If the central value stays the same, we would expect more than 5 sigma from LHCb alone, and even more as a world average. The analysis is challenging; it will probably take until late 2019 or 2020 before we see the result.

Belle II is expected to start taking data in 2018, but their 2018 dataset will probably be too small to beat LHCb's precision. The dataset size will increase rapidly in 2019-2020.
 
  • #10
Well, I'll believe in "physics beyond the Standard Model" when it's really discovered. So going by this measurement, it'll take about 2 years :-(.
 
  • #11
Well, when is it discovered? The LHCb datasets are growing continuously, and we get an update once in a while. BaBar/Belle are still working on some analyses, but I wouldn't expect too many updates from them.
If it is new physics, then this trend will just go on. We will get more and more measurements with odd 2-3 sigma effects that slowly become 3-4 sigma effects and eventually 4-5 sigma effects, while the combined significance grows as well - hitting 5 sigma before individual measurements do, but with different results from different theorists. There won't be a single date where we go from "this is curious" to "this has to be new physics".

In ATLAS and CMS, we had several years where everything was possible - the first 7 TeV data in 2010, the first large 7 TeV dataset in 2011, the first 8 TeV data in 2012, the first 13 TeV data in 2015, the first large 13 TeV dataset in 2016 - all could have had some 5 sigma effect out of nowhere (and searches with 2016 data are still ongoing). That is not the case for flavor physics at LHCb, where you just accumulate more and more B-mesons over time - LHCb exceeded its design luminosity long ago.
 
  • #12
Gamaliel's principle :rolleyes:

if this counsel or this work be of men, it will come to nought
 
  • #13
What measurements, if any, can ATLAS and CMS do to shed some light on these discrepancies? The LHCb presentation mentioned a charged Higgs, which I assume means a multiple-Higgs model; is that something that would/should show up in ATLAS and CMS?
 
  • #14
Charged Higgs bosons could be found by ATLAS and CMS.

For ##B_s \to \mu \mu##, they can contribute a lot to the precision.

Everything involving kaons should be out of reach as the big experiments cannot distinguish them from the much more frequent pions.
I don't see much hope for the other decays either. B decays are very low-energetic for these detectors - two muons are rare enough to trigger on them despite the low energy, but hadronic or semileptonic decays are way too frequent to record them (or even read them out).
 
  • #15
Yet another LHCb measurement
This time ##R_{K^{*0}} = B(B \to K^{*0} \mu \mu) \,/\, B(B \to K^{*0} ee)## (ignoring some technical details) in two bins of the dilepton invariant mass.
Muons are too rare by 2.2 and 2.4 standard deviations, respectively, again the same direction.

lhcbrkstar.png


Edit: Forgot the plot. Note the tiny theory uncertainties, and how the measurement is dominated by statistical uncertainties - more data will make it more accurate.

An LHCb member gave a talk about the current status at EPS; the topic discussed here starts at slide 15.

CMS updated its P'5 measurement. The result is very close to the SM prediction - sometimes above it, sometimes below it. They also show a more recent theoretical prediction, which estimates the parameter to be closer to the LHCb/Belle/BaBar measurements (slide 13).

Belle showed a result as well: https://indico.cern.ch/event/466934/contributions/2588875/attachments/1487735/2315899/slides.pdf (slides 20 and 21), but the uncertainty is large. The value is ~0.5 sigma above the SM prediction and 1 sigma below the world average.

Edit: More updates, done now.

Edit: I missed an older measurement, ##\Lambda_b^0 \to \Lambda \mu \mu##. Here it is - see figure 5 on page 13. Same trend as observed everywhere else.
 
  • #16
And one more LHCb measurement

##\mathcal{B}(B^+_c \to J/\psi\, \tau^+ \nu_\tau)\,/\,\mathcal{B}(B^+_c \to J/\psi\, \mu^+ \nu_\mu)## (including the charge-conjugated modes).

The SM prediction is about 0.25 to 0.28; the experimental result is 0.71±0.17(stat)±0.18(syst), about two standard deviations higher than expected, corresponding to more taus or fewer muons.
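
A quick cross-check of that number (naive Gaussian combination of the quoted uncertainties, ignoring asymmetries and the spread of the SM prediction):
$$\frac{0.71 - 0.28}{\sqrt{0.17^2 + 0.18^2}} \approx 1.7, \qquad \frac{0.71 - 0.25}{\sqrt{0.17^2 + 0.18^2}} \approx 1.9,$$
consistent with "about two standard deviations".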

The pattern continues...
 
  • #17
There is a nice non-mathematical article on this stuff ("Measuring Beauty" by Guy Wilkinson, who works on LHCb) in the November issue of Scientific American.
 
  • #18
I know of mechanisms that enhance the 3rd-generation couplings and can lead to non-universalities, but I don't know of any mechanism that predicts fewer muons?
 
  • #19
Why only 2011-2012 for this? (If I read that right.) Does it take that long to process that mountain of data?
 
  • #20
LHCb collected large datasets in 2011, 2012, 2016 and 2017, and another one is expected for 2018. Afterwards the LHC will shut down for two years for upgrades. While there are some results based on 2016 data alone and a few more are probably in preparation, in most cases it is more useful to prepare the analysis now and complete it once the 2018 dataset can be included. That increases the statistics a lot, and it is a useful step before the upgrade comes and the detector changes a lot.

Apart from that: Precision measurements take time. A few months if you have a big team and if you want to get it done quickly, but 1-2 years is more typical. While you can do some things in advance, there are many things that can only be done (or have to be repeated) once the full dataset is available.
Searches for new particles are much faster as they don't need the precision. It doesn't matter much if you have 10% uncertainty somewhere if your main result is basically binary ("we found nothing" or "we might have found something").
 
  • #21
Upcoming seminar: New results on theoretically clean observables in rare B-meson decays from LHCb
3.1 sigma away from the SM prediction of lepton universality in the ##B^+ \to K^+ \mu \mu## vs. ##B^+ \to K^+ e e ## comparison. Again the same direction, muons are less common than electrons.
9/fb, i.e. the whole LHCb dataset so far.
Preprint on arXiv

This is an updated measurement of the 2.6 sigma result I linked in the first post. A bit more luminosity, a bit more significance.
 
  • #22
So, where do we stand, as of now? Is there some recent review of all the anomalous results?
 
  • #23
I think it's this preprint we are talking about:

https://arxiv.org/abs/2103.11769

According to the abstract, lepton universality in ##\mathrm{B}^+ \rightarrow \mathrm{K}^+ + \ell^+ + \ell^-## decays is violated when comparing ##\ell=\text{e}## to ##\ell=\mu##, at ##3.1 \sigma## significance now.

It's gaining significance as a hint of some "beyond the Standard Model physics", but on the other hand we've seen many ##3\sigma## signals go away with more data before...
 
  • #24
I am missing some explanation of why decays to particles with different masses, as the muon and electron are, should be expected to have the same branching ratio. Same coupling, yes. But the branching ratio should depend on the energy and momentum available for each decay.
 
  • #25
The Q of the two decays is very similar: muons are light compared to that.
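
For scale (standard PDG values): ##m_{B^+} \approx 5279## MeV and ##m_{K^+} \approx 494## MeV, so roughly 4.8 GeV is available to the lepton pair, compared with ##2 m_\mu \approx 211## MeV.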
 
  • #26
And, of course, the "trivial" effect of the different phase space due to the different masses of the electrons and muons is taken into account too.
 
  • #27
It is taken into account, but it is quite small. Unmeasurably small. I don't know the number off the top of my head, but it's like a fraction of a percent. Maybe 0.1%?
 
  • #28
LHCb says "1.00 ± 0.01" (page 7) as the theory expectation. Their statistical uncertainty is 4% and the central value for the ratio is 0.846, and that small 1% theory uncertainty comes from higher-order effects, not from the lepton masses.

Gudrun Hiller et al updated their numbers on the theory side: Flavorful leptoquarks at the LHC and beyond: Spin 1
 
  • #29
Ah, yep, I see that it is because of the high Q that the theory expectation is near 1. In the theory papers it is mentioned that LHCb also has low-to-medium Q measurements, for which the experimental result is about 0.6 and the theory expectation is about 0.8.
 
  • #30
The Q of the decay (total Q) is not the same as the Q of the window in which they look. Both channels can go low in decay Q. Only the very highest Qs are accessible to electrons and not muons. But this effect is tiny.

I haven't looked at the calculation in a long time (and I should - Gudrun was once my office-mate at Kavli) but I expect the effects are due to what used to be called "vector meson dominance" in photoproduction. You can replace the photon with a virtual omega (e.g.) or phi(1020) and the relative phases can be slightly different for e's and mu's.

That said, there is zero chance that this is a real effect due to SM miscalculation.
 
  • #31
Vanadium 50 said:
I expect the effects are due to what used to be called "vector meson dominance" in photoproduction
[...]
That said, there is zero chance that this is a real effect due to SM miscalculation.
I want to understand this comment (I'm interested in vector dominance), so just to be sure:

are you referring in the first part to @mfb's higher-order effects (#28), and in the second part to the B-meson decay anomalies?
 
  • #32
vanhees71 said:
I think it's this preprint we are talking about:

https://arxiv.org/abs/2103.11769

According to the abstract, lepton universality in ##\mathrm{B}^+ \rightarrow \mathrm{K}^+ + \ell^+ + \ell^-## decays is violated when comparing ##\ell=\text{e}## to ##\ell=\mu##, at ##3.1 \sigma## significance now.

It's gaining significance as a hint of some "beyond the Standard Model physics", but on the other hand we've seen many ##3\sigma## signals go away with more data before...

That is why I am somewhat hesitant to call them "##3\sigma## signals"... The probability distribution out of which those numbers come is calculated given the null hypothesis, ##p(x > x^* \mid \neg M)##, and doesn't correspond to the probability of the signal hypothesis, ##P(M)##. In that respect I am not very fond of the way the abstract is worded (although they attach the ##3\sigma## to the evidence for LFU violation, and not to the violation itself).
 
  • #33
I am referring to higher-order effects in the calculations.

ChrisVer said:
That is why I am somewhat hesitant to call them "##3\sigma## signals"...

Then you should be happy that the authors don't call them that. Further, it is well understood that significance refers to the probability of the null hypothesis alone producing an effect as large or larger and nothing like "the probability the signal is real". (Which is not well-defined.)
 
  • #34
Vanadium 50 said:
Then you should be happy that the authors don't call them that. Further, it is well understood that significance refers to the probability of the null hypothesis alone producing an effect as large or larger and nothing like "the probability the signal is real". (Which is not well-defined.)

I agree up to the last parenthesis. However, I think they could have phrased the abstract better to not give any false impression.
 
  • #35
Which false impression? This is what they write:
This article presents evidence for the breaking of lepton universality in beauty-quark decays, with a significance of 3.1 standard deviations
Papers are written by experts for experts and this is a very concise and clear summary of what they measure. But even if we consider non-experts, everyone who can understand what "significance of 3.1 standard deviations" means at all should know that "the probability the signal is real" is not a thing the analysis can answer.
 
  • #36
What they saw is a result that is consistent with the standard model, which is lepton universal.
It should not be "we saw evidence that the standard model is incorrect (= breaking of LFU in b-quark decays), with a significance of 3.1 std", but something different and precise. When the 3.1 std is calculated, you don't assume the SM to be incorrect.
Concise is good, but it should not come at the expense of clarity and precision in scientific articles. There is enough confusion around p-values and their meaning in other fields.
 
  • #37
It is correct, precise, and it is absolutely clear to every reader of the intended audience. The authors are not responsible for you not understanding what significance means.
 
  • #38
mfb said:
It is correct, precise, and it is absolutely clear to every reader of the intended audience. The authors are not responsible for you not understanding what significance means.

So, if it is clear, does the alternative hypothesis come with some significance measured by the collaboration (which happens to be 3.1σ)?
Let's hope the journal will correct it.
 
  • #39
ChrisVer said:
the alternative hypothesis

Which alternative hypothesis? There are literally an infinite number of them.

ChrisVer said:
Let's hope the journal will correct it.

Correct what? They have done nothing wrong.
 
  • #40
Vanadium 50 said:
Which alternative hypothesis? There are literally an infinite number of them.

There is the null hypothesis, which is the SM, and which is lepton universal. The "alternative" hypothesis is that there is lepton universality violation ("breaking of LFU"). The abstract states that the alternative hypothesis comes with a significance of 3.1 std.
 
  • #41
The null hypothesis is that R = 1.0. There are an infinite number of alternative hypotheses - R could be 2, or 0.5, or 3, or some other number. I'm afraid I am 100% with @mfb here: It is correct, precise, and it is absolutely clear to every reader of the intended audience. The authors are not responsible for you not understanding what significance means.
 
  • #42
Vanadium 50 said:
That said, there is zero chance that this is a real effect due to SM miscalculation.

Am I reading this right, if R is not 1.0, then this is 100% incompatible with SM, and not some calculation that we're doing wrong? Is there a running tally of things this rules out/does not rule out?
 
  • #43
Vanadium 50 said:
The null hypothesis is that R = 1.0. There are an infinite number of alternative hypotheses - R could be 2, or 0.5, or 3, or some other number.

Saying that it could be 2, 0.5, 3 or any other number is equivalent to saying it is ##R \ne 1##. For rejecting the null you don't need to know the value of ##R##, just the incompatibility of your measurement with the value "1". In particular, the measurement outcome is a boolean: LFU or LFUV.

So, can you explain how the significance is being calculated from the non-null hypothesis? Would it be from ##p_{s+b}##? If yes, I will agree with you and mfb that what they say is correct, precise and clear. If it comes from ##p_b## (or what is generally referred to as the p-value given the background-only hypothesis), then it is unclear. To my knowledge the significance is calculated from ##p_b##. There is a whole article in the PDG you can study:
https://pdg.lbl.gov/2019/reviews/rpp2018-rev-statistics.pdf
about its calculation. In particular, you can have a look between 39.45 (definition of the p-value) and 39.46 (definition of significance). The significance is associated with the p-value obtained from the null hypothesis, and not from the non-null one.
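
For reference, the convention in that review: the ##p##-value is computed under the null hypothesis and converted into a significance via
$$Z = \Phi^{-1}(1 - p),$$
where ##\Phi## is the cumulative distribution function of the standard Gaussian.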

So the result cannot be "evidence of the non-null hypothesis with a significance of XX", but rather "agreement with the null hypothesis with an effect of XX std, which indicates evidence of the non-null hypothesis" or something similar.
 
  • #44
At this point, the thread is well and truly derailed. Mfb said it well: " It is correct, precise, and it is absolutely clear to every reader of the intended audience. The authors are not responsible for you not understanding what significance means."

My advice to you, if you are convinced you are correct, is to publish a Comment on this paper explaining how the authors got it so wrong.
 
  • #45
nolxiii said:
Am I reading this right, if R is not 1.0, then this is 100% incompatible with SM, and not some calculation that we're doing wrong? Is there a running tally of things this rules out/does not rule out?
The SM expectation is not exactly 1.0000 but it's so close that the difference doesn't matter for now. Any deviation from the SM prediction is 100% incompatible with it.
ChrisVer said:
can you explain how the significance is being calculated from the non-null hypothesis?
There is no such thing, and please start a new thread if you want to discuss this further. This thread is the wrong place.
 
  • #46
Statistically, my main issue is cherry picking.

They've found several instances where you see LFV and they combine those to get their significance in sigma. They ignore the many, many other instances where the results are consistent with LFU, even though the justification for excluding those results is non-obvious.

For example, lepton universality violations are not found in tau lepton decays or pion decays, and are not found in anti-B meson and D* meson decays or in Z boson decays. There is no evidence of LFV in Higgs boson decays either.

As one paper notes: "Many new physics models that explain the intriguing anomalies in the b-quark flavour sector are severely constrained by Bs-mixing, for which the Standard Model prediction and experiment agreed well until recently." Luca Di Luzio, Matthew Kirk and Alexander Lenz, "One constraint to kill them all?" (December 18, 2017).

Similarly, see Martin Jung, David M. Straub, "Constraining new physics in b→cℓν transitions" (January 3, 2018) ("We perform a comprehensive model-independent analysis of new physics in b→cℓν, considering vector, scalar, and tensor interactions, including for the first time differential distributions of B→D∗ℓν angular observables. We show that these are valuable in constraining non-standard interactions.")

An anomaly disappeared between Run-1 and Run-2 as documented in Mick Mulder, for the LHCb Collaboration, "The branching fraction and effective lifetime of B0(s)→μ+μ− at LHCb with Run 1 and Run 2 data" (9 May 2017) and was weak in the Belle Collaboration paper, "Lepton-Flavor-Dependent Angular Analysis of B→K∗ℓ+ℓ−" (December 15, 2016).

When you are looking at deviations from a prediction, you should include all experiments that implicate that prediction.

In a SM-centric view, all leptonic or semi-leptonic W boson decays arising when a quark decays to another kind of quark should be interchangeable parts (subject to mass-energy caps on final states determined from the initial state), since all of them (either at tree level or one step removed at the one-loop level) are deep down the same process. See, e.g., Simone Bifani, et al., "Review of Lepton Universality tests in B decays" (September 17, 2018). So you should lump them all together to determine the significance of the evidence for LFV.

Their justification for not pooling the anomalous results with the non-anomalous ones is weak and largely not stated expressly. At a minimum, the decision about which results go in the LFV bunch that produces the 3.1 sigma, and which go in the LFU bunch that isn't used to moderate the 3.1 sigma in any way, is highly BSM-model dependent, and the importance of that observation is understated in the analysis (and basically just ignored).

The cherry picking also gives rise to look elsewhere effect issues. If you've made eight measurements in all, divided among three different processes, the look elsewhere effect is small. If the relevant universe is all leptonic and semi-leptonic W and Z boson decays, in contrast, there are hundreds of measurements out there and even after you prune the matter-energy conservation limited measurements, you still have huge look elsewhere effects that trim one or two sigma from the significance of your results.
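
To illustrate the size of the effect being described (toy numbers, not a statement about the actual analyses): with ##N## independent places a fluctuation could have appeared, ##p_{\rm global} \approx 1 - (1 - p_{\rm local})^N##.

```python
from scipy.stats import norm

# Toy look-elsewhere estimate: how a local significance degrades with N independent trials.
# (Illustrative only; real trials factors depend on correlations between the observables.)
local_sigma = 3.1
p_local = norm.sf(local_sigma)                 # one-sided local p-value

for n_trials in (1, 8, 100):
    p_global = 1.0 - (1.0 - p_local) ** n_trials
    print(f"N = {n_trials:3d}: global significance ~ {norm.isf(p_global):.1f} sigma")
```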
 
  • #47
ohwilleke said:
In a SM-centric view, all leptonic or semi-leptonic W boson decays arising when a quark decays to another kind of quark should be interchangeable parts
The whole point is to look for deviations from the SM. Flavor-blind new physics is possible but it's not the only option. LFV new physics will necessarily show up in some spots more than others; it's perfectly possible to have new physics in ##b \to s \mu\mu## while not seeing strong changes in tau decays or whatever. So unless you show how the proposed new models would influence the measurements you cited, I don't see their relevance.

Of course there are many couplings that could be changed. Everyone is aware of that.
 
  • #48
Is there a reason that the following analysis is wrong?

It is much easier to come up with a Standard Model-like scenario in which there are too many electron-positron pairs produced than to come up with one where there are too many muon or tau pairs produced, because a limited amount of mass-energy in a decay channel source can make it possible to produce electron-positron pairs but not muon-antimuon or tau-antitau pairs.

The ratios seem to be coming up at more than 80% but less than 90% of the expected Standard Model number of muon pair decays relative to electron-positron decays.

The simplest answer would be that there are two processes.

One produces equal numbers of electron-positron and muon pair decays together with a positively charged kaon in each case, as expected. The pre-print states this about this process:
The B+ hadron contains a beauty antiquark, ##\bar{b}##, and the K+ a strange antiquark, ##\bar{s}##, such that at the quark level the decay involves a ##\bar{b} \to \bar{s}## transition. Quantum field theory allows such a process to be mediated by virtual particles that can have a physical mass larger than the mass difference between the initial- and final-state particles. In the SM description of such processes, these virtual particles include the electroweak-force carriers, the γ, W± and Z bosons, and the top quark. Such decays are highly suppressed and the fraction of B+ hadrons that decay into this final state (the branching fraction, B) is of the order of 10^−6.

A second process with a branching fraction of about 1/6th that of the primary process produces a positively charged kaon together with an electromagnetically neutral particle that has more than about 1.02 MeV of mass (enough to decay to a positron-electron pair), but less than the roughly 211.4 MeV of mass necessary to produce a muon pair when it decays.

It turns out that there is exactly one such known particle, fundamental or composite: the neutral pion, with a mass of about 134.9768(5) MeV. About 98.8% of the time, a πº decays to a pair of photons, and that decay would be ignored as the end product doesn't match the filtering criteria. But about 1.2% of the time, it decays to an electron-positron pair together with a photon, and all other possible decays are vanishingly rare by comparison.

So, we need a decay of a B+ meson to a K+ meson and a neutral pion with a branching fraction of about ##(10^{-6}) \times (1/6) \times (1/0.012) \approx 1.4 \times 10^{-5}##.

It turns out that B+ mesons do indeed decay to K+ mesons and neutral pions with a branching fraction of ##1.29(5) \times 10^{-5}##, which is exactly what it needs to be to produce the apparent violation of lepton universality observed in B+ meson decay.

It also appears to me that the theoretical calculation of the K+µ+µ- to K+e+e- ratio discussed in the preprint isn't considering this decay, although it seems mind-boggling to me that so many physicists studying such a carefully examined process would somehow overlook the impact of the B+ → K+πº decay channel on their expected outcome, which is the obvious way to reverse-engineer the process.

I acknowledge that I could easily have missed something. But, if it works, doesn't this entirely solve the problem in a mundane, Standard Model manner, at least for B+ meson decays?
 
  • #49
Electrons from ##\pi^0## decays would be blatantly obvious in the phase space and localized in ##q^2##. They are also trivial to veto if they are in the acceptance often enough.
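
To put numbers on "localized in ##q^2##": for ##B^+ \to K^+ \pi^0## with ##\pi^0 \to e^+ e^- \gamma##, the dielectron mass is bounded by the pion mass, so ##q^2 \le m_{\pi^0}^2 \approx 0.018\ \mathrm{GeV}^2##, far below the ##1.1 < q^2 < 6.0\ \mathrm{GeV}^2## window used for the ##R_K## measurement.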
 
  • #50
mfb said:
Electrons from ##\pi^0## decays would be blatantly obvious in the phase space and localized in ##q^2##. They are also trivial to veto if they are in the acceptance often enough.
Maybe so, but only if someone is looking for the distinction and not focused on other considerations. I haven't seen that analysis, which doesn't mean that it isn't there.

I also haven't seen anything discussing the presence of such a veto in any of the methodology discussions in the relevant papers, although I acknowledge that there are a lot of them and some of them recursively reference older papers for analysis that I haven't rigorously reviewed.

If someone didn't do the veto the first time, could they reanalyze the same data with it to check out that possibility?

Quite frankly, I was stunned that the number matched when I did the math, as I had no idea what the relevant B+ and πº branching fractions would be. If it hadn't unexpectedly popped out like that I would have dismissed it entirely. It is also the only SM explanation that I can think of (other than underestimated error).
 