Sabine Hossenfelder on strong CP, hierarchy

In summary: high-energy physicists consider it a problem that the Higgs mass is about 15 orders of magnitude smaller than the Planck mass, because that would require two constants to cancel to about 15 digits; Hossenfelder argues this cannot be called unlikely without specifying a probability distribution.
  • #1
kodama
Sabine Hossenfelder writes:

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?
...
And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!
http://backreaction.blogspot.com/

High-energy physicists, any comments?
 
  • #2
The specific link is here.

I'm not sure there's much point having a parallel debate about this here on PF, rather than in the comments section on Bee's blog (which Bee is more likely to read, and maybe respond). At last count there were already 94 comments there.
 
  • #3
No matter which prior you use, the probability will be extremely small in all these cases. Which means explanations for a small mass gain orders of magnitude in terms of their relative likelihood.

If you measure a constant of nature to be 1.0000000000000000000146, you would expect some deeper reason why it has to be extremely close to 1 (but not exactly 1). It is not absolutely necessary that there is a reason, but it looks likely. The exact value is not the point, measuring it to more precision doesn't change the argument. The point is the vastly different probability for "so close to 1" and "somewhere between 0 and 2" for every reasonable probability distribution.
The strong CP problem is similar. Yes, it can happen that a phase between 0 and 2 pi is smaller than 0.000000000001 by accident. But do you really expect that?

This is different from the curvature in cosmology. I don't have any problems with parameters that happen to be small where there is no natural scale for them. But the Higgs has a natural scale (the Planck mass), and the CP phase as well (it is an angle).
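For what it's worth, here is a minimal numerical sketch of the argument above, assuming (as a deliberately naive choice) flat priors on the natural ranges just mentioned: [0, 2] for the dimensionless ratio and [0, 2π) for the CP phase. The window widths are illustrative, not measured values.
[CODE]
import math

# Flat prior on [0, 2]: probability of landing within 1e-19 of 1,
# as in the 1.0000000000000000000146 example above.
window = 2e-19                      # illustrative window width around 1
p_ratio = window / 2.0              # window length / interval length

# Flat prior on [0, 2*pi) (the Haar measure on the circle is flat in the
# angle): probability of a CP phase below 1e-10.
theta_bound = 1e-10
p_theta = theta_bound / (2 * math.pi)

print(f"P(|x - 1| < 1e-19 on [0,2])  ~ {p_ratio:.0e}")   # ~1e-19
print(f"P(theta < 1e-10 on [0,2pi))  ~ {p_theta:.1e}")   # ~1.6e-11
[/CODE]
The exact window widths don't matter; any reasonable choice gives numbers of this order, which is the point being made.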
 
  • #4
mfb said:
If you measure a constant of nature to be 1.0000000000000000000146, you would expect some deeper reason why it has to be extremely close to 1 (but not exactly 1). It is not absolutely necessary that there is a reason, but it looks likely. The exact value is not the point, measuring it to more precision doesn't change the argument. The point is the vastly different probability for "so close to 1" and "somewhere between 0 and 2" for every reasonable probability distribution.

Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
 
  • #5
mfb said:
No matter which prior you use, the probability will be extremely small in all these cases.
Well, this is not exactly true as stated. You would need to add some qualifiers on what type of priors you consider "natural". I also must disagree with Sabine: I don't know exactly who she has been talking to, but I know several high-energy physicists who would happily tell you that fine-tuning may not be a problem depending on your assumptions (and some who are way too happy to fine-tune their models).

When it comes to the strong CP-problem and things like flavour mixing, there is a natural measure on those parameters, the Haar measure. Indeed, this would give a flat distribution on the circle for the strong CP phase.

Dr.AbeNikIanEdL said:
Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
With this approach you will get nowhere. The entire experimental analysis using frequentist statistics is based on ordering different experimental outcomes based on how "extreme" they would be within the model. In the case of the strong CP-phase, measuring a value close to zero is extreme in the sense of giving no CP-violation in contrast to the rest of the parameter space. You might, as Sabine says, consider a different prior distribution, but what should the prior distribution depend on if not your underlying model?

I agree with Sabine that you must pay attention to your prior assumptions, but many people will inherently assume some prior and the prior can very well be based on your model assumptions.
 
  • #6
Dr.AbeNikIanEdL said:
Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
That would be odd as well, although in a different way. The 123456789123456789 would suggest our decimal system is special in some way.

You can find many examples of values that would be strange, but the largest part of the interval [0,2] is not close to some value we would consider "strange".
Orodruin said:
You would need to add some qualifiers on what type of priors you consider "natural".
Let's be generous: everything that does not differ by more than 10 orders of magnitude on the interval [0,2], or, if it does differ by more, favors smaller values (to give logarithms some love).
A prior that gives values like 1.0000000000000000000146 a 10^15 times higher probability than values around 1.362567697948747224894 or any other number like this doesn't look natural to me.

Edit: Finite probabilities for discrete values like 1, 0, 1/2 and similar are fine as well.
 
  • #7
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant. Why? This is a hard hierarchy problem in biology that lacks any natural explanation. o0)
 
  • #8
Demystifier said:
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant. Why? This is a hard hierarchy problem in biology that lacks any natural explanation. o0)
That has no similarity to the question of the Higgs mass.
We have 13 orders of magnitude between the top and the neutrinos, but unlike the Higgs that doesn't require any fine-tuning.
 
  • #9
Demystifier said:
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant

I think a closer analogy would be if the mass of an elephant is the same as the mass of a birch tree - to within a nanogram.
 
  • #10
Vanadium 50 said:
I think a closer analogy would be if the mass of an elephant is the same as the mass of a birch tree - to within a nanogram.
And birch trees are the only food elephants eat. And elephants are the only type of animals - to avoid look-elsewhere effects.
 
  • #11
Orodruin said:
With this approach you will get nowhere. The entire experimental analysis using frequentist statistics is based on ordering different experimental outcomes based on how "extreme" they would be within the model. In the case of the strong CP-phase, measuring a value close to zero is extreme in the sense of giving no CP-violation in contrast to the rest of the parameter space. You might, as Sabine says, consider a different prior distribution, but what should the prior distribution depend on if not your underlying model?

Ok, in @mfb's example it was not indicated that there would be anything physically special about 1. In the case of the CP problem I get that the value of 0 at least is singled out as significant by physics.

mfb said:
That would be odd as well, although in a different way. The 123456789123456789 would suggest our decimal system is special in some way.

That's why I wrote "intended to be random digits"...

mfb said:
You can find many examples of values that would be strange, but the largest part of the interval [0,2] is not close to some value we would consider "strange".

I am not sure about this. I guess you could find patterns in almost all finite series of digits, or at least in so many that it is not surprising if one turns up somewhere, so this seems to depend on the arbitrary decision of what kinds of patterns you allow when deciding whether a number is interesting.
 
  • #12
Dr.AbeNikIanEdL said:
I am not sure about this. I guess you could find patterns in almost all finite series of digits, or at least in so many that it is not surprising if one turns up somewhere, so this seems to depend on the arbitrary decision of what kinds of patterns you allow when deciding whether a number is interesting.
You can always find a "777" in the digit sequence or something like that, but that is nowhere close to the pattern 1.0000000000000000000246 has (the last digits here are arbitrary, the zeros are not). It does not matter what kind of patterns you include - the observed Higgs bare mass to Planck mass ratio will stand out for every collection that is somewhat reasonable.
 
  • #13
On the Strong CP problem, does she really believe that this is an accident? "Oh, the angle has to be something - why not less than 10^-10 radians?"

On the Higgs hierarchy problem, there is a difference between having one number take the value that it does to ~36 decimal places, and I agree, how does one even talk about this in a probabilistic sense. But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.
 
  • #14
mfb said:
But the Higgs has a natural scale (the Planck mass).

What about theories that suggest the natural scale of the Higgs is the Fermi scale?
 
  • #15
Vanadium 50 said:
On the Strong CP problem, does she really believe that this is an accident? "Oh, the angle has to be something - why not less than 10^-10 radians?"

On the Higgs hierarchy problem, there is a difference between having one number take the value that it does to ~36 decimal places, and I agree, how does one even talk about this in a probabilistic sense. But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.

I wrote on her board already, and this was my point. It's one thing when you have an effective field theory and a cutoff scale, and you worry about what natural values dimensionless ratios must take. There you can definitely talk about Bayesian priors, and I agree with her that this is a fuzzy question (I also agree with others that almost any prior you pick disfavors a small value).

But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.
 
  • #16
kodama said:
What about theories that suggest the natural scale of the Higgs is the Fermi scale?
They are in the group of "proposed solutions to the hierarchy problem".
 
  • #17
Vanadium 50 said:
On the Higgs hierarchy problem, there is a difference between having one number take the value that it does to ~36 decimal places, and I agree, how does one even talk about this in a probabilistic sense. But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.

But there is still only one value that enters the calculation of any observable: the resulting Higgs mass?

Haelfix said:
But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.

What exactly is meant by "explain" the ##m^2## term? Could you suggest some reference for further reading?
 
  • #18
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ##m_{bare}^2 + c\, m_P^2 = m_{obs}^2##, where ##m_{obs}## is the mass we measure in the lab and c is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_{bare}## and ##c\, m_P##.
We know ##m_{obs}## = 125 GeV and ##m_P = 1.22 \times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625. The completely unrelated value has to match the other value extremely closely to get a result that is so much smaller. Possible? Sure. Likely? Nah.
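To make the size of the cancellation explicit, here is a rough back-of-the-envelope sketch of that arithmetic, with c set to 1 purely for illustration (the actual prefactor depends on the loop details mentioned above and shifts the digit count by a few):
[CODE]
import math

m_obs = 125.0    # observed Higgs mass, GeV
m_P = 1.22e19    # Planck mass, GeV
c = 1.0          # illustrative prefactor

correction = c * m_P**2             # ~1.5e38 GeV^2
m_bare_sq = m_obs**2 - correction   # what the bare term would have to be

ratio = m_obs**2 / correction
print(f"required m_bare^2       ~ {m_bare_sq:.3e} GeV^2")
print(f"leftover / correction   ~ {ratio:.1e}")                 # ~1e-34
print(f"digits that must cancel ~ {-math.log10(ratio):.0f}")    # ~34
[/CODE]
That is the same ballpark as the "~36 decimal places" quoted earlier in the thread; the precise count depends on the prefactor, but the conclusion does not.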
 
  • #19
mfb said:
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ##m_{bare}^2 + c\, m_P^2 = m_{obs}^2##, where ##m_{obs}## is the mass we measure in the lab and c is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_{bare}## and ##c\, m_P##.
We know ##m_{obs}## = 125 GeV and ##m_P = 1.22 \times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625. The completely unrelated value has to match the other value extremely closely to get a result that is so much smaller. Possible? Sure. Likely? Nah.
Is the conformal solution still viable, or has the LHC ruled it out?
 
  • #20
mfb said:
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ##m_{bare}^2 + c\, m_P^2 = m_{obs}^2##, where ##m_{obs}## is the mass we measure in the lab and c is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_{bare}## and ##c\, m_P##.
We know ##m_{obs}## = 125 GeV and ##m_P = 1.22 \times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625.

I know. Still, ##m_\mathrm{bare}## is not a measurable value and should have no meaning at all. Only ##m_\mathrm{obs}## should be relevant for any physics and is the only value that enters any calculation?
 
  • #21
Dr.AbeNikIanEdL said:
I know. Still, ##m_\mathrm{bare}## is not a measurable value and should have no meaning at all. Only ##m_\mathrm{obs}## should be relevant for any physics and is the only value that enters any calculation?
It is not observable, but it enters calculations - the calculation of the observable Higgs mass, in particular. You can't work without it.
 
  • #22
But the observable Higgs mass is not calculable in the Standard Model; it is fixed by observation, like any other mass in the SM. Anything else depends only on this value of ##m_\mathrm{obs}## and never on ##m_\mathrm{bare}##.
 
  • #23
Guys, I don't see why there is such a focus on the Higgs mass problem per se.

Sure, the corrections for it are huge and this does not look good, but this is not the only such problem in the SM. For example, the vacuum-energy situation is even worse: the vacuum energy, if calculated in the SM, is divergent. This is worse than any fine-tuning.

To me, it is not a disaster, it just implies that more development of the theory is in order.

And there are hints we now have after the Higgs discovery.

Prior to that, we only knew that the sum of the squares of the masses of all fermions is suspiciously close to half the square of the Higgs VEV, which might be just a hint that the squares of all the fermion Yukawa couplings must (for some yet unknown reason) add up to one.

But now that we know the mass of the Higgs, the sum of the squares of the masses of all bosons is *also* very close to half the square of the Higgs VEV (within ~0.3%).

And the Higgs and top masses are such that the SM vacuum seems to lie very close to the stability/metastability line.

And the good old Koide rule is there too.

Something is fishy here. The mass of the Higgs is not "randomly chosen by Nature", and neither are the masses of the fermions.
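A rough numerical check of these two "sum of squares" coincidences, using approximate PDG-style masses in GeV (my inputs) and counting H, W and Z once each (the counting convention is my assumption); the lighter fermions contribute less than 0.1% and are dropped:
[CODE]
v = 246.22                   # Higgs VEV, GeV
target = v**2 / 2            # ~30312 GeV^2

m_top = 172.7
fermion_sum = m_top**2       # lighter fermions are negligible here

m_H, m_W, m_Z = 125.25, 80.38, 91.19
boson_sum = m_H**2 + m_W**2 + m_Z**2

print(f"v^2/2        = {target:.0f} GeV^2")
print(f"fermion sum  = {fermion_sum:.0f} GeV^2  ({fermion_sum/target:.1%} of v^2/2)")
print(f"boson sum    = {boson_sum:.0f} GeV^2  ({boson_sum/target:.1%} of v^2/2)")
[/CODE]
With these rough inputs the boson sum lands within about half a percent of v^2/2 and the fermion sum within about 2%; the exact percentages depend on which mass values and conventions one uses.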
 
  • #24
Haelfix said:
I wrote on her board already, and this was my point. It's one thing when you have an effective field theory and a cutoff scale, and you worry about what natural values dimensionless ratios must take. There you can definitely talk about Bayesian priors, and I agree with her that this is a fuzzy question (I also agree with others that almost any prior you pick disfavors a small value).

But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.
You wrote a very good comment on her board. She obviously lacks an understanding of the subject, but nevertheless makes strong claims (even hinting that she considers thousands of other physicists to be wrong, which is always a very bad sign).

One may also add the following perspective to the argument: even after the QCD phase transition with its vacuum condensate, the net result for the cosmological constant (cc) is almost zero. This is a relatively low-energy phenomenon compared to the scale of gravity. Assuming that the vacuum energy goes with some power of Lambda_QCD, Lambda_QCD must be fine-tuned to fantastic precision, to exactly the right value, so that after the phase transition it almost precisely cancels all the other contributions. How would the dynamics of the big bang know beforehand that, after following through all sorts of phase transitions, including the QCD one, the net effect is practically zero?

This problem is entirely different than what she misleadingly writes in her blog. Sigh.
 
  • #25
I would like to make a comment.

The hierarchy problem has been bugging the community for a long time, and for a long time very clever people have been going around making claims like the ones we read here, i.e. there is a natural scale (the Planck mass) for the Higgs mass, there is therefore a fine-tuning problem, and there must therefore be new physics at the TeV scale. And this claim has been made so boldly and arrogantly, in the sense that almost everyone would just take it for granted, that entire physics careers have been built on solutions to the hierarchy problem.

To me, while the problem is indeed an interesting one, the real issue here was and is precisely the arrogance and superiority with which these claims were, and sometimes still are, made, as if this were so obvious that whoever does not see it "has some kind of problem".

Now of course we can discuss this as much as we want, but there is an indisputable fact: the LHC has not seen anything, in spite of the claims of all those very clever people. So this is a fact: the claims, as they were made, were plainly wrong. Period. This does not mean that the problem does not exist or that there is no new physics, but it does mean that the implications that were taken for granted before as practically obvious were and are far from obvious, and as a matter of fact wrong, at least in the form they were made. So, to whoever does not see this hierarchy problem: experimental evidence seems to be on your side, which is something we should always remember as physicists. We might be clever, but nature doesn't need to follow our logic.

Having said that, there are (for me) two assumptions being made here. First of all, people talk about the Planck mass as a natural scale. This assumes somehow that gravity has a role in all this; you need G to enter one way or another. Now, we know that gravity is a very resilient theory: it does not like to be quantized and seems to be deeply different from anything else we know. So how can we so easily make such a strong claim? Who tells us that gravity does not follow completely different patterns? It might not even be a fundamental force but an emergent one, as some people have claimed, and G would then be more like the Boltzmann constant. But without having to go that far: QFT without gravity does not know anything about G, and the Higgs does not know anything about G. So how can we so boldly state that the natural scale for the Higgs mass is the Planck mass?

We can look at this from a slightly different perspective, namely computing the radiative corrections to the Higgs mass. People say: oh my god, the radiative corrections go as the cutoff energy squared, which is a disaster (if the cutoff is the Planck scale, it's easy to see where the fine-tuning comes from).
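For concreteness, the schematic estimate usually quoted at this point (keeping only the dominant top-quark loop with a hard momentum cutoff ##\Lambda##; the exact prefactor is convention-dependent and this is only a sketch) is
$$\delta m_H^2 \;\simeq\; -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 + \ldots$$
With ##\Lambda## near the Planck scale this single term is dozens of orders of magnitude larger than ##m_H^2 \approx (125\ \text{GeV})^2##, which is the cancellation being discussed; in dim reg the ##\Lambda^2## is traded for ##1/\epsilon## poles that are absorbed in renormalization, which is the point made below.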

But again, we are assuming here that there is a cutoff at the Planck scale; we are computing just one-loop corrections to the Higgs mass, knowing very well that perturbation theory is only a partial answer to what happens in reality (the perturbation series itself does not converge, and we have no idea what kind of physics could arise non-perturbatively); and finally we are using a regularization scheme that is questionable at best. Cutoffs are bad; they break any possible symmetry of the theory. If you use a reasonable scheme, like dim reg, the mass-squared term gives way to the usual ##1/\epsilon## terms, which disappear in UV renormalization like any other infinity in quantum field theory, and no one would be bothered.

Now please don't misunderstand me. I don't want to say that there is no need for new physics, nor that the SM of particle physics is the end of the story. But clearly, if we remove the cutoff and assume that QFT is mathematically well defined everywhere (while still possibly not being the final theory!), then the problem simply does not exist.

So we might now end up in a philosophical discussion, but what is more probable here? That the Higgs mass is affected by the Planck scale, that we should see some physics at a few TeV that cures this, but everything conspires against us seeing signs of any new phenomena up to a couple of TeV, etc., or are we maybe just reckoning without our host here?

Cheers
 
  • #26
Sleuth said:
LHC has not seen anything
With 1% of the final dataset.

Did you see anyone claiming that the LHC absolutely has to find something besides the Higgs? Or, the much stronger claim, that it has to happen so early? I did not.
Sleuth said:
So how can we so boldly state that the natural scale for the Higgs mass is the Planck mass?
We cannot. That is the whole point. The natural scale for the Higgs mass is at least the scale where new physics gets relevant. New physics at the TeV scale would put this closer to the observed Higgs mass.

There are models that avoid this issue completely, of course, like the relaxion and similar approaches.
 
  • #27
This problem has many facets, even without perturbation theory and the Planck scale. That's why I highlighted in my previous post a related problem, which is non-perturbative in nature, namely the QCD phase transition (one may also consider the weak symmetry breaking scale). While the hierarchy between Lambda_QCD and the cc is much smaller than the one related to the Planck scale, it still involves many orders of magnitude. How would early cosmology know that in the future there will be this non-perturbative phase transition (at relatively low energy) and arrange "beforehand" that after the transition the vacuum energy is practically zero?

It seems pretty obvious that this is an ill-posed question, rather one would expect that there should be some self-tuning, continuously adapting mechanism that more or less automatically guarantees that at very large times of the universe, the vacuum energy tends to zero, independent of the various phase transitions along the way. There are a few proposals of this kind around, but none seem particularly convincing.

So this *is* a most important open problem, despite the perceived arrogance (by the way, there is a saying that arrogance is just competence as perceived from below). The Planck scale is not important; you may call it infinity for all practical purposes at low energies. The question remains how to reconcile this infinite number (or, to a lesser extreme, any other large number such as Lambda_QCD) with the measured "almost zero" value of the cc. To me it appears that this kind of question cannot be meaningfully addressed in the context of particle QFT, so the one-loop argument could be nil anyway (this often happens when particle phenomenologists try to address questions related to gravity using QFT methods). Phenomena like UV/IR mixing and the holographic nature of quantum gravity go far beyond naive particle QFT, and all the difficulties one encounters may just be an indication that one is using too limited a framework in the first place (much like trying to use standard QFT to address the information-loss problem of black holes).
 
  • #28
Just to clarify, I did not mean that any answers in this thread were arrogant. I meant the way the hierarchy problem was presented before people crashed against the LHC's disappointing results (from this point of view). What I can see, for example, is that the way people talk about this problem today is much different from what it was 15 years ago, or even right before the Higgs discovery.

About arrogance, I don't think I agree with the saying. I believe that if you know what you are talking about, you have no reason to become arrogant. When you do, usually the reason is precisely the opposite.

Finally, it's true, the LHC has just started and we don't know what will happen between now and 2035. But what I was referring to was the prospect of finding a huge number of new states, mainly based on hierarchy-related arguments, and that simply did not happen.
The LHC has reached practically its maximum energy and, while we of course still don't know, I think it is reasonable to expect that if it sees something, it will be deviations here and there. Which would be good of course, but it is far from what people were talking about some years ago.

In any case, there is no gain in such discussions; we will just have to wait and see, and I will be happy to be proven wrong if that turns out to be the case. I just wanted to stress that many things that are taken for granted and stated as if they were obvious often are not, and we had proof of this with the hierarchy problem.

Cheers
 
  • #29
A naive scaling suggests that a "10 sigma" observation with 3000/fb could be a ~1 sigma effect today, which means it wouldn't be visible at all. Sure, it is unlikely that we'll find 50 new particles, but even a single particle would be amazing. A single clear violation of the SM elsewhere would be great as well.
LHCb has some curious results in that aspect...
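For reference, the naive scaling referred to here is just "significance grows roughly like the square root of the integrated luminosity" when the uncertainties are statistics-dominated; a minimal sketch with illustrative numbers (about 1% of the eventual 3000/fb recorded so far):
[CODE]
import math

def naive_significance_now(sig_final, lumi_now, lumi_final):
    """Scale a projected significance back to today's dataset,
    assuming statistics-dominated uncertainties."""
    return sig_final * math.sqrt(lumi_now / lumi_final)

# ~30/fb today (~1% of the final dataset) vs 3000/fb at the end of HL-LHC
print(naive_significance_now(10.0, 30.0, 3000.0))  # ~1.0 sigma
[/CODE]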
 
  • #30
Ok, to see why dim reg isn't an answer to the hierarchy problem, or alternatively why you have to talk about Standard Model cutoffs, may I recommend an excellent set of lectures by Nathaniel Craig at a recent IAS summer school, which conveniently appeared on the internet in the past week or so. For people who are serious about learning why the hierarchy problem keeps a lot of physicists up at night (and perhaps why we don't all suffer from mass delusion), I can't think of a better place to start.

Lecture one goes through a lot of what was discussed here in some detail, and the following lectures are interesting as well; highly recommended.
 
  • #31
It has been said several times in this thread already, but I'll say it myself too, in case it helps someone else get the message. The real hierarchy problem is not the problem that one number is small and the other number is big. The problem is that we have theories in which, to match experiment, we need an observed quantity to come out small, and the way we do that is to employ a fundamental parameter that is very big, but which is finetuned so as to be almost entirely canceled out by quantum effects.

Originally I thought Hossenfelder understood this, and was taking the attitude, so what? ... in an example of that hardboiled empiricism which says, to hell with preconceptions and common sense and human intuition; what matters in science is agreement with experiment, and these finetuned theories agree with experiment. She does actually say something like that, it's just that I am no longer sure whether she thinks finetuning means huge cancellations, or just small numbers.

Anyway, for an example of a paper which overtly says, let's forget concerns about finetuning and just see what works, see "The new minimal standard model" of 2004. It never swept the world and it's already out of date (the actual Higgs is a little lighter than the range it allows), but it's an example of what finetuning-be-damned looks like. In that regard, I would contrast it with the more recent "SMASH" model (which has had some fans here), because SMASH includes an axion in order to explain why strong CP violation is effectively zero, something that a willfully finetuned theory like NMSM can just posit.

For my part, I do think finetuning (serious finetuning, involving magic cancellations that wipe out many orders of magnitude) does need to be explained or avoided; and I am one of those people who is impressed by the asymptotic safety prediction of the Higgs mass. That seems to require a "desert" (no new physics) above the electroweak scale, and something unexpected at the quantum gravity scale. For example, despite my love of string theory, I can now appreciate the interest in unconventional models of micro black holes, if that would remove one cause of the need for finetuning.

Two sets of papers that I have started looking at are Strumia et al. on agravity, and Dubovsky et al. on the T-Tbar deformation of CFTs. I don't know what agravity says about black holes, but it is apparently designed to allow what Gorsky et al. call asymptotic security: the Higgs mass goes to zero at the Planck scale, and the Higgs mass beta function goes to zero at the Planck scale. The T-Tbar deformation, meanwhile, is said to have asymptotic fragility; whatever that is, I haven't got the gist of it yet. But apparently Dubovsky's work has caught Strumia's eye, so it goes on the list for consideration.
 
  • #32
I don't know about Sabine, I cannot speak for her, but on my side I think I did my best to understand the arguments people make. My point was entirely different.

There is no doubt that, if there really were a magic cancellation of 15 orders of magnitude, everyone should be bothered. And in my argument I was not trying to say that I can prove that such a cancellation does not take place. The problem is, I am not sure I believe that this cancellation is really inevitable. The problem, from my perspective, is that we insist on applying a mode of reasoning (based on QFT and EFT, if you want) to a problem of which we understand practically nothing, and the reason for this is that it involves gravity at very high energies and its quantization.

Now, of course, lacking anything better, we have to try to do something, and it is good that people think about these scenarios and ask "what would it be if...". The problem is that these things are stated as if they were carved in stone, namely:

"there MUST be something, because there IS such a huge cancellation, don't YOU see it, you fools ?".

I was arguing precisely against this attitude. I always found it pretentious and unscientific. Therefore I am happy that nature (at the LHC in this case) is showing us that things are not as obvious as they were presented, and that maybe we should cut down our egos a bit sometimes and try to think of something new. This is a much more exciting perspective to me than keeping on adding new gauge groups and new particles and new interactions that no one ever saw, to explain phenomena that no one ever saw, just because of a hunch. Sorry... there are indeed things we see and don't understand, dark matter and dark energy for example, which (as far as we can say now) all have to do always and only with gravity.

I understand you will not like the example, but to me this is very similar to Ptolemaic cosmology and epicycles. People had a framework that seemed to work, namely a system made of things rotating on circular orbits around the Earth, and in their minds there was no other possible system. Therefore, of course, they tried to bend and adapt it, always adding a new piece, a new circle, to try to reproduce observations. It was "clear and obvious" to everyone that if the system did not work, it meant that there HAD to be another epicycle; what else could it be?

Fortunately, at some point someone came along and changed the perspective, and then everything was simple and beautiful again, and none of those crazy (according to the Ptolemaic system, "inevitable") epicycles was actually needed.

I know you all understand this story better than I do, so I ask myself: how is it possible that our minds are powerful enough to imagine gauge theories, symmetry breaking, and the bending of space-time under the action of gravity, and yet we cannot learn a bit of humility from stories that have repeated over and over again in our past? So in conclusion I say: fine, if we want to build models and present ideas using the framework that we have, trying to look for a hint of a solution, that is the best we can do now and it is completely legitimate. On my side, I know I am not the new Einstein and I probably will not be the one changing the perspective, so that's all fine. But at least, let's give it the benefit of the doubt, ok?

Cheers
 
  • #33
mfb said:
A naive scaling suggests that a "10 sigma" observation with 3000/fb could be a ~1 sigma effect today, which means it wouldn't be visible at all. Sure, it is unlikely that we'll find 50 new particles, but even a single particle would be amazing. A single clear violation of the SM elsewhere would be great as well.
LHCb has some curious results in that aspect...

It doesn't work like that for a variety of reasons. It turns out that once you get the LHC running at peak energy, there are huge spoilers very early on in the results and there are never cliffhangers.

One is that more data doesn't change your systematic error, it only changes your statistical error. And statistical error has a non-linear relationship to the size of your sample.

Statistical error basically scales as sqrt(1/sample size). To use round numbers, if we have 50/fb now and will have 3000/fb then, the relative improvement in statistical error is sqrt(1/60), so the statistical margin of error might decrease by a factor of about 0.129.

But the relationship between systematic error, statistical error and total error isn't linear either. If your systematic error is 1 and your statistical error is 1 (and currently a lot of results have comparable amounts of systematic and statistical error, so this crude estimate isn't too far off), your total error is about 1.4.

But if your systematic error is 1 and your statistical error is 0.129, your total error is about 1.008.

So a 5 sigma result when it is all over ought to be a 3.6 sigma result now.
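For transparency, here is that arithmetic spelled out with the illustrative numbers above (a fixed systematic error, a statistical error scaling like sqrt(1/luminosity), and the two added in quadrature); note that the replies below dispute these assumptions:
[CODE]
import math

syst = 1.0                                        # assumed fixed
stat_now = 1.0
stat_final = stat_now * math.sqrt(50.0 / 3000.0)  # ~0.129

total_now = math.hypot(syst, stat_now)            # ~1.41
total_final = math.hypot(syst, stat_final)        # ~1.01

# Same signal, so significance scales inversely with the total error:
sigma_final = 5.0
sigma_now = sigma_final * total_final / total_now
print(f"{sigma_now:.1f} sigma today")             # ~3.6
[/CODE]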

We have no 3 sigma new physics resonances yet, and that implies that we are almost certain not to discover any new particles at a "discovery" threshold by the time that the LHC is done.

In truth, it is even worse than that.

Why?

Because lots of SM observations are subject to loop effects that involve all possible ways that a phenomenon could happen, including any new particle that is out there. So, if there is a BSM phenomenon out there, you are not going to see just one experimental anomaly standing alone. You would see multiple experimental anomalies in measurements of different things. And the kind of measurements where you would be seeing multiple anomalies at CMS would be the same kind of measurements where you would be seeing multiple anomalies at ATLAS.

With the possible exception of charged lepton non-universality, we haven't seen anything remotely like that at the LHC. If there were 50 new particles out there, we'd have hundreds of anomalies that we'd be seeing at a significance of about two-thirds of what we will see in a final result (in sigma units). Yet, this clearly isn't happening.
 
  • #34
@Sleuth: I don't see anyone claiming that there must be many orders of cancellation. That is the point: We don't expect such fine-tuning, so we have to figure out what is going on. The ideal result is an explanation for the Higgs mass that does not have excessive fine-tuning.

@ohwilleke: For most searches, the region of phase space accessible with 3000/fb is more than twice as large as the region accessible with 40/fb on a linear mass scale, and you also get a factor of nearly 10 in sensitivity in cross section for a given mass in most places. Per day of running, the early data is more exciting, but taking much more data helps a lot. Systematic errors are not an issue for most searches, and most of them also go down with luminosity because they are uncertainties relative to the signal strength (e. g. luminosity, identification efficiency and so on) or the fitting procedure. Very few searches have systematic uncertainties corresponding to a fixed uncertainty in the signal yield.
What you say is not directly wrong, but your estimate of the importance of these things is completely off.
ohwilleke said:
We have no 3 sigma new physics resonances yet, and that implies that we are almost certain not to discover any new particles at a "discovery" threshold by the time that the LHC is done.
That statement is completely wrong.
 
  • #35
mfb said:
That statement is completely wrong.

His whole point about systematics is completely wrong. More data means more control over systematics.
 
