Sabine Hossenfelder on strong CP, hierarchy

Summary
Sabine Hossenfelder critiques fine-tuning arguments in high-energy physics, particularly the strong CP problem and the Higgs mass hierarchy problem. She argues that calling numerical coincidences, such as the Higgs mass being roughly 15 orders of magnitude smaller than the Planck mass, "unlikely" is meaningless unless one specifies a probability distribution. The discussion highlights the challenge of assigning meaningful probabilities to these coincidences and the role of prior assumptions in theoretical models. Participants express differing views on whether fine-tuning is a genuine problem, and the conversation underscores the complexities surrounding the interpretation of fundamental constants and their significance in physics.
kodama
Sabine Hossenfelder said:

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?
...
And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidence that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!
http://backreaction.blogspot.com/

High energy physicists, any comments?
 
The specific link is here.

I'm not sure there's much point having a parallel debate about this here on PF, rather than in the comments section on Bee's blog (which Bee is more likely to read, and maybe respond). At last count there were already 94 comments there.
 
No matter which prior you use, the probability will be extremely small in all these cases. Which means explanations for a small mass gain orders of magnitude in terms of their relative likelihood.

If you measure a constant of nature to be 1.0000000000000000000146, you would expect some deeper reason why it has to be extremely close to 1 (but not exactly 1). It is not absolutely necessary that there is a reason, but it looks likely. The exact value is not the point, measuring it to more precision doesn't change the argument. The point is the vastly different probability for "so close to 1" and "somewhere between 0 and 2" for every reasonable probability distribution.
The strong CP problem is similar. Yes, it can happen that a phase between 0 and 2 pi is smaller than 0.000000000001 by accident. But do you really expect that?

This is different from the curvature in cosmology. I don't have any problems with parameters that happen to be small where there is no natural scale for them. But the Higgs has a natural scale (the Planck mass), and the CP phase as well (it is an angle).
 
mfb said:
If you measure a constant of nature to be 1.0000000000000000000146, you would expect some deeper reason why it has to be extremely close to 1 (but not exactly 1). It is not absolutely necessary that there is a reason, but it looks likely. The exact value is not the point, measuring it to more precision doesn't change the argument. The point is the vastly different probability for "so close to 1" and "somewhere between 0 and 2" for every reasonable probability distribution.

Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
 
mfb said:
No matter which prior you use, the probability will be extremely small in all these cases.
Well, this is not exactly true as stated. You would need to add some qualifiers on what type of priors you consider "natural". I also must disagree with Sabine; I don't know exactly who she has been talking to, but I know several high-energy physicists who would happily tell you that fine-tuning may not be a problem depending on your assumptions (and some who are way too happy to fine-tune their models).

When it comes to the strong CP-problem and things like flavour mixing, there is a natural measure on those parameters, the Haar measure. Indeed, this would give a flat distribution on the circle for the strong CP phase.
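As a concrete illustration of what such a flat (Haar) prior implies, here is a minimal sketch; the 10^-10 value is used only as an order-of-magnitude stand-in for the experimental limit on the strong CP phase:

```python
# Minimal sketch (illustrative, not from the thread): how much probability a
# flat (Haar) prior on the circle assigns to |theta_bar| < 1e-10.
import math

bound = 1e-10                  # order-of-magnitude stand-in for the experimental limit
circumference = 2 * math.pi    # theta_bar lives on [0, 2*pi)

# "Close to zero" means the arc (-bound, bound) around zero on the circle.
p_flat = 2 * bound / circumference
print(f"P(|theta_bar| < {bound:g}) = {p_flat:.1e}")   # ~3.2e-11
```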

Dr.AbeNikIanEdL said:
Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
With this approach you will get nowhere. The entire experimental analysis using frequentist statistics is based on ordering different experimental outcomes based on how "extreme" they would be within the model. In the case of the strong CP-phase, measuring a value close to zero is extreme in the sense of giving no CP-violation in contrast to the rest of the parameter space. You might, as Sabine says, consider a different prior distribution, but what should the prior distribution depend on if not your underlying model?

I agree with Sabine that you must pay attention to your prior assumptions, but many people will inherently assume some prior and the prior can very well be based on your model assumptions.
 
Dr.AbeNikIanEdL said:
Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
That would be odd as well, although in a different way. The 123456789123456789 would suggest our decimal system is special in some way.

You can find many examples of values that would be strange, but the largest part of the interval [0,2] is not close to some value we would consider "strange".
Orodruin said:
You would need to add some qualifiers on what type of priors you consider "natural".
Let's be generous: Everything that does not differ by more than 10 orders of magnitude on the interval [0,2], or if it does differ by more, favors smaller values (to give logarithms some love).
A prior that gives values like 1.0000000000000000000146 a 10^15 times higher probability than values around 1.362567697948747224894 or any other number like this doesn't look natural to me.

Edit: Finite probabilities for discrete values like 1, 0, 1/2 and similar are fine as well.
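To put rough numbers on this, a small sketch comparing two priors of the kind described above; the window width of 10^-20 around 1 and the lower cutoff of the log-uniform prior are my own illustrative choices:

```python
# Sketch: prior probability of landing within 1e-20 of the value 1 on [0, 2],
# for a flat prior and for a log-uniform prior (density ~ 1/x) that favors
# small values but varies by only ~10 orders of magnitude on the interval.
import math

eps = 1e-20                        # half-width of the "suspiciously close to 1" window

# Flat prior on [0, 2]: density 1/2 everywhere.
p_flat = 2 * eps * 0.5

# Log-uniform prior on [1e-10, 2]: density 1/(x * ln(b/a)); evaluate it near x = 1.
a, b = 1e-10, 2.0
p_log = 2 * eps / (1.0 * math.log(b / a))

print(f"flat prior:        {p_flat:.1e}")   # 1.0e-20
print(f"log-uniform prior: {p_log:.1e}")    # ~8.4e-22
# Both priors make the coincidence absurdly improbable, which is the point.
```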
 
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant. Why? This is a hard hierarchy problem in biology that lacks any natural explanation. o0)
 
Demystifier said:
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant. Why? This is a hard hierarchy problem in biology that lacks any natural explanation. o0)
That has no similarity to the question of the Higgs mass.
We have 13 orders of magnitude between the top and the neutrinos, but unlike the Higgs that doesn't require any fine-tuning.
 
Demystifier said:
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant

I think a closer analogy would be if the mass of an elephant is the same as the mass of a birch tree - to within a nanogram.
 
  • #10
Vanadium 50 said:
I think a closer analogy would be if the mass of an elephant is the same as the mass of a birch tree - to within a nanogram.
And birch trees are the only food elephants eat. And elephants are the only type of animals - to avoid look-elsewhere effects.
 
  • #11
Orodruin said:
With this approach you will get nowhere. The entire experimental analysis using frequentist statistics is based on ordering different experimental outcomes based on how "extreme" they would be within the model. In the case of the strong CP-phase, measuring a value close to zero is extreme in the sense of giving no CP-violation in contrast to the rest of the parameter space. You might, as Sabine says, consider a different prior distribution, but what should the prior distribution depend on if not your underlying model?

Ok, in @mfb's example it was not indicated that there would be anything physically special about 1. In the case of the CP problem I get that the value of 0 at least is singled out as significant by the physics.

mfb said:
That would be odd as well, although in a different way. The 123456789123456789 would suggest our decimal system is special in some way.

That's why I wrote "intended to be random digits"...

mfb said:
You can find many examples of values that would be strange, but the largest part of the interval [0,2] is not close to some value we would consider "strange".

I am not sure about this. I guess you could find patterns in almost all finite series of digits, or at least in so many that it is not surprising if one turns up somewhere, so this seems to depend on the arbitrary decision of which kinds of patterns make a number count as interesting.
 
  • #12
Dr.AbeNikIanEdL said:
I am not sure about this. I guess you could find patterns in almost all finite series of digits, or at least in so many that it is not surprising if one turns up somewhere, so this seems to depend on the arbitrary decision of which kinds of patterns make a number count as interesting.
You can always find a "777" in the digit sequence or something like that, but that is nowhere close to the pattern 1.0000000000000000000246 has (the last digits here are arbitrary, the zeros are not). It does not matter what kind of patterns you include - the observed Higgs bare mass to Planck mass ratio will stand out for every collection that is somewhat reasonable.
 
  • #13
On the Strong CP problem, does she really believe that this is an accident? "Oh, the angle has to be something - why not less than 10^-10 radians?"

On the Higgs hierarchy problem, there is a difference. If it were a matter of one number taking the value that it does to ~36 decimal places, then I agree: how does one even talk about this in a probabilistic sense? But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.
 
  • #14
mfb said:
But the Higgs has a natural scale (the Planck mass).

What about theories that suggest the natural scale for the Higgs mass is at the Fermi scale?
 
  • #15
Vanadium 50 said:
On the Strong CP problem, does she really believe that this is an accident? "Oh, the angle has to be something - why not less than 10^-10 radians?"

On the Higgs hierarchy problem, there is a difference. If it were a matter of one number taking the value that it does to ~36 decimal places, then I agree: how does one even talk about this in a probabilistic sense? But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.

I wrote on her board already, and this was my point. It's one thing when you have an effective field theory and a cutoff scale, and you worry about what natural values dimensionless ratios must take. There you can definitely talk about Bayesian priors, and I agree with her that this is a fuzzy question (I also agree with others that almost any prior you pick disfavors a small value).

But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.
 
  • #16
kodama said:
What about theories that suggest the natural scale for the Higgs mass is at the Fermi scale?
They are in the group of "proposed solutions to the hierarchy problem".
 
  • #17
Vanadium 50 said:
On the Higgs hierarchy problem, there is a difference. If it were a matter of one number taking the value that it does to ~36 decimal places, then I agree: how does one even talk about this in a probabilistic sense? But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.

But there is still only one value that enters the calculation of any observable, the resulting Higgs mass?

Haelfix said:
But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.

What exactly is meant by "explain" the ##m^2## term? Could you suggest some reference for further reading?
 
  • #18
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ##m_\mathrm{bare}^2 + c\, m_P^2 = m_\mathrm{obs}^2##, where ##m_\mathrm{obs}## is the mass we measure in the lab and ##c## is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_\mathrm{bare}## and ##c\, m_P##.
We know ##m_\mathrm{obs} = 125## GeV and ##m_P = 1.22\times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625. The completely unrelated value has to match the other value extremely closely to get a result that is so much smaller. Possible? Sure. Likely? Nah.
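For concreteness, a small sketch of the digit-counting behind this; taking ##c = 1## is my simplification (the exact prefactor shifts the count by only a digit or two):

```python
# Sketch of the required cancellation, with c = 1 assumed for simplicity.
import math
from decimal import Decimal, getcontext
getcontext().prec = 50

m_obs = Decimal(125)            # observed Higgs mass, GeV
m_P   = Decimal("1.22e19")      # Planck mass, GeV

loop      = m_P ** 2            # stand-in for the loop correction c * m_P^2
m_bare_sq = m_obs ** 2 - loop   # bare mass^2 needed to end up at m_obs

# For the sum to come out ~10^34 times smaller than either term, the bare term
# has to track the loop term to roughly this many decimal digits:
digits = math.log10(float(loop / m_obs ** 2))
print(f"required cancellation: ~{digits:.0f} digits")   # ~34
```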
 
  • #19
mfb said:
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ##m_\mathrm{bare}^2 + c\, m_P^2 = m_\mathrm{obs}^2##, where ##m_\mathrm{obs}## is the mass we measure in the lab and ##c## is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_\mathrm{bare}## and ##c\, m_P##.
We know ##m_\mathrm{obs} = 125## GeV and ##m_P = 1.22\times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625. The completely unrelated value has to match the other value extremely closely to get a result that is so much smaller. Possible? Sure. Likely? Nah.
Is the conformal solution still viable, or has the LHC ruled it out?
 
  • #20
mfb said:
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ##m_\mathrm{bare}^2 + c\, m_P^2 = m_\mathrm{obs}^2##, where ##m_\mathrm{obs}## is the mass we measure in the lab and ##c## is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_\mathrm{bare}## and ##c\, m_P##.
We know ##m_\mathrm{obs} = 125## GeV and ##m_P = 1.22\times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625.

I know. Still, ##m_\mathrm{bare}## is not a measurable value and should have no meaning at all. Only ##m_\mathrm{obs}## should be relevant for any physics and is the only value that enters any calculation?
 
  • #21
Dr.AbeNikIanEdL said:
I know. Still, ##m_\mathrm{bare}## is not a measurable value and should have no meaning at all. Only ##m_\mathrm{obs}## should be relevant for any physics and is the only value that enters any calculation?
It is not observable, but it enters calculations - the calculation of the observable Higgs mass, in particular. You can't work without it.
 
  • #22
But the observable Higgs mass is not calculable in the Standard Model; it is fixed by observation like any other mass in the SM. Anything else only depends on this value of ##m_\mathrm{obs}## and never on ##m_\mathrm{bare}##.
 
  • #23
Guys, I don't see why there is such a focus on the Higgs mass problem per se.

Sure, the corrections to it are huge and this does not look good, but this is not the only such problem in the SM. For example, the vacuum energy situation is even worse: the vacuum energy, if calculated in the SM, is divergent. This is worse than any fine-tuning.

To me, it is not a disaster; it just implies that more development of the theory is in order.

And after the Higgs discovery, we now have some hints.

Prior to that, we only knew that the sum of the squares of the masses of all fermions is suspiciously close to half the square of the Higgs VEV, which might just be a hint that the squares of all the fermion Yukawa couplings must (for some yet unknown reason) add up to one.

But now that we know the mass of the Higgs, the sum of the squares of the masses of all bosons is *also* very close to half the square of the Higgs VEV (within ~0.3%).
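A rough numerical check of these two observations; the mass values are approximate, and counting H, Z and W once each (and only the charged fermions, dominated by the top) is my reading of the claim rather than something spelled out in the post:

```python
# Back-of-the-envelope check of the quoted sum rules (approximate masses in GeV).
v = 246.22                                    # Higgs VEV
target = v**2 / 2

fermions = [172.8, 4.18, 1.78, 1.27, 0.106]   # t, b, tau, c, mu; the rest are negligible
bosons   = [125.1, 91.19, 80.38]              # H, Z, W (counted once each, by assumption)

sum_f = sum(m**2 for m in fermions)
sum_b = sum(m**2 for m in bosons)

print(f"v^2/2         = {target:8.0f} GeV^2")
print(f"sum m_f^2     = {sum_f:8.0f} GeV^2  ({(sum_f/target - 1)*100:+.1f}%)")  # ~ -1.4%
print(f"sum m_boson^2 = {sum_b:8.0f} GeV^2  ({(sum_b/target - 1)*100:+.1f}%)")  # ~ +0.4%
```

With these particular inputs the boson sum comes out about 0.4% above ##v^2/2##, in the same ballpark as the ~0.3% quoted above; the exact figure depends on which mass values one plugs in.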

And the Higgs and top masses are such that the SM vacuum seems to lie very close to the stability/metastability line.

And also, the good old Koide rule is there too.

Something is fishy here. The mass of the Higgs is not "randomly chosen by Nature", and neither are the masses of the fermions.
 
  • #24
Haelfix said:
I wrote on her board already, and this was my point. It's one thing when you have an effective field theory and a cutoff scale, and you worry about what natural values dimensionless ratios must take. There you can definitely talk about Bayesian priors, and I agree with her that this is a fuzzy question (I also agree with others that almost any prior you pick disfavors a small value).

But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.
You wrote a very good comment on her board. She obviously lacks understanding of the subject, but nevertheless makes strong claims (even hinting that she considers thousands of other physicists to be wrong - that is always a very bad sign).

One may also add the following perspective to the argument: even after the QCD phase transition with its vacuum condensate, the net result for the cc is almost zero. This is a relatively low-energy phenomenon compared to the scale of gravity. Assuming that the vacuum energy goes with some power of Lambda_QCD, Lambda_QCD must be fine-tuned to fantastic precision, to exactly the right value, so that after the phase transition it almost precisely cancels all the other contributions. How would the dynamics of the big bang know beforehand that, after following through all sorts of phase transitions, including the QCD one, the net effect is practically zero?

This problem is entirely different than what she misleadingly writes in her blog. Sigh.
 
  • #25
I would like to make a comment.

The hierarchy problem has been bugging the community for a long time, and for a long time very clever people have been going around making claims like the ones we read here, i.e., there is a natural scale (the Planck mass) for the Higgs mass, there is therefore a fine-tuning problem, and there must therefore be new physics at the TeV scale. And this claim has been made so boldly and arrogantly that almost everyone would just take it for granted, and entire physics careers have been built on solutions to the hierarchy problem.

To me, while the problem is indeed an interesting one, the real issue here was and is precisely the arrogance and superiority with which these claims were and sometimes still are made, as if this were so obvious that whoever does not see it "has some kind of problem".

Now of course we can discuss this as much as we want, but there is an indisputable fact: the LHC has not seen anything, in spite of the claims of all those very clever people. So this is a fact: the claims, as they were made, were plainly wrong. Period. This does not mean that the problem does not exist or that there is no new physics, but the implications that were previously taken for granted as practically obvious were and are far from obvious, and as a matter of fact wrong, at least in the form they were made. So this is just to say, to whoever does not see this hierarchy problem, that the experimental evidence seems to be on your side, which is something we should always remember as physicists. We might be clever, but nature doesn't need to follow our logic.

Having said that, there are (for me) two assumptions being made here. First of all, people talk about the Planck mass as a natural scale. This assumes somehow that gravity has a role in all this; you need G to enter one way or another. Now, we know that gravity is a very resilient theory that does not like to be quantized and seems to be deeply different from anything else we know. So how can we so easily make such a strong claim? Who tells us that gravity does not follow completely different patterns? It might not even be a fundamental force, but maybe an emergent one, as some people have claimed, and G would then be more like the Boltzmann constant. But without having to go that far, QFT without gravity does not know anything about G, and the Higgs does not know anything about G. So how can we so boldly state that the natural scale for the Higgs mass is the Planck mass?

We can look at this from a slightly different perspective, namely computing the radiative corrections to the Higgs mass. People say, oh my god, the radiative corrections go as the cutoff energy squared, which is a disaster (if the cutoff is the Planck scale, it's easy to see where the fine-tuning comes from).

But again, we are assuming here that there is a cutoff at the Planck scale; we are computing just one-loop corrections to the Higgs mass, knowing full well that perturbation theory is a partial answer to what happens in reality (the perturbation series itself does not converge, and we have no idea what kind of physics could arise non-perturbatively); and finally we are using a regularization scheme which is beyond good and bad. Cutoffs are bad; they break any possible symmetry of the theory. If you use a reasonable scheme, like dim reg, the mass squared gives way to the usual 1/eps terms, which disappear in UV renormalization like any other infinity in quantum field theory, and no one would be bothered.

Now please don't misunderstand me. I don't want to say here that there is no need for new physics, nor that the SM of particle physics is the end of the story. But clearly, if we remove the cutoff and assume that QFT is mathematically well defined everywhere (still possibly without being the final theory!), then the problem simply does not exist.

So we might now end up in a philosophical discussion, but what is more probable here? That the Higgs mass is affected by the Planck scale, that we should see some physics at a few TeV that cures this, but everything conspires against us seeing signs of any new phenomena up to a couple of TeV, etc. - or are we maybe just reckoning without our host here?

Cheers
 
  • #26
Sleuth said:
the LHC has not seen anything
With 1% of the final dataset.

Did you see anyone claiming that the LHC absolutely has to find something besides the Higgs? Or, the much stronger claim, that it has to happen so early? I did not.
Sleuth said:
So how can we so boldly state that the natural scale for the higgs mass is the Planck mass?
We cannot. That is the whole point. The natural scale for the Higgs mass is at least the scale where new physics gets relevant. New physics at the TeV scale would put this closer to the observed Higgs mass.

There are models that avoid this issue completely, of course, like the relaxion and similar approaches.
 
  • #27
This problem has many facets, even without perturbation theory and the Planck scale. That's why I highlighted in my previous post a related problem, which is non-perturbative in nature, namely the QCD phase transition (one may also consider the weak symmetry breaking scale). While the hierarchy between Lambda_QCD and the cc is much smaller than the one related to the Planck scale, it still involves many orders of magnitude. How would early cosmology know that in the future there will be this non-perturbative phase transition (at relatively low energy) and arrange "beforehand" that after the transition the vacuum energy is practically zero?
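To put rough orders of magnitude on the hierarchies being compared here (the input values below are approximate and chosen by me for illustration):

```python
# Orders-of-magnitude sketch: observed vacuum energy density vs. Lambda_QCD^4
# and vs. the Planck mass to the fourth power.
import math

rho_cc   = 2.6e-47     # observed vacuum energy density, GeV^4 (roughly (2 meV)^4)
lam_qcd  = 0.2         # Lambda_QCD, GeV (rough)
m_planck = 1.22e19     # Planck mass, GeV

print(f"Lambda_QCD^4 / rho_cc : ~10^{math.log10(lam_qcd**4 / rho_cc):.0f}")   # ~10^44
print(f"m_Planck^4 / rho_cc   : ~10^{math.log10(m_planck**4 / rho_cc):.0f}")  # ~10^123
# Much smaller than the Planck-scale hierarchy, but still dozens of orders of magnitude.
```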

It seems pretty obvious that this is an ill-posed question; rather, one would expect some self-tuning, continuously adapting mechanism that more or less automatically guarantees that at very late times of the universe the vacuum energy tends to zero, independent of the various phase transitions along the way. There are a few proposals of this kind around, but none seem particularly convincing.

So this *is* a most important open problem, despite the perceived arrogance (btw, there is a saying that arrogance is just competence as perceived from below). The Planck scale is not important; you may call it infinity for any practical purpose at low energies. The question remains how to reconcile this infinite number (or, to a lesser extreme, any other large number such as Lambda_QCD) with the measured "almost zero" value of the cc. To me it appears that this kind of question cannot be meaningfully addressed in the context of particle QFT, so the one-loop argument could be nil anyway (this often happens when particle phenomenologists try to address questions related to gravity using QFT methods). Phenomena like UV/IR mixing and the holographic nature of quantum gravity go far beyond naive particle QFT, and all the difficulties one encounters may just be an indication that one is using too limited a framework in the first place (much like trying to use standard QFT to address the information loss problem of black holes).
 
  • #28
Just to clarify, I did not mean that any answers in this thread were arrogant. I meant the way the hierarchy problem was presented before people crashed against the LHC's disappointing results (from this point of view). What I can see, for example, is that the way people talk about this problem today is much different from what it was 15 years ago, or even right before the Higgs discovery.

About arrogance, I don't think I agree with the saying. I believe that if you know what you are talking about, you have no reason to become arrogant. When you do, usually the reason is precisely the opposite.

Finally, it's true, the LHC has just started and we don't know what will happen until 2035. But what I was referring to was the prospect of finding a huge number of new states, mainly based on hierarchy-related arguments, and that simply did not happen.
The LHC has reached practically its maximum energy and, while of course we still don't know, I think it is reasonable to expect that if it does see something, it will be deviations here and there. Which would be good, of course, but it is far from what people were talking about some years ago.

In any case, there is no gain in such discussions; we will just have to wait and see, and I will be happy to be proven wrong if that turns out to be the case. I just wanted to stress that many things that are taken for granted and stated as if they were obvious are often not, and we had proof of this with the hierarchy problem.

Cheers
 
  • #29
A naive scaling suggests that a "10 sigma" observation with 3000/fb could be a ~1 sigma effect today, which means it wouldn't be visible at all. Sure, it is unlikely that we'll find 50 new particles, but even a single particle would be amazing. A single clear violation of the SM elsewhere would be great as well.
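A one-line version of that naive scaling (significance taken to grow like the square root of the integrated luminosity; the ~30/fb figure for the dataset available at the time is my assumption):

```python
# Naive luminosity scaling of a significance: sigma ~ sqrt(L).
import math

sigma_full = 10.0     # significance assumed with the full 3000/fb
lumi_full  = 3000.0   # /fb
lumi_now   = 30.0     # /fb, rough dataset available at the time (assumed)

sigma_now = sigma_full * math.sqrt(lumi_now / lumi_full)
print(f"~{sigma_now:.0f} sigma with {lumi_now:.0f}/fb")   # ~1 sigma: effectively invisible
```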
LHCb has some curious results in that aspect...
 
  • #30
Ok, to see why DimReg isn't an answer to the hierarchy problem, or alternatively why you have to talk about Standard Model cutoffs, could I recommend an excellent presentation by Nathaniel Craig at a recent IAS summer school that has conveniently appeared on the internet in the past week or so. For people who are serious about learning why the hierarchy problem keeps a lot of physicists up at night (and perhaps why we don't all suffer from mass delusion), I can't think of a better place to start.

Lecture one goes through a lot of what was discussed here in some detail, and the following lectures are interesting as well. Highly recommended.
 
