Sabine Hossenfelder on strong CP, hierarchy
  • Thread starter: kodama

Sabine Hossenfelder writes:

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?
...
And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidence that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!
http://backreaction.blogspot.com/

High energy physicists, any comments?
 
Likes ohwilleke and MrRobotoToo
The specific link is here.

I'm not sure there's much point having a parallel debate about this here on PF, rather than in the comments section on Bee's blog (which Bee is more likely to read, and maybe respond to). At last count there were already 94 comments there.
 
Likes arivero
No matter which prior you use, the probability will be extremely small in all these cases. Which means explanations for a small mass gain orders of magnitude in terms of their relative likelihood.

If you measure a constant of nature to be 1.0000000000000000000146, you would expect some deeper reason why it has to be extremely close to 1 (but not exactly 1). It is not absolutely necessary that there is a reason, but it looks likely. The exact value is not the point, measuring it to more precision doesn't change the argument. The point is the vastly different probability for "so close to 1" and "somewhere between 0 and 2" for every reasonable probability distribution.
The strong CP problem is similar. Yes, it can happen that a phase between 0 and 2 pi is smaller than 0.000000000001 by accident. But do you really expect that?

This is different from the curvature in cosmology. I don't have any problems with parameters that happen to be small where there is no natural scale for them. But the Higgs has a natural scale (the Planck mass), and the CP phase as well (it is an angle).
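To make this concrete with one example of a "reasonable" prior: for ##x## flat on ##[0,2]##,
$$P\big(|x-1|<10^{-20}\big)=10^{-20},$$
while any order-one neighbourhood of a generic value carries order-one probability; smooth deformations of the prior change these numbers only by modest factors.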
 
Likes kodama
mfb said:
If you measure a constant of nature to be 1.0000000000000000000146, you would expect some deeper reason why it has to be extremely close to 1 (but not exactly 1). It is not absolutely necessary that there is a reason, but it looks likely. The exact value is not the point, measuring it to more precision doesn't change the argument. The point is the vastly different probability for "so close to 1" and "somewhere between 0 and 2" for every reasonable probability distribution.

Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
 
mfb said:
No matter which prior you use, the probability will be extremely small in all these cases.
Well, this is not exactly true as stated. You would need to add some qualifiers on what type of priors you consider "natural". I also must disagree with Sabine; I don't know exactly who she has been talking to, but I know several high-energy physicists who would happily tell you that fine tuning may not be a problem depending on your assumptions (and some who are way too happy to fine tune their models).

When it comes to the strong CP-problem and things like flavour mixing, there is a natural measure on those parameters, the Haar measure. Indeed, this would give a flat distribution on the circle for the strong CP phase.
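With that flat Haar prior, the "coincidence" becomes quantitative. Taking ##\bar\theta## uniform on ##[-\pi,\pi)## and the ##10^{-10}## bound mentioned later in the thread,
$$P\big(|\bar\theta|<10^{-10}\big)=\frac{2\times 10^{-10}}{2\pi}\approx 3\times 10^{-11}.$$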

Dr.AbeNikIanEdL said:
Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
With this approach you will get nowhere. The entire experimental analysis using frequentist statistics is based on ordering different experimental outcomes based on how "extreme" they would be within the model. In the case of the strong CP-phase, measuring a value close to zero is extreme in the sense of giving no CP-violation in contrast to the rest of the parameter space. You might, as Sabine says, consider a different prior distribution, but what should the prior distribution depend on if not your underlying model?

I agree with Sabine that you must pay attention to your prior assumptions, but many people will inherently assume some prior and the prior can very well be based on your model assumptions.
 
Dr.AbeNikIanEdL said:
Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?
That would be odd as well, although in a different way. The 123456789123456789 would suggest our decimal system is special in some way.

You can find many examples of values that would be strange, but the largest part of the interval [0,2] is not close to some value we would consider "strange".
Orodruin said:
You would need to add some qualifiers on what type of priors you consider "natural".
Let's be generous: Everything that does not differ by more than 10 orders of magnitude on the interval [0,2], or if it does differ by more, favors smaller values (to give logarithms some love).
A prior that gives values like 1.0000000000000000000146 a 10^15 times higher probability than values around 1.362567697948747224894 or any other number like this doesn't look natural to me.

Edit: Finite probabilities for discrete values like 1, 0, 1/2 and similar are fine as well.
 
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant. Why? This is a hard hierarchy problem in biology that lacks any natural explanation. o0)
 
Likes malawi_glenn and ohwilleke
Demystifier said:
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant. Why? This is a hard hierarchy problem in biology that lacks any natural explanation. o0)
That has no similarity to the question of the Higgs mass.
We have 13 orders of magnitude between the top and the neutrinos, but unlike the Higgs that doesn't require any fine-tuning.
 
Demystifier said:
The mass of an elephant is 9 orders of magnitude larger than the mass of an ant

I think a closer analogy would be if the mass of an elephant is the same as the mass of a birch tree - to within a nanogram.
 
Likes malawi_glenn, atyy and arivero
  • #10
Vanadium 50 said:
I think a closer analogy would be if the mass of an elephant is the same as the mass of a birch tree - to within a nanogram.
And birch trees are the only food elephants eat. And elephants are the only type of animals - to avoid look-elsewhere effects.
 
  • #11
Orodruin said:
With this approach you will get nowhere. The entire experimental analysis using frequentist statistics is based on ordering different experimental outcomes based on how "extreme" they would be within the model. In the case of the strong CP-phase, measuring a value close to zero is extreme in the sense of giving no CP-violation in contrast to the rest of the parameter space. You might, as Sabine says, consider a different prior distribution, but what should the prior distribution depend on if not your underlying model?

Ok, in the example @mfb gave it was not indicated that there would be something physically special about 1. In the case of the CP problem I get that the value of 0 at least is singled out as significant by the physics.

mfb said:
That would be odd as well, although in a different way. The 123456789123456789 would suggest our decimal system is special in some way.

That's why I wrote "intended to be random digits"...

mfb said:
You can find many examples of values that would be strange, but the largest part of the interval [0,2] is not close to some value we would consider "strange".

I am not sure about this. I guess you could find patterns in almost all finite series of digits, or at least in so many that it is not surprising if one turns up somewhere, so this seems to depend on the arbitrary decision of what kinds of patterns you allow in order to consider a number interesting.
 
  • #12
Dr.AbeNikIanEdL said:
I am not sure about this. I guess you could find patterns in almost all finite series of digits, or at least in so many that it is not surprising if one turns up somewhere, so this seems to depend on the arbitrary decision of what kinds of patterns you allow in order to consider a number interesting.
You can always find a "777" in the digit sequence or something like that, but that is nowhere close to the pattern 1.0000000000000000000246 has (the last digits here are arbitrary, the zeros are not). It does not matter what kind of patterns you include - the observed Higgs bare mass to Planck mass ratio will stand out for every collection that is somewhat reasonable.
 
  • #13
On the Strong CP problem, does she really believe that this is an accident? "Oh, the angle has to be something - why not less than 10^-10 radians?"

On the Higgs hierarchy problem, there is a difference between having one number take the value that it does to ~36 decimal places, and I agree, how does one even talk about this in a probabilistic sense. But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.
 
  • #14
mfb said:
But the Higgs has a natural scale (the Planck mass).

what about theories that suggest the natural scale of the Higgs is at the Fermi scale?
 
  • #15
Vanadium 50 said:
On the Strong CP problem, does she really believe that this is an accident? "Oh, the angle has to be something - why not less than 10^-10 radians?"

On the Higgs hierarchy problem, there is a difference between having one number take the value that it does to ~36 decimal places, and I agree, how does one even talk about this in a probabilistic sense. But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.

I wrote on her board already, and this was my point. It's one thing when you have an effective field theory and a cutoff scale, and you worry about what natural values dimensionless ratios must be. There you can definitely talk about Bayesian priors, and I agree with her that this is a fuzzy question (I also agree with others that almost any prior you pick disfavors a small value).

But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.
 
  • #16
kodama said:
what about theories that suggest the natural scale of the Higgs is at the Fermi scale?
They are in the group of "proposed solutions to the hierarchy problem".
 
  • #17
Vanadium 50 said:
On the Higgs hierarchy problem, there is a difference between having one number take the value that it does to ~36 decimal places, and I agree, how does one even talk about this in a probabilistic sense. But that's not what we have. We have two numbers with two different physical sources that are the same to 36 decimal places. This seems unlikely to be accidental.

But there is still only one value that enters the calculation of any observable, the resulting Higgs mass?

Haelfix said:
But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.

What exactly is meant by "explain" the ##m^2## term? Could you suggest some reference for further reading?
 
  • #18
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ## m_{bare}^2 + c\, m_P^2 = m_{obs}^2## where ##m_{obs}## is the mass we measure in the lab, and c is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_{bare}## and ##c\, m_P##.
We know ##m_{obs} = 125## GeV, and ##m_P = 1.22\times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625. The completely unrelated value has to match the other value extremely closely to get a result that is so much smaller. Possible? Sure. Likely? Nah.
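As a minimal numerical sketch of the size of that cancellation (taking c = 1 purely for illustration; the exact prefactor doesn't change the conclusion):

Code:
# Rough size of the cancellation needed if the loop correction is of order
# the squared Planck mass. Illustration only: c is set to 1 here.
m_obs = 125.0     # observed Higgs mass in GeV
m_P = 1.22e19     # Planck mass in GeV

correction = m_P**2              # stand-in for c * m_P^2, in GeV^2
target = m_obs**2                # observed m_H^2, in GeV^2
m_bare_sq = target - correction  # what the bare term then has to supply

print(f"m_obs^2 / m_P^2 ~ {target / correction:.1e}")  # ~1e-34
print(f"required m_bare^2 ~ {m_bare_sq:.6e} GeV^2")    # cancels m_P^2 to ~34 digits

With c = 1, two numbers of order 10^38 have to agree in roughly their first 34 digits to leave behind something of order 10^4; a different prefactor shifts this by only a few digits.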
 
Likes Spinnor
  • #19
mfb said:
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ## m_{bare}^2 + c\, m_P^2 = m_{obs}^2## where ##m_{obs}## is the mass we measure in the lab, and c is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_{bare}## and ##c\, m_P##.
We know ##m_{obs} = 125## GeV, and ##m_P = 1.22\times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625. The completely unrelated value has to match the other value extremely closely to get a result that is so much smaller. Possible? Sure. Likely? Nah.
Is the conformal solution still viable, or has the LHC ruled it out?
 
  • #20
mfb said:
The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: ## m_{bare}^2 + c\, m_P^2 = m_{obs}^2## where ##m_{obs}## is the mass we measure in the lab, and c is some numerical prefactor that depends on details not relevant here. There is no known relation between ##m_{bare}## and ##c\, m_P##.
We know ##m_{obs} = 125## GeV, and ##m_P = 1.22\times 10^{19}## GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625.

I know. Still, ##m_\mathrm{bare}## is not a measurable value and should have no meaning at all. Only ##m_\mathrm{obs}## should be relevant for any physics and is the only value that enters any calculation?
 
  • #21
Dr.AbeNikIanEdL said:
I know. Still, ##m_\mathrm{bare}## is not a measurable value and should have no meaning at all. Only ##m_\mathrm{obs}## should be relevant for any physics and is the only value that enters any calculation?
It is not observable, but it enters calculations - the calculation of the observable Higgs mass, in particular. You can't work without it.
 
Likes ohwilleke
  • #22
But the observable Higgs mass is not calculable in the Standard Model; it is fixed by observation, like any other mass in the SM. Anything else only depends on this value of ##m_\mathrm{obs}## and never on ##m_\mathrm{bare}##.
 
Likes ohwilleke
  • #23
Guys, I don't see why there is such a focus on the Higgs mass problem per se.

Sure, the corrections to it are huge and this does not look good, but this is not the only such problem in the SM. For example, the vacuum energy situation is even worse - the vacuum energy, if calculated in the SM, is divergent. This is worse than any fine-tuning.

To me, it is not a disaster, it just implies that more development of the theory is in order.

And there are hints we now have after the Higgs discovery.

Prior to that, we only knew that the sum of the squares of the masses of all fermions is suspiciously close to half the square of the Higgs VEV, which might just be a hint that the squares of all fermion Yukawa couplings must (for some yet unknown reason) add up to one.

But now, when we know the mass of the Higgs, the sum of the squares of the masses of all bosons is *also* very close to half the square of the Higgs VEV (within ~0.3%).

And the Higgs and top masses are such that the SM vacuum seems to lie very close to the stability/metastability line.

And the good old Koide rule is there too.

Something is fishy here. The mass of the Higgs is not "randomly chosen by Nature", and neither are the masses of the fermions.
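For what it's worth, a quick numerical check of those two sums, using approximate PDG masses and counting each particle species once (one possible reading of the claim; the exact percentages shift a bit with the input values):

Code:
v = 246.22  # Higgs VEV in GeV
target = v**2 / 2

# approximate masses in GeV
fermions = {"top": 172.7, "bottom": 4.18, "charm": 1.27, "strange": 0.093,
            "up": 0.0022, "down": 0.0047,
            "tau": 1.777, "muon": 0.1057, "electron": 0.000511}
bosons = {"Higgs": 125.25, "Z": 91.19, "W": 80.38}

sum_f = sum(m**2 for m in fermions.values())
sum_b = sum(m**2 for m in bosons.values())

print(f"v^2/2        = {target:9.1f} GeV^2")
print(f"sum_f m_f^2  = {sum_f:9.1f} GeV^2  ratio {sum_f / target:.3f}")
print(f"sum_b m_b^2  = {sum_b:9.1f} GeV^2  ratio {sum_b / target:.3f}")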
 
Likes ohwilleke
  • #24
Haelfix said:
I wrote on her board already, and this was my point. It's one thing when you have an effective field theory and a cutoff scale, and you worry about what natural values dimensionless ratios must be. There you can definitely talk about Bayesian priors, and I agree with her that this is a fuzzy question (I also agree with others that almost any prior you pick disfavors a small value).

But this isn't what the core problem is. Just like Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs mass^2 term, you have much bigger problems to worry about. Namely, how to make extremely normal contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.
You wrote a very good comment on her board. She obviously lacks understanding of the subject, but nevertheless makes strong claims (even hinting that she considers thousands of other physicists to be wrong - that is always a very bad sign).

One may also add the following perspective to the argument: even after the QCD phase transition with its vacuum condensate, the net result for the cc is almost zero. This is a relatively low-energy phenomenon compared to the scale of gravity. Assuming that the vacuum energy goes with some power of Lambda_QCD, Lambda_QCD must be fine-tuned to fantastic precision, to exactly the right value, so that after the phase transition it almost precisely cancels all the other contributions. How would the dynamics of the big bang know beforehand that, after running through all sorts of phase transitions, including the QCD one, the net effect is practically zero?

This problem is entirely different than what she misleadingly writes in her blog. Sigh.
 
Likes atyy, vanhees71 and protonsarecool
  • #25
I would like to make a comment.

The hierarchy problem has been bugging the community for a long time, and for a long time very clever people have been going around making claims like the ones we read here, i.e. there is a natural scale (the Planck mass) for the Higgs mass, there is therefore a fine-tuning problem, and there must therefore be new physics at the TeV scale. And this claim has been made so boldly and arrogantly that almost everyone would just take it for granted, and entire physics careers have been built on solutions to the hierarchy problem.

To me, while the problem is indeed an interesting one, the real issue here was and is precisely the arrogance and superiority with which these claims were, and sometimes still are, made, as if this were so obvious and whoever does not see it "has some kind of problem".

Now of course we can discuss this as much as we want, but there is an indisputable fact: LHC has not seen anything, in spite of the claims of all those very clever people. So this is a fact: the claims, as they were made, were plainly wrong. Period. This does not mean that the problem does not exist or that there is no new physics, but the implications that were taken for granted before as practically obvious were and are far from obvious, and as a matter of fact wrong, at least in the form they were made. So this is just to say, to whoever does not see this hierarchy problem, that experimental evidence seems to be on your side, which is what we should always remember as physicists. We might be clever, but nature doesn't need to follow our logic.

Having said that, there are (for me) two assumptions being made here. First of all, people talk about the Planck mass as a natural scale. This assumes somehow that gravity has a role in all this: you need G to enter one way or another. Now we know that gravity is a very resilient theory, does not like to be quantized, and seems to be deeply different from anything else we know. So how can we so easily make such a strong claim? Who tells us that gravity does not follow completely different patterns? It might not even be a fundamental force, but maybe an emergent one, as some people have claimed, and G would then be more like the Boltzmann constant. But without having to go so far: QFT without gravity does not know anything about G, the Higgs does not know anything about G. So how can we so boldly state that the natural scale for the Higgs mass is the Planck mass?

We can look at this from a slightly different perspective, namely computing the radiative corrections to the Higgs mass. People say: oh my god, the radiative corrections go as the cutoff energy squared, which is a disaster (if the cutoff is the Planck scale, it's easy to see where the fine-tuning comes from).

But again, we are assuming here that there is a cutoff at the Planck scale, we are computing just one-loop corrections to the Higgs mass while knowing very well that perturbation theory is only a partial answer to what happens in reality (the perturbation series itself does not converge, and we have no idea what kind of physics could arise non-perturbatively), and finally we are using a regularization scheme which is beyond good and bad. Cutoffs are bad; they break any possible symmetry of the theory. If you use a reasonable scheme, like dim reg, the mass squared gives way to the usual 1/eps terms, which disappear in UV renormalization like any other infinity in quantum field theory, and no one would be bothered.
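(Schematically, and with numerical coefficients and signs suppressed, the one-loop top contribution being discussed looks like
$$\delta m_H^2 \sim \frac{y_t^2}{16\pi^2}\,\Lambda^2 \quad\text{with a hard cutoff } \Lambda, \qquad \delta m_H^2 \sim \frac{y_t^2}{16\pi^2}\,m_t^2\left(\frac{1}{\epsilon}+\log\frac{\mu^2}{m_t^2}+\dots\right) \quad\text{in dim reg,}$$
so the quadratic sensitivity to a huge scale is tied to the cutoff regulator.)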

Now please don't misunderstand me. Here I don't want to say that there is no need for new physics, nor that the SM of particle physics is the end of the story. But clearly, if we remove the cutoff and assume that QFT is mathematically well defined everywhere (still possibly without being the final theory!), then the problem simply does not exist.

So we might now end up in a philosophical discussion, but what is more probable here? That the Higgs mass is affected by the Planck scale, that we should see some physics at a few TeV that cures this, but everything conspires against us seeing signs of any new phenomena up to a couple of TeV, etc. etc.? Or are we maybe just reckoning without our host here?

Cheers
 
  • #26
Sleuth said:
LHC has not seen anything
With 1% of the final dataset.

Did you see anyone claiming that the LHC absolutely has to find something besides the Higgs? Or, the much stronger claim, that it has to happen so early? I did not.
Sleuth said:
So how can we so boldly state that the natural scale for the Higgs mass is the Planck mass?
We cannot. That is the whole point. The natural scale for the Higgs mass is at least the scale where new physics gets relevant. New physics at the TeV scale would put this closer to the observed Higgs mass.

There are models that avoid this issue completely, of course, like the relaxion and similar approaches.
 
  • #27
This problem has many facets, even without perturbation theory and the Planck scale. That's why I highlighted in my previous post a related problem, which is non-perturbative in nature, namely the QCD phase transition (one may also consider the weak symmetry breaking scale). While the hierarchy between Lambda_QCD and the cc is much smaller than the one related to the Planck scale, it still involves many orders of magnitude. How would early cosmology know that in the future there will be this non-perturbative phase transition (at relatively low energy) and arrange "beforehand" that after the transition the vacuum energy is practically zero?

It seems pretty obvious that this is an ill-posed question; rather, one would expect that there should be some self-tuning, continuously adapting mechanism that more or less automatically guarantees that at very large times of the universe the vacuum energy tends to zero, independent of the various phase transitions along the way. There are a few proposals of this kind around, but none seem particularly convincing.

So this *is* a most important open problem, despite perceived arrogance (btw there is a saying that arrogance is just competence as perceived from below). The Planck scale is not important; you may call it infinity for any practical purpose at low energies. The question remains how to reconcile this infinite number (or, to a lesser extreme, any other large number such as Lambda_QCD) with the measured "almost zero" value of the cc. To me it appears that this kind of question cannot be meaningfully addressed in the context of particle QFT, so the one-loop argument could be nil anyway (this often happens when particle phenomenologists try to address questions related to gravity using QFT methods). Phenomena like UV/IR mixing and the holographic nature of quantum gravity go far beyond naive particle QFT, and all the difficulties one encounters may be just an indication that one uses too limited a framework in the first place (much like trying to use standard QFT to address the information loss problem of black holes).
 
Likes nikkkom
  • #28
Just to clarify, I did not mean that any answers in this thread were arrogant. I meant the way the hierarchy problem was presented before people crashed against the LHC's disappointing results (from this point of view). What I can see, for example, is that the way people talk about this problem today is much different from what it was 15 years ago, or even right before the Higgs discovery.

About arrogance, I don't think I agree with the saying. I believe that if you know what you are talking about, you have no reason to become arrogant. When you do, usually the reason is precisely the opposite.

Finally, it's true, the LHC has just started and we don't know what will happen until 2035. But what I was referring to was the prospect of finding a huge number of new states, mainly based on hierarchy-related arguments, and that simply did not happen.
The LHC has reached practically its maximum energy and, while of course we still don't know, I think it is reasonable to expect that if it sees something, it will be deviations here and there. Which would be good, of course, but it is far from what people were talking about some years ago.

In any case, there is no gain in such discussions; we will just have to wait and see, and I will be happy to be proven wrong, if that turns out to be the case. I just wanted to stress that many things that are taken for granted and stated as if they were obvious are often not, and we had a proof of this with the hierarchy problem.

Cheers
 
  • #29
A naive scaling suggests that a "10 sigma" observation with 3000/fb could be a ~1 sigma effect today, which means it wouldn't be visible at all. Sure, it is unlikely that we'll find 50 new particles, but even a single particle would be amazing. A single clear violation of the SM elsewhere would be great as well.
LHCb has some curious results in that aspect...
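The back-of-the-envelope version of that scaling, assuming significance grows with the square root of the integrated luminosity and roughly 40/fb on tape today (the figure used a few posts below):

Code:
from math import sqrt

lumi_now = 40.0      # /fb, assumed current dataset
lumi_final = 3000.0  # /fb, design HL-LHC dataset

final_significance = 10.0  # the hypothetical "10 sigma" observation
today = final_significance * sqrt(lumi_now / lumi_final)  # naive sqrt(L) scaling
print(f"~{today:.1f} sigma today")  # ~1.2 sigma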
 
  • #30
Ok, to see why DimReg isn't an answer to the hierarchy problem, or alternatively why you have to talk about Standard Model cutoffs, may I recommend an excellent presentation by Nathaniel Craig at a recent IAS summer school that has conveniently appeared on the internet in the past week or so. For people who are serious about learning why the hierarchy problem keeps a lot of physicists up at night (and perhaps why we don't all suffer from mass delusion), I can't think of a better place to start.

Lecture one goes through a lot of what was discussed here in some detail and the following lectures are interesting as well, highly recommended.
 
Likes vanhees71
  • #31
It has been said several times in this thread already, but I'll say it myself too, in case it helps someone else get the message. The real hierarchy problem is not the problem that one number is small and the other number is big. The problem is that we have theories in which, to match experiment, we need an observed quantity to come out small, and the way we do that is to employ a fundamental parameter that is very big, but which is finetuned so as to be almost entirely canceled out by quantum effects.

Originally I thought Hossenfelder understood this, and was taking the attitude, so what? ... in an example of that hardboiled empiricism which says, to hell with preconceptions and common sense and human intuition; what matters in science is agreement with experiment, and these finetuned theories agree with experiment. She does actually say something like that, it's just that I am no longer sure whether she thinks finetuning means huge cancellations, or just small numbers.

Anyway, for an example of a paper which overtly says, let's forget concerns about finetuning and just see what works, see "The new minimal standard model" of 2004. It never swept the world and it's already out of date (the actual Higgs is a little lighter than the range it allows), but it's an example of what finetuning-be-damned looks like. In that regard, I would contrast it with the more recent "SMASH" model (which has had some fans here), because SMASH includes an axion in order to explain why strong CP violation is effectively zero, something that a willfully finetuned theory like NMSM can just posit.

For my part, I do think finetuning (serious finetuning, involving magic cancellations that wipe out many orders of magnitude) does need to be explained or avoided; and I am one of those people who is impressed by the asymptotic safety prediction of the Higgs mass. That seems to require a "desert" (no new physics) above the electroweak scale, and something unexpected at the quantum gravity scale. For example, despite my love of string theory, I can now appreciate the interest in unconventional models of micro black holes, if that would remove one cause of the need for finetuning.

Two sets of papers that I have started looking at, are Strumia et al on agravity, and Dubovsky et al on the T-Tbar deformation of CFTs. I don't know what agravity says about black holes, but it is apparently designed to allow what Gorsky et al call asymptotic security - Higgs mass goes to zero at the Planck scale, and Higgs mass beta function goes to zero at the Planck scale. The T-Tbar deformation, meanwhile, is said to have asymptotic fragility - whatever that is, I haven't got the gist of it yet. But apparently Dubovsky's work has caught Strumia's eye, so it goes on the list for consideration.
 
Likes Urs Schreiber
  • #32
I don't know about Sabine, I cannot speak for her, but on my side, I think I did my best to understand the arguments that people talk about. My point was an entirely different one.

There is no doubt that, if there were really a magic cancellation of 15 orders of magnitude, everyone should be bothered. And in my argument I was not trying to say that I can prove that such a cancellation does not take place. The problem is, I am not sure I believe that this cancellation is really inevitable. The problem, from my perspective, is that we insist on applying a mode of reasoning (based on QFT and EFT, if you want) to a problem of which we understand practically nothing, and the reason for this is that it involves gravity at very high energies and its quantization.

Now, of course, lacking anything better, we have to try to do something, and it is good that people think about these scenarios and ask "what would it be if...". The problem is that these things are stated as if they were carved in stone, namely:

"there MUST be something, because there IS such a huge cancellation, don't YOU see it, you fools ?".

I was arguing precisely against this attitude. I always found it pretentious and unscientific. Therefore I am happy that nature (at the LHC in this case) is showing us that things are not as obvious as they were presented, and that maybe we should cut down our egos a bit sometimes and try to think of something new. This is a much more exciting perspective to me than keeping on adding new gauge groups and new particles and new interactions that no one ever saw, to explain phenomena that no one ever saw, just because of a hunch, sorry... There are indeed things we see and don't understand, dark matter and dark energy for example, which indeed all have to do always and only (as far as we can say now) with gravity.

I understand you will not like the example, but to me this is so similar to Ptolemaic cosmology and epicycles. People had a framework that seemed to work, namely a system made of things rotating on circular orbits around the earth, and in their minds there was no other possible system. Therefore, of course, they tried to bend it and adapt it, always adding a new piece, a new circle, to try to reproduce observations. It was "clear and obvious" to everyone that if the system did not work, it meant that there HAD to be another epicycle - what else could it be?

Fortunately, at some point someone came along and changed the perspective, and then everything was simple and beautiful again, and none of those crazy (according to the Ptolemaic system "inevitable") epicycles was actually needed.

I know you all understand this story better than me, so I ask myself: how is it possible that our minds are so powerful that we can imagine gauge theories, symmetry breaking, the bending of space-time under the action of gravity, and still we cannot learn a bit of humility from stories that have repeated over and over again in our past? So in conclusion I say: fine, if we want to build models and present ideas using the framework that we have, trying to look for a hint of a solution, that is the best we can do now and it is completely legitimate. On my side, I know I am not the new Einstein and I probably will not be the one changing perspective, so all fine. But at least, let's give it the benefit of the doubt, ok?

Cheers
 
Likes cosmik debris and ohwilleke
  • #33
mfb said:
A naive scaling suggests that a "10 sigma" observation with 3000/fb could be a ~1 sigma effect today, which means it wouldn't be visible at all. Sure, it is unlikely that we'll find 50 new particles, but even a single particle would be amazing. A single clear violation of the SM elsewhere would be great as well.
LHCb has some curious results in that aspect...

It doesn't work like that for a variety of reasons. It turns out that once you get the LHC running at peak energy, there are huge spoilers very early on in the results and there are never cliffhangers.

One is that more data doesn't change your systematic error, it only changes your statistical error. And, statistical error has a non-linear relationship to the size of your sample.

Statistical error is basically a function of sqrt(1/sample size). To use round numbers, if we have 50/fb now, and will have 3000/fb then, the relative improvement in statistical error is sqrt(1/60), so the statistical margin of error might decrease by a factor of about 0.129.

But, the relationship between systematic error, statistical error and total error isn't linear either. If your systematic error is 1 and your statistical error is 1 (and currently a lot of results have comparable amounts of systematic and statistical error, so this crude estimate isn't too far off), your total error is about 1.4.

But, if your systematic error is 1 and your statistical error is 0.129, your total error is about 1.008.

So a 5 sigma result when it is all over ought to be a 3.6 sigma result now.
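The arithmetic behind that estimate, under the assumptions stated above (systematic error held fixed, statistical error scaling as sqrt(1/N)):

Code:
from math import sqrt

syst = 1.0                        # systematic error, assumed fixed (arbitrary units)
stat_now = 1.0                    # statistical error today, taken equal to the systematic one
stat_final = sqrt(50.0 / 3000.0)  # statistical error with 3000/fb, ~0.129

total_now = sqrt(syst**2 + stat_now**2)      # ~1.41
total_final = sqrt(syst**2 + stat_final**2)  # ~1.01

# a signal that ends at 5 sigma with the full dataset would sit today at roughly
print(f"{5 * total_final / total_now:.1f} sigma")  # ~3.6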

We have no 3 sigma new physics resonances yet, and that implies that we are almost certain not to discover any new particles at a "discovery" threshold by the time that the LHC is done.

In truth, it is even worse than that.

Why?

Because lots of SM observations are subject to loop effects that involve all possible ways that a phenomenon could happen, including any new particle that is out there. So, if there is a BSM phenomenon out there, you are not going to see just one experimental anomaly standing alone. You would see multiple experimental anomalies in measurements of different things. And, the kinds of measurements where you would be seeing multiple anomalies at CMS would be the same kinds of measurements where you would be seeing multiple anomalies at ATLAS.

With the possible exception of charged lepton non-universality, we haven't seen anything remotely like that at the LHC. If there were 50 new particles out there, we'd have hundreds of anomalies that we'd be seeing at a significance of about two-thirds of what we will see in a final result (in sigma units). Yet, this clearly isn't happening.
 
  • #34
@Sleuth: I don't see anyone claiming that there must be many orders of cancellation. That is the point: We don't expect such fine-tuning, so we have to figure out what is going on. The ideal result is an explanation for the Higgs mass that does not have excessive fine-tuning.

@ohwilleke: For most searches, the region of phase space accessible with 3000/fb is more than twice as large as the region accessible with 40/fb on a linear mass scale, and you also get a factor of nearly 10 in sensitivity in cross section for a given mass in most places. Per day of running, the early data is more exciting, but taking much more data helps a lot. Systematic errors are not an issue for most searches, and most of them also go down with luminosity because they are uncertainties relative to the signal strength (e. g. luminosity, identification efficiency and so on) or the fitting procedure. Very few searches have systematic uncertainties corresponding to a fixed uncertainty in the signal yield.
What you say is not directly wrong, but your estimate of the importance of these things is completely off.
ohwilleke said:
We have no 3 sigma new physics resonances yet, and that implies that we are almost certain not to discover any new particles at a "discovery" threshold by the time that the LHC is done.
That statement is completely wrong.
 
  • #35
mfb said:
That statement is completely wrong.

His whole point about systematics is completely wrong. More data means more control over systematics.
 
  • #36
Sleuth, you've said some harsh things. Forgive me if I am direct in replying.

You accused scientists of arrogance, twice, and then proceeded to misstate the situation. I find this...ironic.

Second, your statement that you weren't talking about anyone here when you said that is cowardly. You called the community arrogant - several times. You have members of that community here. Show some courage and either retract your statement, or stand by it - but saying that the community is arrogant, except for the handful of people who happen to post here, is a cowardly way out.

Third, you have it exactly backwards on cutoffs. Setting a cutoff to zero doesn't make it valid everywhere, it makes it valid nowhere. You need to set it to infinity to make it valid everywhere.

Fourth, whether or not the solution to the hierarchy problem will be uncovered by the LHC has no bearing at all on whether there is a hierarchy problem. It's possible that the hierarchy problem has an entirely different solution. I don't think it will turn out that way, but it could. In any event, the hierarchy problem would be there even had we never built the LHC, and it was known before we built it that a Higgs boson light enough to cause EWSB of the sort we see is problematic because of quantum corrections.

Fifth, you write:

Sleuth said:
, don't YOU see it, you fools ?".

in quotes. I challenge you to find that anywhere in the scientific literature. I think you just made that up. Sticking words in the mouths of your opponents is a shabby, shabby means of debate.

Finally, you write:

Sleuth said:
maybe we should cut down our egos a bit

Fair enough. You first.
 
Likes weirdoguy and king vitamin
  • #37
Vanadium, ok, I think the discussion is going completely in the wrong direction.

I was not referring to people on the forum, whether you believe it or not, but instead to an attitude of part of the community during talks, conferences, etc., and I don't feel like repeating this again. I don't see any reason to talk about cowards and everything else you said.

Probably I expressed my point of view with too much emphasis, my bad. I apologize if I offended anyone, it was not my intention to be arrogant.
I think the message is clear so I don't need to repeat it again. My ego is already way too small to cut it down even more, you don't know me.

Have fun
 
  • #38
Sleuth said:
I understand you will not like the example, but to me this is so similar to Ptolemaic cosmology and epicycles. People had a framework that seemed to work, namely a system made of things rotating on circular orbits around the earth, and in their minds there was no other possible system. Therefore, of course, they tried to bend it and adapt it, always adding a new piece, a new circle, to try to reproduce observations. It was "clear and obvious" to everyone that if the system did not work, it meant that there HAD to be another epicycle - what else could it be?

Fortunately, at some point someone came along and changed the perspective, and then everything was simple and beautiful again, and none of those crazy (according to the Ptolemaic system "inevitable") epicycles was actually needed.

I know you all understand this story better than me, so I ask myself: how is it possible that our minds are so powerful that we can imagine gauge theories, symmetry breaking, the bending of space-time under the action of gravity, and still we cannot learn a bit of humility from stories that have repeated over and over again in our past? So in conclusion I say: fine, if we want to build models and present ideas using the framework that we have, trying to look for a hint of a solution, that is the best we can do now and it is completely legitimate. On my side, I know I am not the new Einstein and I probably will not be the one changing perspective, so all fine. But at least, let's give it the benefit of the doubt, ok?

Cheers
Well, as you yourself say, history repeats itself; any theory or model that comes along will replace our current understanding, and those new theories will in turn be replaced, again and again; everything repeats itself endlessly.

Every physical theory is, in the end, false according to classical logic, which is the basic building block of every thought; take it as you like.
 
  • #39
I completely agree with you!
 
  • #40
(This is mostly in response to post #36, which might not be obvious as a couple of other posts were made to the thread in the meantime while I was writing this post.)

A Question Of Terminology - Trying to Defuse The Emotional Side Of The Discussion

Different Senses Of Words And The Emotional Baggage That Comes With Them

When one talks of scientists displaying "arrogance" in regard to the strong CP problem and hierarchy problem, one is using the word in an abstract and somewhat non-standard sense, as opposed to the usual sense of describing the personal internal emotions and attitude of a person towards other people (much as the word "natural" as used in regard to the hierarchy problem is being used in a technically defined sense and isn't being used in its common-sense meaning of "the way things actually are in nature").

There are some false friends in physics terminology, like "color" for QCD charge, which aren't troublesome or contentious because everyone is absolutely clear that the sense of the word being used is totally different from the common meaning of the word. But in the case of words like "arrogance" and "natural" in the context of the strong CP problem, hierarchy problem and related discussions, it's easier to inflame emotions because the technical meanings are closer to the common meanings and because this choice of words is intended, to some extent, to evoke some of the same heuristic reactions as the common meaning.

Still, the sense in which one uses the word "arrogance" in this kind of discussion is not quite the same as the one in which you use it to describe your coworker to a friend at a cocktail party. Someone who is "arrogant" in this sense may be a very polite, civil person who exudes humility and usually displays deferential conduct in their interactions with other people.

In the sense used in the strong CP problem/hierarchy problem, unlike its common sense usage, "arrogance" is a close synonym to "hubris" rather than to "impolite jerk".

"Arrogant" Is Intended To Characterize "Naturalness" Analysis As A Type Of Unscientific Category Error

What one means when saying that a scientist is "arrogant" in reference to strong CP problem/hierarchy problem type questions in physics is that someone is making suppositions about what the laws of nature and its physical constants ought to look like, in the form of a Bayesian distribution of priors about what those laws of nature/physical constants should be, without having any empirically supported or scientifically valid basis for choosing those Bayesian priors. In the eyes of critics of treating these issues as true "problems" in physics, attributing any scientific meaning to these Bayesian priors is a form of category error.

Critics see it as a category error because the set of all possible laws of nature and physical constants in the abstract (as opposed to in relation to different hypotheses formulated independently based upon empirical observation, such as trying to decide if GR or F(R) theory better describes reality) is not a scientifically valid matter upon which to generate Bayesian priors, because "possible laws of nature and physical constant values" are not things which have any reality in any space-time and hence can't be assigned a weight in any way that is meaningful or adds information to what we know from other means.

Buried Religious Subtexts

Both "arrogance" and "natural" in this scientific context is also dicey and emotional because both words carry with them a subtext of residual religious belief that has carried over linguistically even though anyone who is engaging in this debate has implicitly abandoned the religious worldview and metaphysical context in which this religious imprints into our language and usage arose.

In the case of the term "arrogance" or the synonym "presumptuous" or "hubris" the unacknowledged religious subtext is that it is not the place of a mere mortal to second guess the mind and motivations of a creator god. The terminology is actually somewhat self-undermining because the whole point of the critics is that framing these issues as choices to be made by a creator god or some abstracted amoral generic equivalent of a creator god, is not a scientifically value way of looking at the world, even thought these words derived from religious and interpersonal contexts perilously adopts the very frame of reference that critics are seeking to reject. George Lakoff would take the critics to the woodshed for this poor rhetorical choice that is self-undermining in a very subtle, unconscious way.

In the case of the term "natural" the unacknowledged religious or metaphysical subtext is that the laws of nature and its physical constants were established by an anthropic intelligent designer creator god who has certain known aesthetic preferences which are known to his devotees and that the way that the laws of the universe and its physical constants should be can be inferred in a meta fashion from the presumed stylistic preferences of a presumed creator god as computer programmer or game designer with a certain set of choices available much as someone playing a "create your own universe" game on a computer. Needless to say, we have no reliable scientific reason to think that our laws of the universe or the values of our physical constants really came into being in a context like that one.

But, the tendency of both sides of the debate to resort to language with religious baggage isn't entirely surprising, because at its heart this debate is one about the metaphysical assumptions of fundamental physics as a discipline, even if the question is rarely posed that way. Scientists who pursue an analysis of whether laws of physics and physical constants are "natural" rarely frame their analysis as a metaphysical one, even though the very heart of this analysis is implicitly metaphysical: it assumes that it is scientifically valid to think about a set of all possible laws of nature and all possible values of fundamental physical constants.

Using Greek Philosophy Terminology

To go really old school, the strong CP problem/hierarchy problem and related issues use a very philosophically Platonic mode of reasoning, and in modern science we have, in most other contexts, rejected Platonic modes of reasoning in favor of modes of reasoning rooted in Aristotelian worldviews, which posit that there is not a "real" world of "ideals" that exists separate and apart from the observable world. In one definition of modern Platonism (from the link earlier in this paragraph):

Platonism is the view that there exist such things as abstract objects — where an abstract object is an object that does not exist in space or time and which is therefore entirely non-physical and non-mental.

Critics would see this Platonic mode of reasoning as inconsistent with the scientific method.

Thus, in sum, what critics are trying to convey with the shorthand term "arrogance" is that the act of generating any Bayesian prior in this situation, regardless of its exact details, is not scientifically or logically justified.

Modern mathematicians do engage in Platonic modes of reasoning, but tend to be more clear about expressly and formally stating when they are relying upon axioms that rely upon no factual or empirically observed basis for their existence and instead are only conjectures from the mathematician. And, modern mathematicians are usually more upfront and clear than physicists exploring these kinds of Platonic problems that they are making no assertions about the physical validity of matters which they assume as axiomatic.

Are There Better Alternative Words?

The trouble is, that it is hard to come up with alternatives to the word "arrogant" in this context that don't have comparable emotional baggage.

For example, another strong synonym for the word "arrogant" in this physics context is "presumptuous", but "presumptuous" certainly has emotional baggage as well, although perhaps it is slightly less inflammatory, because while "arrogant" is a term that usually goes to one's conduct in interpersonal relationships in its common meaning, "presumptuous" does not have the same kind of social and interpersonal connotation.

If anyone could come up with a word that is a synonym for "arrogant" as used in the technical sense that it is being used by critics of this approach, with less baggage, perhaps it could help make the discussion less heated. I'm simply at a loss to come up with one at the moment.

An Aside

I'm laughing a bit at myself when the automatic censorship feature of PF was invoked when I posted this, but honestly, the censored version conveys the intended meaning just as well in this context.

Practical Consequences

The trouble is that whether formulating a Bayesian prior in this situation is "arrogant" or is scientifically justified in this situation is really the core question at the root of the entire debate. And, it turns out, there are a lot of practical, real world consequences to whether it is proper to formulate a Bayesian prior in this kind of situation or not.

Many hundreds of millions of dollars, maybe billions of dollars, of scientific funding decisions, and many thousands of big-picture career choices about what very smart people with PhDs in physics will devote their research agendas to, write dozens of academic papers about, and think about, hinge on this very fundamental, yes-or-no question which superficially seems very abstruse.

Resolution of this issue influences how project managers at major HEP experiments devote scarce resources to particular kinds of data analysis of collider experiments, and which questions will receive the highest priority to be answered. The invisible victims, when the critics lose and the naturalness proponents win, are all of the scientific hypotheses (and the scientists advocating them) that are generated without reference to concepts like naturalness and that have fewer resources allocated to them as a result of the pursuit of naturalness-oriented research agendas - agendas which would have been much less of a priority relative to the alternatives if the "arrogance"/"naturalness" question were resolved the other way.

If "naturalness" is a dead end as a fruitful way of generating hypotheses and prioritizing new scientific investigation, then this bad idea may have delayed breakthroughs with different views about the scientific method that could prove in the end to be more fruitful approaches to discovering new scientific knowledge by decades. Critics are increasingly vocal now precisely because the "naturalness" assumption has in hindsight proven to have not generated much useful guidance about which projects and hypotheses to pursue and indeed in hindsight appears to have been counterproductive over the time frame of the last forty years or so.

Of course, even a broken clock is right twice a day. Naturalness, whether or not it is a valid means of generating a scientific hypothesis to test and whether or not it is a valid means of prioritizing different research agendas, may sometimes lead to a useful result, even if not using this frame as a criterion for generating scientific hypotheses and prioritizing research agendas would have produced better results (something we won't know until we try the alternatives).

And, of course, the whole existence of the debate at all points out some blind spots in the common formulations of the "scientific method" itself, which is very clear and prescriptive about how to test and how to compare hypotheses that have been generated once they exist to test, but is very vague and provides little guidance about how to generate hypotheses and how to prioritize research possibilities in an optimal manner before we spend the time and money necessary to get in the business of hypothesis testing once we have a list of questions to test that is longer than the available resources to test or explore or generate them.

It is really most unfortunate that such high stakes depend upon a quite esoteric and subtle disagreement within the high energy physics community over whether this one very basic type of methodology is, or is not, a scientifically valid methodological tool.

My view

To put my cards on the table, I'll disclose my views on the merits.

As I've stated in other similar threads at PF, I think Sabine is spot on correct. I don't agree that the strong CP problem or the hierarchy problem or some kindred lines of inquiry are genuine, valid scientific "problems" and instead consider them to be on a par with numerology. This kind of analysis can be interesting, but it doesn't provide much insight (indeed, sometimes numerology is more informative than naturalness analysis which is really just a particular subspecies of numerology anyway).

In the same vein, I think that much of the work done based on the anthropic principle and the "multiverse" as a way of determining why the laws of the universe or the values of the physical constants are the way they are, is pseudo-science.

I think, as she does, that the scientific community has experienced a generation or two of group think that has led it astray, and that being right or wrong on this question has nothing to do with whether you are in the majority or not. It is not an issue that can be resolved democratically.

But, the main purpose of this particular post is not to get to the bottom of the correct answer, but to at least frame it in a way that adds light to the dispute, to let some of the unnecessary emotional air out of the bag, to try to prevent misunderstandings, and to focus on the core issue at stake since in my humble opinion this is really a disagreement over fundamentals and not primarily a question dependent upon technical details to any great degree.
 
Likes Sleuth
  • #41
mitchell porter said:
I am one of those people who is impressed by the asymptotic safety prediction of the Higgs mass. That seems to require a "desert" (no new physics) above the electroweak scale, and something unexpected at the quantum gravity scale.

mitchell porter, if it's not too much of a digression from the main topic at hand here, could you elaborate on this statement? Does the Higgs mass play nicely into theories of asymptotically safe gravity (which I assume is the "agravity" mentioned later in your post)?
 
  • #42
king vitamin said:
could you elaborate on this statement? Does the Higgs mass play nicely into theories of asymptotically safe gravity (which I assume is the "agravity" mentioned later in your post)?
The special feature of the Higgs and top masses is that they place the standard model at the edge of metastability. One interpretation of this (see section 5.1 here) is that the Higgs quartic and its beta function both go to zero at the Planck scale. A 2009 paper showed how to obtain this under the assumption of asymptotic safety of gravity.

Agravity is short for adimensional gravity, gravity with only dimensionless couplings. It resembles conformal gravity. See "Agravity" and "Agravity up to infinite energy". The idea seems to be, embed the standard model in a nongravitational field theory in which all couplings are asymptotically safe (Francisco Sannino's group works on this), then couple that field theory to dimensionless gravity so as to preserve those special Planck-scale boundary conditions without finetuning (see figure 1, page 15 of the second agravity paper).
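Written out, the Planck-scale boundary conditions being preserved are (schematically)
$$\lambda(M_{\rm Pl}) \approx 0, \qquad \beta_\lambda(M_{\rm Pl}) \approx 0,$$
for the Higgs quartic coupling ##\lambda##.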
 
Likes Urs Schreiber