# I Sabine on strong CP, hierarchy

1. Jul 4, 2017

### kodama

sabine hossenfelder

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?
...
And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!
http://backreaction.blogspot.com/

2. Jul 4, 2017

### strangerep

I'm not sure there's much point having a parallel debate about this here on PF, rather than in the comments section on Bee's blog (which Bee is more likely to read, and maybe respond). At last count there were already 94 comments there.

3. Jul 4, 2017

### Staff: Mentor

No matter which prior you use, the probability will be extremely small in all these cases. Which means explanations for a small mass gain orders of magnitude in terms of their relative likelihood.

If you measure a constant of nature to be 1.0000000000000000000146, you would expect some deeper reason why it has to be extremely close to 1 (but not exactly 1). It is not absolutely necessary that there is a reason, but it looks likely. The exact value is not the point, measuring it to more precision doesn't change the argument. The point is the vastly different probability for "so close to 1" and "somewhere between 0 and 2" for every reasonable probability distribution.
The strong CP problem is similar. Yes, it can happen that a phase between 0 and 2 pi is smaller than 0.000000000001 by accident. But do you really expect that?
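As a numerical aside (my own sketch, not part of the original post): with a flat prior on the angle, the probability of landing below such a tiny threshold is straightforward to compute.

```python
import math

# Flat (uniform) prior on an angle theta in [0, 2*pi).
# The probability that theta < eps is simply eps / (2*pi).
eps = 1e-12
p_small = eps / (2 * math.pi)
print(p_small)  # roughly 1.6e-13
```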

This is different from the curvature in cosmology. I don't have any problems with parameters that happen to be small where there is no natural scale for them. But the Higgs has a natural scale (the Planck mass), and the CP phase as well (it is an angle).

4. Jul 5, 2017

### Dr.AbeNikIanEdL

Hm, so 1.1234567891234567892146 (intended to be random digits) does not need explanation? It would seem that "so close to 1.1234567891234567892000" is as probable as "so close to 1" for some equally reasonable distribution?

5. Jul 5, 2017

### Orodruin

Staff Emeritus
Well, this is not exactly true as stated. You would need to add some qualifiers on what type of priors you consider "natural". I also must disagree with Sabine: I don't know exactly who she has been talking to, but I know several high-energy physicists who would happily tell you that fine-tuning may not be a problem depending on your assumptions (and some who are way too happy to fine-tune their models).

When it comes to the strong CP-problem and things like flavour mixing, there is a natural measure on those parameters, the Haar measure. Indeed, this would give a flat distribution on the circle for the strong CP phase.

With this approach you will get nowhere. The entire experimental analysis using frequentist statistics is based on ordering different experimental outcomes based on how "extreme" they would be within the model. In the case of the strong CP-phase, measuring a value close to zero is extreme in the sense of giving no CP-violation in contrast to the rest of the parameter space. You might, as Sabine says, consider a different prior distribution, but what should the prior distribution depend on if not your underlying model?

I agree with Sabine that you must pay attention to your prior assumptions, but many people will inherently assume some prior and the prior can very well be based on your model assumptions.

6. Jul 5, 2017

### Staff: Mentor

That would be odd as well, although in a different way. The 123456789123456789 would suggest our decimal system is special in some way.

You can find many examples of values that would be strange, but the largest part of the interval [0,2] is not close to some value we would consider "strange".
Let's be generous: allow every prior whose density varies by no more than 10 orders of magnitude across the interval [0,2], or, if it varies by more than that, favors smaller values (to give logarithms some love).
A prior that gives values like 1.0000000000000000000146 a 10^15 times higher probability than values around 1.362567697948747224894, or any other number like this, doesn't look natural to me.

Edit: Finite probabilities for discrete values like 1, 0, 1/2 and similar are fine as well.

Last edited: Jul 5, 2017
7. Jul 5, 2017

### Demystifier

The mass of an elephant is 9 orders of magnitude larger than the mass of an ant. Why? This is a hard hierarchy problem in biology that lacks any natural explanation.

8. Jul 5, 2017

### Staff: Mentor

That has no similarity to the question of the Higgs mass.
We have 13 orders of magnitude between the top and the neutrinos, but unlike the Higgs that doesn't require any fine-tuning.

9. Jul 5, 2017

Staff Emeritus
I think a closer analogy would be if the mass of an elephant is the same as the mass of a birch tree - to within a nanogram.

10. Jul 5, 2017

### Staff: Mentor

And birch trees are the only food elephants eat. And elephants are the only type of animals - to avoid look-elsewhere effects.

11. Jul 5, 2017

### Dr.AbeNikIanEdL

Ok, in @mfb's example it was not indicated that there would be something physically special about 1. In the case of the CP problem I get that the value of 0 at least is singled out as significant by the physics.

That's why I wrote "intended to be random digits"...

I am not sure about this. I guess you could find patterns in almost all finite sequences of digits, or at least in so many that it is not surprising if one turns up somewhere. So this seems to depend on the arbitrary decision of which kinds of patterns make a number count as interesting.

12. Jul 5, 2017

### Staff: Mentor

You can always find a "777" in the digit sequence or something like that, but that is nowhere close to the pattern 1.0000000000000000000246 has (the last digits here are arbitrary, the zeros are not). It does not matter what kind of patterns you include - the observed Higgs bare mass to Planck mass ratio will stand out for every collection that is somewhat reasonable.
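To put rough numbers on this (my own sketch; the digit counts are illustrative): a fixed run of 19 zeros has probability $10^{-19}$ for uniformly random digits, whereas a "777" turning up somewhere in a 20-digit expansion is quite likely by comparison.

```python
# Probability that the first 19 digits after the decimal point are all
# zero, versus the probability of "777" appearing at least once among
# 20 uniformly random digits (rough estimate that treats the 18
# overlapping 3-digit windows as independent).
k = 19
p_leading_zeros = 10.0 ** -k

n_digits = 20
n_windows = n_digits - 2
p_777_somewhere = 1 - (1 - 10.0 ** -3) ** n_windows

print(p_leading_zeros)   # 1e-19
print(p_777_somewhere)   # about 0.018
```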

13. Jul 5, 2017

Staff Emeritus
On the Strong CP problem, does she really believe that this is an accident? "Oh, the angle has to be something - why not less than 10^-10 radians?"

On the Higgs hierarchy problem, there is a distinction to make. If we simply had one number taking the value it does to ~36 decimal places, then I agree, it is not even clear how to talk about that in a probabilistic sense. But that's not what we have. We have two numbers, with two different physical sources, that are the same to 36 decimal places. This seems unlikely to be accidental.

14. Jul 5, 2017

### kodama

What about theories that suggest the Higgs's natural scale is at the Fermi scale?

15. Jul 5, 2017

### Haelfix

I wrote on her board already, and this was my point. It's one thing when you have an effective field theory and a cutoff scale, and you worry about what natural values dimensionless ratios must take. There you can definitely talk about Bayesian priors, and I agree with her that this is a fuzzy question (I also agree with others that almost any prior you pick disfavors a small value).

But this isn't what the core problem is. Just as Newton didn't worry about priors and small numbers in front of his law of gravity, when you write down a theory that explains the Higgs $m^2$ term, you have much bigger problems to worry about: namely, how to make perfectly ordinary contributions within your Planckian theory produce an incredibly tiny number without badly breaking some other part of the theory.

Stated this way the hierarchy problem is really about how difficult it is for a theorist to come up with a sensible theory in the UV.

16. Jul 5, 2017

### Staff: Mentor

They are in the group of "proposed solutions to the hierarchy problem".

17. Jul 5, 2017

### Dr.AbeNikIanEdL

But there is still only one value that enters the calculation of any observable, the resulting Higgs mass?

What exactly is meant by "explain" the $m^2$ term? Could you suggest some reference for further reading?

18. Jul 5, 2017

### Staff: Mentor

The square of the observable Higgs mass is the sum of a squared "bare" mass and loop corrections. These loop corrections depend on all the particles and on the scale where new physics comes in. If we just assume "Standard Model up to the Planck scale", we would expect the loop corrections to be of the order of the squared Planck mass: $m_{bare}^2 + c\, m_P^2 = m_{obs}^2$, where $m_{obs}$ is the mass we measure in the lab and $c$ is some numerical prefactor that depends on details not relevant here. There is no known relation between $m_{bare}$ and $c\, m_P$.
We know $m_{obs} = 125$ GeV and $m_P = 1.22 \times 10^{19}$ GeV. Plugging that in, we get something like 1502407283632643267022981020544340468283664 + (completely unrelated value) = 15625. The completely unrelated value has to match the other value extremely closely to get a result that is so much smaller. Possible? Sure. Likely? Nah.
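As a back-of-the-envelope check of the cancellation (my own sketch; the unknown prefactor $c$ is set to 1 for illustration, and exact integer arithmetic keeps all the digits):

```python
from fractions import Fraction

m_obs = 125                       # observed Higgs mass in GeV
m_P = 12_200_000_000_000_000_000  # Planck mass, 1.22e19 GeV
c = 1                             # unknown O(1) prefactor, set to 1 here

# m_bare^2 + c * m_P^2 = m_obs^2  =>  m_bare^2 = m_obs^2 - c * m_P^2
required_bare_sq = m_obs**2 - c * m_P**2

# Relative size of the leftover compared to the terms that cancel:
tuning = Fraction(m_obs**2, c * m_P**2)
print(float(tuning))  # about 1.05e-34: a ~34-digit cancellation
```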

19. Jul 5, 2017

### kodama

Is the conformal solution still viable, or has the LHC ruled it out?

20. Jul 5, 2017

### Dr.AbeNikIanEdL

I know. Still, $m_\mathrm{bare}$ is not a measurable value and should have no meaning at all. Only $m_\mathrm{obs}$ should be relevant for any physics and is the only value that enters any calculation?