Why is the hierarchy problem a problem?

Thread starter: Smattering
As I am not sure what is the most appropriate forum for this question, I am posting it here:

In another thread I came across a link to the Wikipedia article on the hierarchy problem:

https://en.wikipedia.org/wiki/Hierarchy_problem

Unfortunately, after reading the article several times, I am still not sure what the core of the problem actually is, and why it is a problem.

For one thing, there seem to be several different definitions of the problem that might be equivalent, but apparently I am lacking the required knowledge to understand why:

1. Why is the weak force ##10^{32}## times stronger than gravity?
2. Why is the fundamental value of some physical parameter vastly different from its effective value after renormalization?
3. Why is the Higgs boson so much lighter than the Planck mass?

Regarding 1: Why is this considered a problem?
Regarding 2: Renormalization is necessary in QFT, right? So to what QFT is this actually referring?
Regarding 3: What has the Planck mass to do with this?

Can someone help me to get a better idea of what this is all about?

Robert
 
Hi Smattering:

I have been interested to see what the experts would say about this, so I am probably as disappointed as you are that there have been no answers so far. I will give some thoughts about the questions in the hope that my foolish ideas might provoke a smart answer.

In general, "WHY" questions about physics are frequently unanswerable. To go along with the "Hierarchy Problem", how about the following:
1. Why is the EM force about ##10^{36}## times stronger than gravity?
2. Why is the EM force about ##10^{4}## times stronger than the weak force?
3. Why doesn't either (1) or (2) deserve a problem name like the "Hierarchy Problem"? That is, why is the "Hierarchy Problem" more of a problem than (1) or (2)?

BTW, I did not understand your
Regarding 2: Renormalization is necessary in QFT, right? So to what QFT is this actually referring?​

Regards,
Buzz
 
I agree that why questions tend to be unphysical when referring to why nature behaves in a certain way. But in this case the why question does not refer to nature, but rather to the physicists who feel that there is a hierarchy problem. Physicists are people, and unlike nature, people are supposed to have motives.

Edit: Initially, I thought you were referring to my own why question from the thread title. But after re-reading your post, I now think that you were referring to the definition of the hierarchy problem, which is itself a why question. Regarding this, I agree: I do not understand either how a why question about the values of some natural constants can even be physically meaningful.
 
Maybe this will get more responses now that it's been moved to the particle physics forum.
 
Buzz Bloom said:
BTW, I did not understand your
Regarding 2: Renormalization is necessary in QFT, right? So to what QFT is this actually referring?​

Hi Buzz,

According to the Wikipedia article, there is some physical parameter whose fundamental value is several orders of magnitude higher than its effective value after renormalization. I thought that this parameter must have something to do with gravitation, but renormalization is closely related to quantum field theory, and there is no generally accepted QFT of gravity.
 
Hi Smattering:

Thanks for your answer. I confess my too-quick look carelessly missed the discussion of renormalization in the Wikipedia article.

Now that I actually tried to read the article, I found it way over my head, especially
such quantum corrections are usually power-law divergent, which means that the shortest-distance physics are most important.​
I think I now get that the "Hierarchy Problem" is a problem because
Typically the renormalized value of parameters are close to their fundamental values​
and for the weak-gravity ratio this is not the case, and apparently no one has an acceptable explanation for this anomaly.

Regards,
Buzz
 
jtbell said:
Maybe this will get more responses now that it's been moved to the particle physics forum.
Unfortunately that doesn't create an alert.

All those hierarchy problems are "just" things that look odd. Sure, a parameter can be exactly 1.000000000000000000000344, and the theory works, but without a deeper theory that predicts this value it looks odd. If the parameter does not have to be 1, and can be anything, why is it so close to 1 but not exactly 1? It is expected that some "more fundamental" theory will lead to some explanation of factors like that.
 
mfb said:
Unfortunately that doesn't create an alert.

All those hierarchy problems are "just" things that look odd. Sure, a parameter can be exactly 1.000000000000000000000344, and the theory works, but without a deeper theory that predicts this value it looks odd. If the parameter does not have to be 1, and can be anything, why is it so close to 1 but not exactly 1? It is expected that some "more fundamental" theory will lead to some explanation of factors like that.

Hm ... but this sounds a bit like the layman's argument that winning a lottery with the numbers "1 2 3 4 5 6" is less likely than winning it with more random-looking numbers.

Having spent quite some time on statistical pattern recognition at university, I can certainly understand that fine tuning can be a problem due to the risk of overfitting your model to the existing observations such that it will not generalize well to new observations. But fine tuning (as I understand the term) does not refer to parameter values differing in magnitude. Rather, it means that very small changes to a parameter's value lead to huge differences in the result.
 
Smattering said:
Hm ... but this sounds a bit like the layman's argument that winning a lottery with the numbers "1 2 3 4 5 6" is less likely than winning it with more random-looking numbers.
It is not, but if there is only one drawing ever and it gives 1 2 3 4 5 6 in that order, it is still a surprising result. It makes you wonder if the drawing was truly random or if someone simply coded "give me the smallest number not yet drawn" and ran that to generate the numbers.

But fine tuning (as I understand the term) does not refer to parameter values differing in magnitude. Rather, it means that very small changes to a parameter's value lead to huge differences in the result.
That is related to the hierarchy problem(s). Changing the parameter 1.000000000000000000000344 to 1.000000000000000000000484 (made-up numbers) could have a huge effect.
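As a quick numerical aside (a toy illustration using mfb's made-up digits, not anything from the actual Higgs calculation): if the observable is the tiny difference between such a parameter and 1, a change in the 22nd decimal place shifts that observable by roughly 40%.

```python
# Toy illustration with the made-up digits from the post (not real physics):
# if an observable is the small difference between a parameter and 1, a change
# in the 22nd decimal place of the parameter changes the observable by ~40%.
# Decimal is used because ordinary floats cannot resolve 22 decimal places.
from decimal import Decimal, getcontext

getcontext().prec = 40  # plenty of precision for these digits

a = Decimal("1.000000000000000000000344")  # mfb's made-up parameter
b = Decimal("1.000000000000000000000484")  # same parameter, 22nd digit changed

print(a - Decimal(1))  # 3.44E-22
print(b - Decimal(1))  # 4.84E-22  -> about 40% larger
```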
 
  • #10
mfb said:
It is not, but if there is only one drawing ever and it gives 1 2 3 4 5 6 in that order, it is still a surprising result. It makes you wonder if the drawing was truly random or if someone simply coded "give me the smallest number not yet drawn" and ran that to generate the numbers.

Having *your* numbers drawn in a lottery is always a surprising result, isn't it? If you told me to bet on "23 41 17 34 3 8", I would find it equally surprising if these numbers were drawn in exactly the sequence you predicted.

That is related to the hierarchy problem(s). Changing the parameter 1.000000000000000000000344 to 1.000000000000000000000484 (made-up numbers) could have a huge effect.

I can understand why this would be an issue. But the Wikipedia article implied to me that the hierarchy problem is not so much about the fine tuning of single parameters, but rather about the differing value scales of two or more parameters.
 
  • #11
Smattering said:
Having *your* numbers drawn in a lottery is always a surprising result, isn't it? If you told me to bet on "23 41 17 34 3 8", I would find it equally surprising if these numbers were drawn in exactly the sequence you predicted.
If you are the one running the lottery (something that makes you unique - we have only one universe to observe) it would be surprising if you win your own lottery, independently of the numbers. Sure, it can happen by chance, but manipulation is certainly a relevant alternative hypothesis.

I can understand why this would be an issue. But the Wikipedia article implied to me that the hierarchy problem is not so much about the fine tuning of single parameters, but rather about the differing value scales of two or more parameters.
Those concepts are related. If a mass value can be anything from 0 to the Planck scale, and has to be subtracted from a value that should be around the Planck scale, it is surprising if the difference is orders of magnitude below the Planck scale. That's the factor 1.00000000000000000000463 I mentioned earlier (again, random digits).
 
  • #12
mfb said:
Those concepts are related. If a mass value can be anything from 0 to the Planck scale, and has to be subtracted from a value that should be around the Planck scale, it is surprising if the difference is orders of magnitude below the Planck scale. That's the factor 1.00000000000000000000463 I mentioned earlier (again, random digits).

Sorry, but I do not understand what you are referring to. Can you please explain this in more detail?
 
  • #13
The Higgs mass is the sum (or difference, depending on sign conventions) of two unrelated terms:
- its bare mass, which can take any value
- radiative corrections, which (in the absence of new physics below the Planck scale) should be of the order of the Planck mass

The Higgs is 17 orders of magnitude lighter than the Planck mass, so in the Standard Model the two terms have to be very close together to create such a huge difference between Planck scale and Higgs mass.

Supersymmetry and other models lead to smaller radiative corrections, so the necessary amount of fine-tuning goes down.
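As a hedged sketch of how that statement is usually written down (a standard schematic form, not taken from mfb's post; here ##c## lumps together loop factors and couplings of order one):
$$m_{H,\text{phys}}^2 = m_{\text{bare}}^2 + \delta m^2, \qquad \delta m^2 \sim \frac{c}{16\pi^2}\,\Lambda^2,$$
so with the cutoff ##\Lambda## taken near the Planck scale, ##\sim 10^{19}## GeV, the two terms on the right must cancel to roughly one part in ##(\Lambda/m_H)^2 \sim 10^{34}## to leave ##m_{H,\text{phys}} \approx 125## GeV.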
 
  • #14
Maybe it's worth putting some numbers in (courtesy of Michael Dine): ##m_H^2## = 36,127,890,984,789,307,394,520,932,878,928,933,023 - 36,127,890,984,789,307,394,520,932,878,928,917,398 ##\mathrm{GeV}^2##.

It is entirely possible that the two numbers come from completely unrelated processes and their closeness is purely coincidental. Just like it's possible to walk into a room and find all the pencils perfectly balanced on their points. But does that seem likely to you?
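If you want to check the quoted arithmetic yourself, Python's exact integers make it a two-liner (my own check, not part of the talk):

```python
# Checking Vanadium 50's quoted numbers with exact integer arithmetic:
# the two 38-digit values differ by 15,625 GeV^2 = (125 GeV)^2, i.e. roughly
# the observed Higgs mass squared.
a = 36_127_890_984_789_307_394_520_932_878_928_933_023
b = 36_127_890_984_789_307_394_520_932_878_928_917_398

diff = a - b
print(diff)          # 15625
print(diff ** 0.5)   # 125.0  (GeV)
```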
 
  • #15
I have to say that I am not at all impressed with the presumptuous premise of the hierarchy problem and a number of other "problems" of modern physics such as the problem of matter-antimatter asymmetry in the universe, the problem that the cosmological constant has the value that it does, and the "problem" that the strong force Lagrangian doesn't have a CP violating term even though a generalized version of the equation has a very obvious place to put one. Nature is what it is and there is no particular reason that its fundamental constants should have any particular value, which is what "fundamental" means.

If a physical constant value present in Nature looks unnatural, in my mind this is evidence that you're looking at the situation in the wrong way. But it isn't necessarily a hint that you need to devise new laws of Nature that make physical constant values seem "natural" by hand.

Supersymmetry is a particularly brute force solution to the hierarchy problem that could probably be answered with additional laws of Nature (e.g. the sum of the square of the masses of the fundamental fermions equals the sum of the square of the masses of the fundamental bosons, which is true experimentally to within all applicable margins of error) that are more subtle and do not require a host of new particles that have not been observed.
 
  • #16
I tried to find an example from the history of physics where an apparent "mysterious" fine tuning of free parameters of a theory was resolved by a newer theory, but I'm not that good with history of science. Anyone?
 
  • #17
  • #18
Not physics, but evolution explained why so many different species exist, all "fine-tuned" to their specific environment, and all sortable into groups of very similar species.

If you think of atomic energy levels as independent free parameters, then quantum mechanics explained their relation (e.g. ##1/n^2## for hydrogen-like atoms).
We don't see them as independent parameters today because we found a theory predicting fixed relations between them.
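A tiny concrete version of the hydrogen example (textbook numbers, nothing specific to this thread): what could have been treated as a long list of independent "parameters" collapses to a single constant once the ##1/n^2## law is known.

```python
# Hydrogen-like energy levels: a whole table of seemingly independent values
# reduces to one constant once E_n = -13.6 eV / n^2 is known (approximate
# textbook value of the hydrogen ground-state binding energy).
RYDBERG_EV = 13.6

for n in range(1, 6):
    print(f"n = {n}: E_n = {-RYDBERG_EV / n**2:6.2f} eV")
```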

The orbits of the planets all follow Kepler's law - could look like fine-tuning, but Newton's theory of gravity gave a simple explanation for it.
 
  • #19
mfb said:
If you think of atomic energy levels as independent free parameters, then quantum mechanics explained their relation (e.g. ##1/n^2## for hydrogen-like atoms).
We don't see them as independent parameters today because we found a theory predicting fixed relations between them.

This is not quite the type of example I was looking for. There was no "deferent and epicycle" atomic theory predicting atomic energy levels before we got our current one.
 
  • #20
mfb said:
Not physics, but evolution explained why so many different species exist, all "fine-tuned" to their specific environment, and all sortable into groups of very similar species.

But the explanation that evolution can offer here is not any sophisticated mechanism, but simply selection bias.
 
  • #21
nikkkom said:
This is not quite the type of example I was looking for. There was no "deferent and epicycle" atomic theory predicting atomic energy levels before we got our current one.
That is exactly the point! The Standard Model does not make a prediction for the bare Higgs mass, which needs its fine-tuned value in this model.
Smattering said:
But the explanation that evolution can offer here is not any sophisticated mechanism, but simply selection bias.
An easy explanation for something that looked mysterious before. In other words, a good theory.
 
  • #22
Sorry, but I still do not get the point.

mfb said:
The Higgs mass is the sum (or difference, depending on sign conventions) of two unrelated terms:
- its bare mass, which can take any value
- radiative corrections, which (in the absence of new physics below the Planck scale) should be of the order of the Planck mass

Does this justify any expectation about the relative values that these two terms should have?

The Higgs is 17 orders of magnitude lighter than the Planck mass, so in the Standard Model the two terms have to be very close together to create such a huge difference between Planck scale and Higgs mass.

Why would it seem more surprising if they were close together than if they were far away from each other?

Supersymmetry and other models lead to smaller radiative corrections, so the necessary amount of fine-tuning goes down.

I cannot agree so far. Even if the radiative corrections were smaller, the other term would still have to match exactly.
 
  • #23
Vanadium 50 said:
Maybe it's worth putting some numbers in (courtesy of Michael Dine): ##m_H^2## = 36,127,890,984,789,307,394,520,932,878,928,933,023 - 36,127,890,984,789,307,394,520,932,878,928,917,398 ##\mathrm{GeV}^2##.

It is entirely possible that the two numbers come from completely unrelated processes and their closeness is purely coincidental.

I once came across some crackpot site where the site owner discovered that if you perform some kind of mathematical operation on a specific fundamental constant (unfortunately, I do not remember which one anymore), you get extremely close to the value of another (seemingly unrelated) fundamental constant. And then he built a complete crackpot theory on that surprising result.

Just like it's possible to walk into a room and find all the pencils perfectly balanced on their points.

I cannot see any commonality between this example and the situation described above.
 
  • #24
Smattering said:
Does this justify any expectation about the relative values that these two terms should have?
No.
Smattering said:
Why would it seem more surprising if they were close together than if they were far away from each other?
See Vanadium's numbers. There are ##10^{17}## more numbers far away from each other than numbers close together.
Smattering said:
I cannot agree so far. Even if the radiative corrections were smaller, the other term still would have to match exactly.
Yes, but you need a lower amount of fine-tuning. It is reasonable to add two 4-digit numbers (one of them negative) and get a 3-digit result. That is not very unlikely.
It is surprising that adding two 19-digit numbers gives a 3-digit number.
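A quick sanity check of that digit-counting intuition (my own toy simulation, not part of mfb's argument): draw pairs of random n-digit integers and see how often their difference fits in 3 digits.

```python
# Toy check of the digit-counting argument: how often does the difference of
# two random n-digit integers have at most 3 digits?
import random

def frac_small_difference(n_digits, trials=100_000):
    lo, hi = 10**(n_digits - 1), 10**n_digits - 1
    hits = sum(abs(random.randint(lo, hi) - random.randint(lo, hi)) < 1000
               for _ in range(trials))
    return hits / trials

print("4-digit pairs :", frac_small_difference(4))    # ~0.2 -> unremarkable
print("19-digit pairs:", frac_small_difference(19))   # ~0.0 -> essentially never
```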

Smattering said:
I once came across some crackpot site where the site owner discovered that if you perform some kind of mathematical operation on a specific fundamental constant (unfortunately, I do not remember which one anymore), you get extremely close to the value of another (seemingly unrelated) fundamental constant. And then he built a complete crackpot theory on that surprising result.
It was not an agreement with a precision of 17 digits, and with unrelated constants you have thousands of possible ways to combine them (even more if you ignore units). The Higgs has a single combination that happens to match with a precision of 17 digits in the Standard Model.

And I repeat, because I don't think this point came across clearly: the SM works fine with that. There is no fundamental problem with such fine-tuning. It just does not look very natural.
 
  • #25
Vanadium 50 said:
Maybe it's worth putting some numbers in (courtesy of Michael Dine): ##m_H^2## = 36,127,890,984,789,307,394,520,932,878,928,933,023 - 36,127,890,984,789,307,394,520,932,878,928,917,398 ##\mathrm{GeV}^2##.
Hi Vanadium:

Can you cite a reference for Michael Dine's result that you quoted? If not, can you identify for me the individual variables whose values differ to give the square of the Higgs mass? I get from a mfb quote that they might be
- its bare mass, which can take any value
- radiative corrections, which (in the absence of new physics below the Planck scale) should be of the order of the Planck mass​
If this is correct, can you explain (at a summary level) how the values for these two variables are derived?

Regards,
Buzz
 
  • #26
Michael gave it in a talk somewhere; I copied it down there. But the idea is that the first term is the Higgs bare mass and the second term is the radiative corrections to the mass. Neither is calculable today; all we know is the rough size and the difference between those numbers.
 
  • #27
Vanadium 50 said:
Neither is calculable today
Hi Vanadium:

Does this mean that the values were once calculable, but now they aren't? If so, can you explain why that might be so? If not, please clarify?

Regards,
Buzz
 
  • #28
A new (yet unformulated) theory might allow us to calculate them in the future. That is the "today" aspect.
 
  • #29
mfb said:
A new (yet unformulated) theory might allow us to calculate them in the future. That is the "today" aspect.
Hi mfb:

If that is the case, where did Michael Dine get his numbers? Were they just made up to make a point about how the Higgs mass seems to have a "magical" quality?

Regards,
Buzz
 
  • #30
All those digits? Sure. We know the magnitude of the number, but not the precise value.
Googling the number directly leads to Michael's talk.
 
  • #31
mfb said:
All those digits? Sure. We know the magnitude of the number, but not the precise value.
Googling the number directly leads to Michael's talk
Hi mfb:

Thanks for the link and the Google hint. Both PDF files look interesting and difficult. It will no doubt take me a while to digest whatever I can get out of them.

Regards,
Buzz
 
  • #32
Does anybody know of a reference that gives a list of fine-tuning examples in science in general? :)
 
  • #33
The hierarchy problem is a fine-tuning problem within the Wilsonian framework that the QFTs we use are effective field theories. If they are not effective field theories, but correct and complete quantum field theories, then there is no hierarchy problem.

http://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/

"Wilson’s mastery of quantum field theory led him to another crucial insight in the 1970s which has profoundly influenced physics in the decades since — he denigrated elementary scalar fields as unnatural. I learned about this powerful idea from an inspiring 1979 paper not by Wilson, but by Lenny Susskind. That paper includes a telltale acknowledgment: “I would like to thank K. Wilson for explaining the reasons why scalar fields require unnatural adjustments of bare constants.”

Susskind, channeling Wilson, clearly explains a glaring flaw in the standard model of particle physics — ensuring that the Higgs boson mass is much lighter than the Planck (i.e., cutoff) scale requires an exquisitely careful tuning of the theory’s bare parameters. Susskind proposed to banish the Higgs boson in favor of Technicolor, a new strong interaction responsible for breaking the electroweak gauge symmetry, an idea I found compelling at the time. Technicolor fell into disfavor because it turned out to be hard to build fully realistic models, but Wilson’s complaint about elementary scalars continued to drive the quest for new physics beyond the standard model, and in particular bolstered the hope that low-energy supersymmetry (which eases the fine tuning problem) will be discovered at the Large Hadron Collider. Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?"
 
  • #35
A handwavy way to think about it is that if the theories we have are not the final theory, then fine tuning of our crummy, wrong theory is indicating something about the high-energy theory that is peeping through to low energies. This is why fine tuning is often argued to indicate new physics.
 
  • #36
atyy said:
Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?"

Is naturalness anything other than a purely aesthetic argument? Why should we expect nature to be elegant?
 
  • #37
Smattering said:
Is naturalness anything other than a purely aesthetic argument?

No, it is not just aesthetics. We have examples of theories with fine tuning being superseded by theories without it.
Give me a case where the opposite happened, if you know one.
 
  • #38
Smattering said:
Is naturalness anything other than a purely aesthetic argument? Why should we expect nature to be elegant?

There is some aesthetics to it, but not the one you wrongly believe motivates it. The aesthetics is that we assume that our theory is not the final theory. Within that framework, naturalness can be technically phrased. See slide 8 of http://www.slac.stanford.edu/econf/C040802/lec_notes/Lykken/Lykken_web.pdf.
 
  • #39
Well, we know that the Standard Model is not complete. It does not include gravity, its options to account for dark matter are at best questionable, it tells us nothing about dark energy or inflation, and even if we ignore gravity we would have the Landau pole as a problem at even higher energies.
 
  • #40
One example of concerns about fine tuning leading to fruitful scientific theories would be the anomalous magnetic dipole moment of the electron, aka "g-2" (i.e., why is the magnetic dipole moment of the electron, "g", not exactly 2 but instead some tiny, very precisely measured amount greater than two).

It turns out that this slight discrepancy arises in QED from interactions with virtual photons, and that if your theory doesn't allow for virtual photons (and other odd assumptions of path integrals, like the inclusion of photon paths at slightly more and slightly less than the speed of light "c", even though those paths are highly suppressed), you get an answer different from the physical one. The notion would be that fine tuning, if it is observed, must exist because we are missing something of the same sort of mathematical character as the inclusion of virtual photons in our theory, which is why our expectations are so off. The search for why g-2 was fine tuned produced theoretical progress. Now, I can't say that the intellectual history of that discovery really establishes that fine tuning was the insight that made the difference in figuring out that virtual loops needed to be considered in QED (and the rest of the Standard Model as well), but it is a historical example that captures the notion.
 
  • #41
Anomalous magnetic moment in QED is not an example of fine tuning.

An example of fine tuning would be a theory where two large free parameters interact (subtracted, divided, etc) to give a vastly smaller number.

Before QED, no theory at all explained the electron's anomalous magnetic moment. (I'm not sure we even had a theory of any kind predicting the value of the electron's magnetic moment, anomalous or not.)
 
  • #42
Dirac's relativistic quantum mechanics "predicted" a value of exactly 2. If you modify the theory to get a free parameter, this parameter seems to be fine-tuned to nearly agree with Dirac's prediction.
 
  • #43
I don't think the magnetic moment is a good example of fine tuning. As said above, it does not arise by a near cancellation. The "if you replace it with a free parameter" argument can be applied to absolutely every constant and every measurement, so it provides no real insight.
 
  • #44
A classic example would be the self-energy of the electron in classical EM:
$$M_{\text{observed}}=M_{\text{bare}}+\frac{e^{2}}{4\pi \epsilon_{0} r_{e}}$$
Experiment sets a limit on the size ##r_e## such that the self-energy would be greater than 10 GeV, and thus the bare term must be chosen to cancel the self-energy term, with a fine-tuning at about the ##\mathcal{O}(0.001)## level:
$$0.511\ \text{MeV} = -9999.489\ \text{MeV} + 10000.000\ \text{MeV}$$
Of course, quantum mechanics comes to the rescue to 'explain' this fine-tuning by adding the positron and the related quantum electrodynamic symmetry. This picture is roughly the conceptual equivalent of explaining the Higgs mass by something like Technicolor.
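Plugging rough numbers into that classical example (my own back-of-envelope, using ##e^2/4\pi\epsilon_0 = \alpha\hbar c \approx 1.44\ \text{MeV·fm}##):

```python
# Back-of-envelope for the classical self-energy example above (my numbers):
# E_self = e^2 / (4*pi*eps0*r) = alpha*hbar*c / r.
ALPHA_HBARC_MEV_FM = 1.44   # e^2/(4*pi*eps0) expressed in MeV*fm (approx.)
M_OBSERVED_MEV = 0.511      # physical electron mass
E_SELF_MEV = 10_000.0       # take the self-energy to be 10 GeV, as in the post

r_fm = ALPHA_HBARC_MEV_FM / E_SELF_MEV     # electron "size" giving that self-energy
m_bare_mev = M_OBSERVED_MEV - E_SELF_MEV   # bare mass needed to cancel it

print(f"r for a 10 GeV self-energy: {r_fm:.1e} fm")         # ~1.4e-04 fm
print(f"required bare mass        : {m_bare_mev:.3f} MeV")  # -9999.489 MeV
```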
 
  • #45
Vanadium 50 said:
I don't think the magnetic moment is a good example of fine tuning. As said above, it does not arise by a near cancellation. The "if you replace it with a free parameter" argument can be applied to absolutely every constant and every measurement, so it provides no real insight.

It actually is a good example of fine tuning, because Dirac left out a term for "simplicity." The so-called "Pauli term"
$$\kappa\,[\gamma_{\mu},\gamma_{\nu}]F^{\mu \nu} \psi$$
can be added to the Dirac equation for arbitrary ##\kappa##, which makes the magnetic moment an adjustable parameter while satisfying all necessary symmetries. It's actually Wilson who saves us here: this term is non-renormalizable, so it's irrelevant at low energies, giving us Dirac's universal result. It's not a near-cancellation scenario like the cosmological constant or Higgs mass, but Dirac did "fine-tune" ##\kappa## to zero.
 