Why is the hierarchy problem a problem?


mfb
This is not quite the type of example I was looking for. There was no "deferent and epicycle" atomic theory predicting atomic energy levels before we arrived at our current one.
That is exactly the point! The Standard Model does not predict the bare Higgs mass, which needs its fine-tuned value in this model.
But the explanation that evolution can offer here is not any sophisticated mechanism, but simply selection bias.
An easy explanation for something that looked mysterious before. In other words, a good theory.
 
Sorry, but I still do not get the point.

The Higgs mass is the sum (or difference, depending on sign conventions) of two unrelated terms:
- its bare mass, which can take any value
- radiative corrections, which (in the absence of new physics below the Planck scale) should be of the order of the Planck mass
Does this justify any expectation about the relative values that these two terms should have?
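Schematically, and up to sign conventions (a standard illustration; the exact loop factor depends on the couplings, so take this as a sketch rather than an exact formula):

$$m_H^2 = m_\text{bare}^2 + \delta m^2, \qquad \delta m^2 \sim \frac{\lambda}{16\pi^2}\,\Lambda^2 \sim M_\text{Pl}^2 \;\text{ for a cutoff } \Lambda \sim M_\text{Pl},$$

so obtaining the observed ##m_H \approx 125## GeV requires ##m_\text{bare}^2## and ##\delta m^2## to cancel to roughly one part in ##10^{34}##.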

The Higgs is 17 orders of magnitude lighter than the Planck mass, so in the Standard Model the two terms have to be very close together to create such a huge difference between Planck scale and Higgs mass.
Why would it seem more surprising if they were close together than if they were far away from each other?

Supersymmetry and other models lead to smaller radiative corrections, so the necessary amount of fine-tuning goes down.
I still cannot agree. Even if the radiative corrections were smaller, the other term would still have to match exactly.
 
Maybe it's worth putting some numbers in (courtesy of Michael Dine): m_H^2 = 36,127,890,984,789,307,394,520,932,878,928,933,023 - 36,127,890,984,789,307,394,520,932,878,928,917,398 GeV^2.
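As a quick sanity check of those numbers: their difference is exactly 15,625 GeV^2 = (125 GeV)^2, the observed Higgs mass squared. A minimal Python verification:

```python
# Dine's two 38-digit numbers (units: GeV^2); the first is the bare mass
# squared, the second the radiative corrections (see below in the thread).
bare = 36_127_890_984_789_307_394_520_932_878_928_933_023
corrections = 36_127_890_984_789_307_394_520_932_878_928_917_398

m_h_squared = bare - corrections
print(m_h_squared)         # 15625 (GeV^2)
print(m_h_squared ** 0.5)  # 125.0 -> the observed Higgs mass in GeV
```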

It is entirely possible that the two numbers come from completely unrelated processes and their closeness is purely coincidental.
I once came across some crackpot site where the site owner discovered that if you perform some kind of mathematical operation on a specific fundamental constant (unfortunately, I do not remember which one anymore) you get extremely close to the value of another (seemingly unrelated) fundamental constant. And then he built a complete crackpot theory on that surprising result.

Just like it's possible to walk into a room and find all the pencils perfectly balanced on their points.
I cannot see any commonality between this example and the situation described above.
 
mfb
Does this justify any expectation about the relative values that these two terms should have?
No.
Why would it seem more surprising if they were close together than if they were far away from each other?
See Vanadium's numbers. There are 10^17 times more numbers far away from each other than close together.
I still cannot agree. Even if the radiative corrections were smaller, the other term would still have to match exactly.
Yes, but you need less fine-tuning. It is reasonable to add two 4-digit numbers (one of them negative) and get a 3-digit result. That is not very unlikely.
It is surprising that adding two 19-digit numbers gives a 3-digit number.
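To put a number on that intuition, here is a minimal Monte Carlo sketch (my own illustration, assuming uniformly random values, nothing more): it estimates how often two random N-digit numbers cancel down to at most 3 digits. For N = 4 the rate is roughly 20%; for N = 19 the expected rate is about 2e-16, so it essentially never happens by chance.

```python
import random

def cancellation_rate(n_digits, result_digits=3, trials=100_000):
    """Fraction of random pairs of n_digits-digit numbers whose
    difference has at most result_digits digits."""
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    threshold = 10 ** result_digits
    hits = sum(
        abs(random.randint(lo, hi) - random.randint(lo, hi)) < threshold
        for _ in range(trials)
    )
    return hits / trials

print(cancellation_rate(4))   # ~0.21: two 4-digit numbers often nearly cancel
print(cancellation_rate(19))  # 0.0 in practice: expected rate ~2e-16
```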

I once came across some crackpot site where the site owner discovered that if you perform some kind of mathematical operation on a specific fundamental constant (unfortunately, I do not remember which one anymore) you get extremely close to the value of another (seemingly unrelated) fundamental constant. And then he built a complete crackpot theory on that surprising result.
It was not an agreement with a precision of 17 digits, and with unrelated constants you have thousands of possible ways to combine them (even more if you ignore units). The Higgs has a single combination that happens to match with a precision of 17 digits in the Standard Model.

And I repeat, because I don't think this point has come across clearly: the SM works fine with that. There is no fundamental problem with such fine-tuning. It just does not look very natural.
 

Buzz Bloom

Maybe it's worth putting some numbers in (courtesy of Michael Dine): m_H^2 = 36,127,890,984,789,307,394,520,932,878,928,933,023 - 36,127,890,984,789,307,394,520,932,878,928,917,398 GeV^2.
Hi Vanadium:

Can you cite a reference for the Michael Dine result that you quoted? If not, can you identify for me the individual variables whose difference gives the square of the Higgs mass? I gather from an mfb quote that they might be
- its bare mass, which can take any value
- radiative corrections, which (in the absence of new physics below the Planck scale) should be of the order of the Planck mass​
If this is correct, can you explain (at a summary level) how the values for these two variables are derived?

Regards,
Buzz
 

Vanadium 50

Michael gave it in a talk somewhere; I copied it down there. The idea is that the first term is the Higgs bare mass and the second term is the radiative corrections to the mass. Neither is calculable today; all we know is the rough size of each and the difference between them.
 

Buzz Bloom

Neither is calculable today
Hi Vanadium:

Does this mean that the values were once calculable, but now they aren't? If so, can you explain why that might be? If not, please clarify.

Regards,
Buzz
 
mfb
A new (yet unformulated) theory might make it possible to calculate them in the future. That is the "today" aspect.
 

Buzz Bloom

A new (yet unformulated) theory might make it possible to calculate them in the future. That is the "today" aspect.
Hi mfb:

If that is the case, where did Michael Dine get his numbers? Were they just made up to make a point about how the Higgs mass seems to have a "magical" quality?

Regards,
Buzz
 
mfb
All those digits? Sure. We know the magnitude of the number, but not the precise value.
Googling the number directly leads to Michael's talk.
 

Buzz Bloom

All those digits? Sure. We know the magnitude of the number, but not the precise value.
Googling the number directly leads to Michael's talk.
Hi mfb:

Thanks for the link and the Google hint. Both PDF files look interesting but difficult. It will no doubt take me a while to digest whatever I can get out of them.

Regards,
Buzz
 

haushofer

Does anybody know a reference that gives a list of fine-tuning examples in science in general? :)
 

atyy

The hierarchy problem is a fine-tuning problem within the Wilsonian framework, in which the QFTs we use are effective field theories. If they are not effective field theories but are instead correct and complete quantum field theories, then there is no hierarchy problem.

http://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/

"Wilson’s mastery of quantum field theory led him to another crucial insight in the 1970s which has profoundly influenced physics in the decades since — he denigrated elementary scalar fields as unnatural. I learned about this powerful idea from an inspiring 1979 paper not by Wilson, but by Lenny Susskind. That paper includes a telltale acknowledgment: “I would like to thank K. Wilson for explaining the reasons why scalar fields require unnatural adjustments of bare constants.”

Susskind, channeling Wilson, clearly explains a glaring flaw in the standard model of particle physics — ensuring that the Higgs boson mass is much lighter than the Planck (i.e., cutoff) scale requires an exquisitely careful tuning of the theory’s bare parameters. Susskind proposed to banish the Higgs boson in favor of Technicolor, a new strong interaction responsible for breaking the electroweak gauge symmetry, an idea I found compelling at the time. Technicolor fell into disfavor because it turned out to be hard to build fully realistic models, but Wilson’s complaint about elementary scalars continued to drive the quest for new physics beyond the standard model, and in particular bolstered the hope that low-energy supersymmetry (which eases the fine tuning problem) will be discovered at the Large Hadron Collider. Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?"
 

atyy

A handwavy way to think about it is that if the theories we have are not the final theory, then fine-tuning of our crummy, wrong theory is telling us something about the high-energy theory that is peeping through to low energies. This is why fine-tuning is often argued to indicate new physics.
 
Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?"
Is naturalness anything other than a purely aesthetic argument? Why should we expect nature to be elegant?
 
Is naturalness anything other than a purely aesthetic argument?
It is more than aesthetics: we have examples of theories with fine-tuning being superseded by theories without it.
Give me a case where the opposite happened, if you know one.
 

mfb
Well, we know that the Standard Model is not complete. It does not include gravity, its options for accounting for dark matter are at best questionable, it tells us nothing about dark energy or inflation, and even if we ignore gravity we would still have the Landau pole as a problem at even higher energies.
 

ohwilleke

One example of concerns about fine-tuning leading to fruitful scientific theories would be the anomalous magnetic dipole moment of the electron, aka "g-2" (i.e., why the magnetic dipole moment of the electron, g, is not exactly 2, but some tiny, very precisely measured amount greater than 2).

It turns out that this slight discrepancy arises in QED from interactions with virtual photons, and that if your theory doesn't allow for virtual photons (and other odd ingredients of path integrals, such as the inclusion of photon paths slightly faster and slightly slower than the speed of light c, even though those paths are highly suppressed), you get an answer different from the physical one. The notion is that fine-tuning, if observed, must exist because we are missing something of the same mathematical character as the inclusion of virtual photons in our theory, which is why our expectations are so far off. The search for why g-2 took its surprising value produced theoretical progress. Now, I can't say that the intellectual history of that discovery really establishes that fine-tuning was the insight that made the difference in figuring out that virtual loops needed to be considered in QED (and the rest of the Standard Model as well), but it is a historical example that captures the notion.
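For concreteness, the leading QED contribution here is Schwinger's one-loop result, a_e = (g - 2)/2 = α/2π, which already lands close to the measured value. A minimal Python check (using the standard value of the fine-structure constant):

```python
import math

# Schwinger's 1948 one-loop QED result for the electron's
# anomalous magnetic moment: a_e = (g - 2)/2 = alpha / (2*pi)
alpha = 1 / 137.035999  # fine-structure constant

a_e = alpha / (2 * math.pi)
g = 2 * (1 + a_e)

print(f"a_e = {a_e:.8f}")  # ~0.00116141 (measured: ~0.00115965)
print(f"g   = {g:.8f}")    # ~2.00232282, slightly above 2
```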
 
