What is the best theory for why our vacuum may be on the edge of metastability?

In summary, the leading idea for why our vacuum may be on the edge of metastability is that near-criticality makes it easier to transition between the different false vacua (not necessarily to the true vacuum). This is based on the premise that slight tweaks to the renormalization-group running of the Higgs couplings can shift the verdict for the universe from metastable to stable. However, this remains an uncertain proposition that has yet to be proven.
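To see why small tweaks to the running matter, here is a minimal numerical sketch of the standard picture: the Higgs quartic coupling lambda is evolved to high scales with a crude one-loop beta function. Only the lambda and top-Yukawa terms are kept, y_t is held fixed, and gauge contributions are dropped, so the zero crossing comes out lower than in the full multi-loop SM analysis (which places it near 10^10-10^11 GeV); this is an illustration, not a precision statement.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy one-loop running of the Higgs quartic coupling lambda.
# y_t is frozen at its weak-scale value and gauge contributions are
# dropped, so the result is qualitative only.
MT = 173.0   # reference scale in GeV (roughly the top mass)
YT = 0.94    # top Yukawa coupling, held fixed (an approximation)

def beta_lambda(t, lam):
    # t = ln(mu / MT); one-loop beta function for the quartic coupling
    return (24.0 * lam**2 + 12.0 * lam * YT**2 - 6.0 * YT**4) / (16.0 * np.pi**2)

# Initial condition: lambda(MT) ~ m_h^2 / (2 v^2) ~ 0.126
sol = solve_ivp(beta_lambda, (0.0, 40.0), [0.126], dense_output=True)

t = np.linspace(0.0, 40.0, 400)
lam = sol.sol(t)[0]
if np.any(lam < 0):
    t_cross = t[np.argmax(lam < 0)]
    print(f"lambda turns negative near mu ~ {MT * np.exp(t_cross):.2e} GeV "
          "-> deeper minimum appears, vacuum is at best metastable")
else:
    print("lambda stays positive over the scales probed -> stable")
```

Nudging YT up or the initial lambda down moves the zero crossing to lower scales, which is the sense in which slight changes to the inputs flip the verdict between stable and metastable.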
  • #36
mitchell porter said:
My understanding is that in the G2-MSSM, the scale of the Higgs vev is generated and protected by specific mechanisms, such as you describe. But protection of the Higgs mass from large corrections still relies on low-scale supersymmetry (just like other susy models). And so in that regard, the G2-MSSM conforms to Strassler's reasoning. Strassler says that the scale "vmax" at which the SM ceases to be valid can't be too high, or else there will be unacceptable finetuning; and in these G2 compactifications of M theory, the MSSM is indeed supposed to replace the SM, not too far above the weak scale.

As for the cosmological constant, Bobkov argued that a version of the Bousso-Polchinski mechanism can work within the G2-MSSM.

[Attached image: eVQ5xG.jpg]


To tame vmax without fine-tuning, it seems supersymmetry (as in the MSSM) really is required not far above the weak scale. But then I wonder what else could do it without supersymmetry. Anyone got any idea? Perhaps some quantum-gravity effect that reaches outside the Planck scale too? Remember, beyond 10^-18 m we are at scales past the known particles. Maybe space is composed of something that can tame it, solving this and the CC at the same time, and even the hierarchy problem?

Also, what's weird besides the mass being so low is why it's on the edge of metastability. Maybe the four puzzles are related somehow, if one can use some mechanism of quantum gravity that acts outside the Planck scale.
 

  • #37
Urs Schreiber said:
there is no absolute concept of "size" of a quantum correction... there is an arbitrary freedom in choosing renormalization constants, large or not...

one needs more theoretical input than just low energy effective perturbative quantum field theory with its arbitrary renormalization freedom
These comments have been bothering me for over a week now. To really address them authoritatively, I would have to review a lot of renormalization theory, but for now I'll just state my thoughts and see what comes of it.

There is a kind of field theory that is renormalizable up to arbitrarily high scales. The Lagrangian only contains renormalizable terms. This is the original concept of a renormalizable field theory.

Then there are field theories which also contain non-renormalizable terms with coefficients that have a dependence on a physical cutoff scale, above which the theory is not defined. This is called an effective field theory.
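Schematically (this is the textbook form of the statement, not anything specific to this thread), such an effective Lagrangian organizes the non-renormalizable terms in inverse powers of the cutoff:

$$\mathcal{L}_{\text{eff}} \;=\; \mathcal{L}_{\text{ren}} \;+\; \sum_{d_i > 4} \frac{c_i}{\Lambda^{d_i - 4}}\, \mathcal{O}_i \,,$$

where the O_i are local operators of mass dimension d_i and the dimensionless coefficients c_i parametrize the unknown physics above the cutoff Λ.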

I feel like Urs's comments apply most directly to a field theory that is renormalizable in the original, unrestricted sense. This actually includes the standard model, because, as explained here, all the corrections can be absorbed into the renormalization.

However, Strassler is, by hypothesis, treating the standard model as an effective theory, valid only up to a physical cutoff scale. And he's saying that, in that case, the higher the cutoff scale, the greater the fine-tuning required to keep the Higgs mass low.
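A standard back-of-the-envelope version of this (my gloss, not a quotation from Strassler): the top-quark loop shifts the Higgs mass-squared by roughly

$$\delta m_h^2 \;\sim\; \frac{3 y_t^2}{8\pi^2}\, \Lambda^2 \,,$$

so already at Λ = 10 TeV, keeping m_h ≈ 125 GeV requires a cancellation at roughly the half-percent level, and the required tuning grows like Λ².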

Surely Strassler's reasoning can be made as rigorous as anything else in quantum field theory. We might have to use the "standard model effective field theory", in which the non-renormalizable terms are added to the SM Lagrangian. And we might quantify the fine-tuning by looking at how restricted the coefficients of the non-renormalizable terms (which codify the effects of unknown BSM physics) must be if we want the Higgs to stay light.
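One standard way to make that quantitative (my suggestion for how to formalize it, not something spelled out by Strassler) is a Barbieri-Giudice-style sensitivity measure,

$$\Delta \;=\; \max_i \left|\frac{\partial \ln m_h^2}{\partial \ln c_i}\right| \,,$$

with the c_i the coefficients of the higher-dimension operators; a light Higgs that needs large Δ is what one means by a fine-tuned EFT.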

My complaint is that Urs treats strictly renormalizable field theory, and a UV-complete theory like string theory, as the only useful settings for reasoning, when in fact effective field theory is enormously useful. And a technical question would be: just how great are the "freedoms", the ambiguities, in effective field theory? I think they would be far more reasonable in size.

As evidence, I would point to the discussion of renormalon-induced uncertainties in the top-quark pole mass in QCD. No one there is saying that the top-quark pole mass can be anything at all; the ambiguity is of order Lambda_QCD, i.e. a few hundred MeV. And I think that would be far more characteristic of the ambiguities in EFT quantities.
 
  • #38
I don't see which distinction you are meaning to invoke. The perspective of effective field theory is one of several ways to look at the issue of renormalization freedom, and all of these are different perspectives on the same thing. This is explained in the subsection titled "Wilson-Polchinski effective QFT flow" within the PF-Insights article Mathematical QFT -- Renormalization.

Independently of that, it may be worthwhile to recall the following subtlety in the terminology "renormalizable":

The technical term "renormalizable" just means that, of the a priori infinitely many renormalization constants, it so happens that only a finite number appear. If a theory is non-renormalizable in this sense, it just means that there are infinitely many choices to be made in computing quantities at arbitrary energy. That may be physically disappointing, but it is not mathematically inconsistent. (Perturbative gravity is the classic example: new counterterms with free coefficients appear at each loop order, yet once those choices are made, each order is finite.)

This point was established way back by Epstein-Glaser 73, but somehow the community goes through cycles of forgetting and rediscovering it. One place where it is rediscovered is

J. Gomis and Steven Weinberg,
"Are nonrenormalizable gauge theories renormalizable?",
Nucl. Phys. B 469 (1996) 473
(arXiv:hep-th/9510087)

Namely, we can just keep making choices of renormalization constants on and on. In Epstein-Glaser 73 this infinite list of choices is organized by loop order, while in effective field theory it is organized by energy scale, but the principle is the same in both cases.

What is good about the Epstein-Glaser perspective on renormalization is that it gives a crystal-clear picture of what the space of choices is.
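Schematically, the statement (in the Epstein-Glaser / Stückelberg-Petermann formulation) is that any two renormalization schemes differ at each order by a finite set of local counterterms,

$$S^{(n)} \;\longmapsto\; S^{(n)} + \sum_k z^{(n)}_k \int \mathcal{O}_k(x)\, d^4x \,,$$

with finitely many free constants z_k at each order n; that finite-dimensional space of choices, order by order, is the picture.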
 
  • #39
Urs Schreiber said:
I don't see which distinction you are meaning to invoke.
The difference between a renormalizable field theory with a cutoff that can go to infinity, and a nonrenormalizable field theory that is still predictive but only up to a finite cutoff. As a reference, let me cite Mark Srednicki's QFT text, which introduces the distinction at the start of chapter 18, and develops effective field theory in chapter 29. The fine-tuning problem is mentioned halfway through chapter 29. The physical interpretation of a nonrenormalizable theory as an effective action valid only up to a finite cutoff begins just before eqn 29.37 ("The Wilson scheme also allows us...").

I'm just saying that this is the context in which Strassler's argument is made, and it is a valid argument given its premises: if there is an EFT with a light scalar, the higher the cutoff, the more tuning is required. (Maybe it could be formalized with Kevin Costello's framework.)
 
