
I Sabine on strong CP, hierarchy

  1. Aug 5, 2017 #26

    mfb

    User Avatar
    2017 Award

    Staff: Mentor

With 1% of the final dataset.

Did you see anyone claiming that the LHC absolutely has to find something besides the Higgs? Or, the much stronger claim, that it has to happen so early? I did not.
    We cannot. That is the whole point. The natural scale for the Higgs mass is at least the scale where new physics gets relevant. New physics at the TeV scale would put this closer to the observed Higgs mass.

    There are models that avoid this issue completely, of course, like the relaxion and similar approaches.
     
  2. Aug 5, 2017 #27
This problem has many facets, even without perturbation theory and the Planck scale. That's why I highlighted in my previous post a related problem, which is non-perturbative in nature, namely the QCD phase transition (one may also consider the weak symmetry breaking scale). While the hierarchy between Lambda_QCD and the cosmological constant (cc) is much smaller than the one related to the Planck scale, it still involves many orders of magnitude. How would early cosmology know that in the future there will be this non-perturbative phase transition (at relatively low energy) and arrange "beforehand" that after the transition the vacuum energy is practically zero?

It seems pretty obvious that this is an ill-posed question; rather, one would expect that there should be some self-tuning, continuously adapting mechanism that more or less automatically guarantees that at very late times of the universe the vacuum energy tends to zero, independent of the various phase transitions along the way. There are a few proposals of this kind around, but none seem particularly convincing.

So this *is* a most important open problem, despite the perceived arrogance (btw, there is a saying that arrogance is just competence as perceived from below). The Planck scale is not important; you may call it infinity for any practical purpose at low energies. The question remains how to reconcile this infinite number (or, less extremely, any other large number such as Lambda_QCD) with the measured "almost zero" value of the cc. To me it appears that this kind of question cannot be meaningfully addressed in the context of particle QFT, so the one-loop argument could be nil anyway (this often happens when particle phenomenologists try to address questions related to gravity using QFT methods). Phenomena like UV/IR mixing and the holographic nature of quantum gravity go far beyond naive particle QFT, and all the difficulties one encounters may just be an indication that one is using too limited a framework in the first place (much like trying to use standard QFT to address the information-loss problem of black holes).
     
  3. Aug 5, 2017 #28
Just to clarify, I did not mean that any answers in this thread were arrogant. I meant the way the hierarchy problem was presented before people crashed against the LHC's disappointing results (from this point of view). What I can see, for example, is that the way people talk about this problem today is much different from what it was 15 years ago, or even right before the Higgs discovery.

About arrogance, I don't think I agree with the saying. I believe that if you know what you are talking about, you have no reason to become arrogant. When you do, usually the reason is precisely the opposite.

Finally, it's true, the LHC has just started and we don't know what will happen until 2035. But what I was referring to was the prospect of finding a huge number of new states, mainly based on hierarchy-related arguments, and that simply did not happen.
The LHC has practically reached its maximum energy and, while of course we still don't know, I think it is reasonable to expect that if it sees something, it will be deviations here and there. Which would be good, of course, but it is far from what people were talking about some years ago.

In any case, there is no gain in such discussions; we will just have to wait and see, and I will be happy to be proven wrong, if that turns out to be the case. I just wanted to stress that many things that are taken for granted and stated as if they were obvious often are not, and we had a proof of this with the hierarchy problem.

    Cheers
     
  4. Aug 5, 2017 #29

    mfb

    User Avatar
    2017 Award

    Staff: Mentor

A naive scaling suggests that a "10 sigma" observation with 3000/fb could be a ~1 sigma effect today, which means it wouldn't be visible at all. Sure, it is unlikely that we'll find 50 new particles, but even a single particle would be amazing. A single clear violation of the SM elsewhere would be great as well.
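The naive scaling can be checked in a couple of lines; this is just the statistics-dominated rule of thumb that significance grows with the square root of the integrated luminosity (the 50/fb "today" figure is the round number quoted later in the thread, not an official value):

```python
import math

# Statistics-dominated searches: significance scales roughly as sqrt(luminosity).
lumi_now = 50.0      # /fb collected so far (rough figure from the thread)
lumi_final = 3000.0  # /fb planned final LHC dataset
sigma_final = 10.0   # hypothetical "10 sigma" observation in the final dataset

sigma_now = sigma_final * math.sqrt(lumi_now / lumi_final)
print(f"{sigma_now:.1f} sigma")  # about 1.3 sigma: essentially invisible today
```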
    LHCb has some curious results in that aspect...
     
  5. Aug 6, 2017 #30

    Haelfix

    User Avatar
    Science Advisor

Ok, to see why DimReg isn't an answer to the hierarchy problem, or alternatively why you have to talk about Standard Model cutoffs, may I recommend an excellent presentation by Nathaniel Craig at a recent IAS summer school that has conveniently appeared on the internet in the past week or so. For people who are serious about learning why the hierarchy problem keeps a lot of physicists up at night (and perhaps why we don't all suffer from mass delusion), I can't think of a better place to start.

    Lecture one goes through a lot of what was discussed here in some detail and the following lectures are interesting as well, highly recommended.
     
  6. Aug 10, 2017 #31
    It has been said several times in this thread already, but I'll say it myself too, in case it helps someone else get the message. The real hierarchy problem is not the problem that one number is small and the other number is big. The problem is that we have theories in which, to match experiment, we need an observed quantity to come out small, and the way we do that is to employ a fundamental parameter that is very big, but which is finetuned so as to be almost entirely cancelled out by quantum effects.
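The cancellation being described can be written schematically. This is the textbook one-loop estimate with the top-quark loop dominating, and Λ the cutoff scale where new physics enters; it is a generic sketch, not anything specific to the posts here:

```latex
m_{H,\mathrm{phys}}^2 \;=\; m_{H,\mathrm{bare}}^2 \;+\; \delta m_H^2,
\qquad
\delta m_H^2 \;\sim\; -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 .
```

With Λ of order the Planck scale, the correction is some thirty-plus orders of magnitude larger than the observed (125 GeV)^2, so the bare parameter must be tuned to cancel it to that accuracy.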

    Originally I thought Hossenfelder understood this, and was taking the attitude, so what? ... in an example of that hardboiled empiricism which says, to hell with preconceptions and common sense and human intuition; what matters in science is agreement with experiment, and these finetuned theories agree with experiment. She does actually say something like that, it's just that I am no longer sure whether she thinks finetuning means huge cancellations, or just small numbers.

    Anyway, for an example of a paper which overtly says, let's forget concerns about finetuning and just see what works, see "The new minimal standard model" of 2004. It never swept the world and it's already out of date (the actual Higgs is a little lighter than the range it allows), but it's an example of what finetuning-be-damned looks like. In that regard, I would contrast it with the more recent "SMASH" model (which has had some fans here), because SMASH includes an axion in order to explain why strong CP violation is effectively zero, something that a willfully finetuned theory like NMSM can just posit.

    For my part, I do think finetuning (serious finetuning, involving magic cancellations that wipe out many orders of magnitude) does need to be explained or avoided; and I am one of those people who is impressed by the asymptotic safety prediction of the Higgs mass. That seems to require a "desert" (no new physics) above the electroweak scale, and something unexpected at the quantum gravity scale. For example, despite my love of string theory, I can now appreciate the interest in unconventional models of micro black holes, if that would remove one cause of the need for finetuning.

    Two sets of papers that I have started looking at, are Strumia et al on agravity, and Dubovsky et al on the T-Tbar deformation of CFTs. I don't know what agravity says about black holes, but it is apparently designed to allow what Gorsky et al call asymptotic security - Higgs mass goes to zero at the Planck scale, and Higgs mass beta function goes to zero at the Planck scale. The T-Tbar deformation, meanwhile, is said to have asymptotic fragility - whatever that is, I haven't got the gist of it yet. But apparently Dubovsky's work has caught Strumia's eye, so it goes on the list for consideration.
     
  7. Aug 10, 2017 #32
I don't know about Sabine, I cannot speak for her, but on my side, I think I did my best to understand the arguments that people talk about. My point was an entirely different one.

There is no doubt that, if there really was a magic cancellation of 15 orders of magnitude, everyone should be bothered. And in my argument I was not trying to say that I can prove that such a cancellation does not take place. The problem is, I am not sure I believe that this cancellation is really inevitable. The problem, in my perspective, is that we insist on applying a mode of reasoning (based on QFT and EFT, if you want) to a problem of which we understand practically nothing, because it involves gravity at very high energies and its quantization.

Now, of course, lacking anything better, we have to try to do something, and it is good that people think about these scenarios and ask "what would it be if...". The problem is that these things are stated as if they were carved in stone, namely:

    "there MUST be something, because there IS such a huge cancellation, don't YOU see it, you fools ?".

I was arguing precisely against this attitude. I always found it pretentious and unscientific. Therefore I am happy that nature (at the LHC in this case) is showing us that things are not as obvious as they were presented, and that maybe we should cut down our egos a bit sometimes and try to think of something new. This is a much more exciting perspective to me than keeping on adding new gauge groups and new particles and new interactions that no one ever saw, to explain phenomena that no one ever saw, just because of a hunch. Sorry... there are indeed things we see and don't understand, dark matter and dark energy for example, which (as far as we can say now) always and only have to do with gravity.

I understand you will not like the example, but to me this is very similar to Ptolemaic cosmology and epicycles. People had a framework that seemed to work, namely a system of things rotating on circular orbits around the Earth, and in their minds there was no other possible system. Therefore, of course, they tried to bend and adapt it, always adding a new piece, a new circle, to try to reproduce observations. It was "clear and obvious" to everyone that if the system did not work, it meant that there HAD to be another epicycle; what else could it be???

Fortunately, at some point someone came along and changed the perspective, and then everything was simple and beautiful again, and none of those crazy (according to the Ptolemaic system, "inevitable") epicycles was actually needed.

I know you all understand this story better than me, so I ask myself: how is it possible that our minds are so powerful that we can imagine gauge theories, symmetry breaking, the bending of space-time under the action of gravity, and we still cannot learn a bit of humility from stories that have repeated over and over again in our past? So in conclusion I say: fine, if we want to build models and present ideas using the framework that we have, trying to look for a hint of a solution, it is the best we can do now and it is completely legitimate. On my side, I know I am not the new Einstein and I probably will not be the one changing perspective, so all fine. But at least, let's give it the benefit of the doubt, ok?

    Cheers
     
  8. Aug 10, 2017 #33

    ohwilleke

    User Avatar
    Gold Member

It doesn't work like that, for a variety of reasons. It turns out that once you get the LHC running at peak energy, there are huge spoilers very early on in the results, and there are never cliffhangers.

One is that more data doesn't change your systematic error, it only changes your statistical error. And statistical error has a non-linear relationship to the size of your sample.

Statistical error is basically a function of sqrt(1/sample size). To use round numbers, if we have 50/fb now and will have 3000/fb then, the relative improvement in statistical error is sqrt(1/60), so the statistical margin of error might decrease by a factor of about 0.129.

But the relationship between systematic error, statistical error and total error isn't linear either: they add in quadrature. If your systematic error is 1 and your statistical error is 1 (and currently a lot of results have comparable amounts of systematic and statistical error, so this crude estimate isn't too far off), your total error is about 1.4.

But if your systematic error is 1 and your statistical error is 0.129, your total error is about 1.008.

    So a 5 sigma result when it is all over ought to be a 3.6 sigma result now.
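A minimal sketch of the arithmetic above, under the post's own assumptions (equal systematic and statistical errors today, quadrature combination, and a systematic error that stays fixed as data accumulates):

```python
import math

lumi_now, lumi_final = 50.0, 3000.0
syst = 1.0                                    # systematic error, assumed fixed
stat_now = 1.0                                # statistical error today
stat_final = stat_now * math.sqrt(lumi_now / lumi_final)  # ~0.129 with 60x the data

total_now = math.hypot(syst, stat_now)        # quadrature sum, ~1.414
total_final = math.hypot(syst, stat_final)    # ~1.008

# A fixed signal that reaches 5 sigma in the final dataset would show today at:
sigma_now = 5.0 * total_final / total_now
print(f"{sigma_now:.1f} sigma")               # ~3.6 sigma
```

Note that the fixed-systematics assumption is exactly what the replies below dispute.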

    We have no 3 sigma new physics resonances yet, and that implies that we are almost certain not to discover any new particles at a "discovery" threshold by the time that the LHC is done.

    In truth, it is even worse than that.

    Why?

Because lots of SM observations are subject to loop effects that involve all possible ways that a phenomenon could happen, including any new particle that is out there. So, if there is a BSM phenomenon out there, you are not going to see just one experimental anomaly standing alone. You would see multiple experimental anomalies in measurements of different things. And the kind of measurements where you would be seeing multiple anomalies at CMS would be the same kind of measurements where you would be seeing multiple anomalies at ATLAS.

    With the possible exception of charged lepton non-universality, we haven't seen anything remotely like that at the LHC. If there were 50 new particles out there, we'd have hundreds of anomalies that we'd be seeing at a significance of about two-thirds of what we will see in a final result (in sigma units). Yet, this clearly isn't happening.
     
    Last edited: Aug 10, 2017
  9. Aug 10, 2017 #34

    mfb

    User Avatar
    2017 Award

    Staff: Mentor

    @Sleuth: I don't see anyone claiming that there must be many orders of cancellation. That is the point: We don't expect such fine-tuning, so we have to figure out what is going on. The ideal result is an explanation for the Higgs mass that does not have excessive fine-tuning.

    @ohwilleke: For most searches, the region of phase space accessible with 3000/fb is more than twice as large as the region accessible with 40/fb on a linear mass scale, and you also get a factor of nearly 10 in sensitivity in cross section for a given mass in most places. Per day of running, the early data is more exciting, but taking much more data helps a lot. Systematic errors are not an issue for most searches, and most of them also go down with luminosity because they are uncertainties relative to the signal strength (e. g. luminosity, identification efficiency and so on) or the fitting procedure. Very few searches have systematic uncertainties corresponding to a fixed uncertainty in the signal yield.
    What you say is not directly wrong, but your estimate of the importance of these things is completely off.
    That statement is completely wrong.
     
  10. Aug 10, 2017 #35

    Vanadium 50

    User Avatar
    Staff Emeritus
    Science Advisor
    Education Advisor
    2017 Award

    His whole point about systematics is completely wrong. More data means more control over systematics.
     
  11. Aug 10, 2017 #36

    Vanadium 50

    User Avatar
    Staff Emeritus
    Science Advisor
    Education Advisor
    2017 Award

    Sleuth, you've said some harsh things. Forgive me if I am direct in replying.

    You accused scientists of arrogance, twice, and then proceeded to misstate the situation. I find this...ironic.

Second, your statement that you weren't talking about anyone here when you said that is cowardly. You called the community arrogant, several times. You have members of that community here. Show some courage and either retract your statement or stand by it; saying that the community is arrogant, except for the handful of people who happen to post here, is a cowardly way out.

    Third, you have it exactly backwards on cutoffs. Setting a cutoff to zero doesn't make it valid everywhere, it makes it valid nowhere. You need to set it to infinity to make it valid everywhere.

    Fourth, whether or not the solution to the hierarchy problem will be uncovered by the LHC has no bearing at all on whether there is a hierarchy problem. It's possible that the hierarchy problem has an entirely different solution. I don't think it will turn out that way, but it could. In any event, the hierarchy problem would be there even had we never built the LHC, and it was known before we built it that a Higgs boson light enough to cause EWSB of the sort we see is problematic because of quantum corrections.

    Fifth, you write:

    in quotes. I challenge you to find that anywhere in the scientific literature. I think you just made that up. Sticking words in the mouths of your opponents is a shabby, shabby means of debate.

    Finally, you write:

    Fair enough. You first.
     
  12. Aug 10, 2017 #37
    Vanadium, ok, I think the discussion is going completely in the wrong direction.

I was not referring to people on the forum, whether you believe it or not, but to an attitude of part of the community during talks, conferences, etc., and I don't feel like repeating this again. I don't see any reason to talk about cowardice and everything else you said.

Probably I expressed my point of view with too much emphasis, my bad. I apologize if I offended anyone; it was not my intention to be arrogant.
I think the message is clear, so I don't need to repeat it again. My ego is already way too small to cut it down even more; you don't know me.

    Have fun
     
  13. Aug 10, 2017 #38

    MathematicalPhysicist

    User Avatar
    Gold Member

Well, as you yourself say, history repeats itself; any theory or model that comes along will replace our current understanding, and those new theories will in turn be replaced again and again; everything repeats itself endlessly.

Every physical theory is, in the end, false according to classical logic, which is the basic building block of every thought; take it as you like.
     
  14. Aug 10, 2017 #39
    I completely agree with you!
     
  15. Aug 10, 2017 #40

    ohwilleke

    User Avatar
    Gold Member

    (This is mostly in response to post #36, which might not be obvious as a couple of other posts were made to the thread in the meantime while I was writing this post.)

    A Question Of Terminology - Trying to Defuse The Emotional Side Of The Discussion

    Different Senses Of Words And The Emotional Baggage That Comes With Them

When one talks of scientists displaying "arrogance" in regard to the strong CP problem and hierarchy problem, one is using the word in an abstract and somewhat uncommon sense, as opposed to the usual sense of describing a person's internal emotions and attitude towards other people (much as the word "natural", as used in regard to the hierarchy problem, is being used in a technically defined sense and not in its common meaning of "the way things actually are in nature").

There are some false friends in physics terminology, like "color" for QCD charge, which aren't troublesome or contentious because everyone is absolutely clear that the sense of the word being used is totally different from the common meaning. But in the case of words like "arrogance" and "natural" in the context of the strong CP problem, hierarchy problem and related discussions, it's easier to inflame emotions, because the technical meanings are closer to the common meanings and because this choice of words is intended, to some extent, to evoke some of the same heuristic reactions as the common meaning.

Still, the sense in which one uses the word "arrogance" in this kind of discussion is not quite the same as the one in which you use it to describe a coworker to a friend at a cocktail party. Someone who is "arrogant" in this sense may be a very polite, civil person who exudes humility and usually displays deferential conduct in their interactions with other people.

    In the sense used in the strong CP problem/hierarchy problem, unlike its common sense usage, "arrogance" is a close synonym to "hubris" rather than to "impolite *******".

    "Arrogant" Is Intended To Characterize "Naturalness" Analysis As A Type Of Unscientific Category Error

What one means when saying that a scientist is "arrogant" in reference to strong CP problem/hierarchy problem type questions is that the scientist is making suppositions about what the laws of nature and its physical constants ought to look like, in the form of a Bayesian distribution of priors over what those laws of nature/physical constants should be, without having any empirically supported or scientifically valid basis for choosing those Bayesian priors. In the eyes of critics who object to seeing these issues as true "problems" in physics, attributing any scientific meaning to these Bayesian priors is a form of category error.

Critics see it as a category error because the set of all possible laws of nature and physical constants in the abstract (as opposed to in relation to different hypotheses formulated independently on the basis of empirical observation, such as trying to decide whether GR or F(R) theory better describes reality) is not a scientifically valid matter upon which to generate Bayesian priors: "possible laws of nature and physical constant values" are not things which have any reality in any space-time, and hence can't be assigned a weight in any way that is meaningful or that adds information to what we know by other means.

    Buried Religious Subtexts

    Both "arrogance" and "natural" in this scientific context is also dicey and emotional because both words carry with them a subtext of residual religious belief that has carried over linguistically even though anyone who is engaging in this debate has implicitly abandoned the religious worldview and metaphysical context in which this religious imprints into our language and usage arose.

    In the case of the term "arrogance" or the synonym "presumptuous" or "hubris" the unacknowledged religious subtext is that it is not the place of a mere mortal to second guess the mind and motivations of a creator god. The terminology is actually somewhat self-undermining because the whole point of the critics is that framing these issues as choices to be made by a creator god or some abstracted amoral generic equivalent of a creator god, is not a scientifically value way of looking at the world, even thought these words derived from religious and interpersonal contexts perilously adopts the very frame of reference that critics are seeking to reject. George Lakoff would take the critics to the woodshed for this poor rhetorical choice that is self-undermining in a very subtle, unconscious way.

    In the case of the term "natural" the unacknowledged religious or metaphysical subtext is that the laws of nature and its physical constants were established by an anthropic intelligent designer creator god who has certain known aesthetic preferences which are known to his devotees and that the way that the laws of the universe and its physical constants should be can be inferred in a meta fashion from the presumed stylistic preferences of a presumed creator god as computer programmer or game designer with a certain set of choices available much as someone playing a "create your own universe" game on a computer. Needless to say, we have no reliable scientific reason to think that our laws of the universe or the values of our physical constants really came into being in a context like that one.

But the tendency of both sides of the debate to resort to language with religious baggage isn't entirely surprising, because at its heart this is a debate about the metaphysical assumptions of fundamental physics as a discipline, even if the question is rarely posed that way. Scientists who analyze whether laws of physics and physical constants are "natural" rarely frame their analysis as a metaphysical one, even though its very heart implicitly assumes that it is scientifically valid to think about a set of all possible laws of nature and all possible values of the fundamental physical constants.

    Using Greek Philosophy Terminology

To go really old school, the strong CP problem/hierarchy problem and related issues use a very Platonic mode of philosophical reasoning, and in most other contexts modern science has rejected Platonic modes of reasoning in favor of modes rooted in Aristotelian worldviews, which posit that there is no "real" world of "ideals" existing separate and apart from the empirically observable world.

    Critics would see this Platonic mode of reasoning as inconsistent with the scientific method.

Thus, in sum, what critics are trying to convey with the shorthand term "arrogance" is that the act of generating any Bayesian prior in this situation, regardless of its exact details, is not scientifically or logically justified.

Modern mathematicians do engage in Platonic modes of reasoning, but they tend to state expressly and formally when they are relying upon axioms that have no factual or empirically observed basis and are instead only conjectures. And modern mathematicians are usually more upfront than physicists exploring these kinds of Platonic problems about the fact that they are making no assertions about the physical validity of matters which they assume as axiomatic.

    Are There Better Alternative Words?

The trouble is that it is hard to come up with alternatives to the word "arrogant" in this context that don't have comparable emotional baggage.

For example, another strong synonym for "arrogant" in this physics context is "presumptuous", but "presumptuous" certainly has emotional baggage as well, although perhaps it is slightly less inflammatory: while "arrogant" in its common meaning usually describes one's conduct in interpersonal relationships, "presumptuous" does not have the same kind of social and interpersonal connotation.

If anyone could come up with a synonym for "arrogant", as used in the technical sense by critics of this approach, with less baggage, perhaps it could help make the discussion less heated. I'm simply at a loss to come up with one at the moment.

    An Aside

    I'm laughing a bit at myself when the automatic censorship feature of PF was invoked when I posted this, but honestly, the censored version conveys the intended meaning just as well in this context.

    Practical Consequences

The trouble is that whether formulating a Bayesian prior in this situation is "arrogant" or is scientifically justified is really the core question at the root of the entire debate. And, it turns out, there are a lot of practical, real-world consequences to whether it is proper to formulate a Bayesian prior in this kind of situation or not.

Many hundreds of millions, maybe billions, of dollars of scientific funding decisions, and many thousands of big-picture career choices about what very smart people with PhDs in physics will devote their research agendas to, write dozens of academic papers about, and think about, hinge on this very fundamental yes-or-no question, which superficially seems very abstruse.

Resolution of this issue influences how project managers at major HEP experiments devote scarce resources to particular kinds of data analysis of collider experiments, and which questions will receive the highest priority. The invisible victims, when the critics lose and the naturalness proponents win, are all of the hypotheses, and the scientists advocating them, that generate their research agendas without reference to concepts like naturalness and that have fewer resources allocated to them as a result of the pursuit of naturalness-oriented research agendas, which would have been much less of a priority relative to the alternatives if the "arrogance"/"naturalness" question were resolved the other way.

    If "naturalness" is a dead end as a fruitful way of generating hypotheses and prioritizing new scientific investigation, then this bad idea may have delayed breakthroughs with different views about the scientific method that could prove in the end to be more fruitful approaches to discovering new scientific knowledge by decades. Critics are increasingly vocal now precisely because the "naturalness" assumption has in hindsight proven to have not generated much useful guidance about which projects and hypotheses to pursue and indeed in hindsight appears to have been counterproductive over the time frame of the last forty years or so.

Of course, even a broken clock is right twice a day. Naturalness, whether or not it is a valid means of generating scientific hypotheses to test, and whether or not it is a valid means of prioritizing different research agendas, may sometimes lead to a useful result, even if not using this frame as a criterion would have produced better results (something we won't know until we try the alternatives).

And, of course, the very existence of the debate points out some blind spots in the common formulations of the "scientific method" itself. The method is very clear and prescriptive about how to test hypotheses and how to compare them once they exist, but it is very vague, and provides little guidance, about how to generate hypotheses and how to prioritize research possibilities in an optimal manner before we spend the time and money necessary to test them, once the list of questions is longer than the available resources to explore them.

    It is really most unfortunate that such high stakes depend upon a quite esoteric and subtle disagreement within the high energy physics community over whether this one very basic type of methodology is, or is not, a scientifically valid methodological tool.

    My view

    To put my cards on the table, I'll disclose my views on the merits.

As I've stated in other similar threads at PF, I think Sabine is spot-on correct. I don't agree that the strong CP problem or the hierarchy problem or kindred lines of inquiry are genuine, valid scientific "problems"; instead I consider them to be on a par with numerology. This kind of analysis can be interesting, but it doesn't provide much insight (indeed, sometimes numerology is more informative than naturalness analysis, which is really just a particular subspecies of numerology anyway).

In the same vein, I think that much of the work done based on the anthropic principle and the "multiverse" as a way of determining why the laws of the universe or the values of physical constants are the way they are is pseudo-science.

    I think, as she does, that the scientific community has experienced a generation or two of group think that has led it astray, and that being right or wrong on this question has nothing to do with whether you are in the majority or not. It is not an issue that can be resolved democratically.

But the main purpose of this particular post is not to get to the bottom of the correct answer; it is to frame the question in a way that adds light to the dispute, lets some of the unnecessary emotional air out of the room, tries to prevent misunderstandings, and focuses on the core issue at stake, since in my humble opinion this is really a disagreement over fundamentals and not primarily a question that depends on technical details to any great degree.
     
    Last edited: Aug 10, 2017
  16. Aug 10, 2017 #41

    king vitamin

    User Avatar
    Gold Member

    mitchell porter, if it's not too much of a digression from the main topic at hand here, could you elaborate on this statement? Does the Higgs mass play nicely into theories of asymptotically safe gravity (which I assume is the "agravity" mentioned later in your post)?
     
  17. Aug 11, 2017 #42
    The special feature of the Higgs and top masses is that they place the standard model at the edge of metastability. One interpretation of this (see section 5.1 here) is that the Higgs quartic and its beta function both go to zero at the Planck scale. A 2009 paper showed how to obtain this under the assumption of asymptotic safety of gravity.
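Spelled out, the Planck-scale boundary conditions being referred to (the Higgs quartic coupling λ and its beta function both vanishing there) are:

```latex
\lambda(M_{\mathrm{Pl}}) = 0,
\qquad
\beta_\lambda(M_{\mathrm{Pl}}) \equiv
\left.\frac{d\lambda}{d\ln\mu}\right|_{\mu = M_{\mathrm{Pl}}} = 0 .
```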

    Agravity is short for adimensional gravity, gravity with only dimensionless couplings. It resembles conformal gravity. See "Agravity" and "Agravity up to infinite energy". The idea seems to be, embed the standard model in a nongravitational field theory in which all couplings are asymptotically safe (Francisco Sannino's group works on this), then couple that field theory to dimensionless gravity so as to preserve those special Planck-scale boundary conditions without finetuning (see figure 1, page 15 of the second agravity paper).
     