
Do TOE candidates predict SM parameters?

  1. Oct 23, 2012 #1
    I wonder if TOE candidate theories (loop gravity, superstrings, twistors, or <you-name-it>) predict from first principles the SM parameters (all of them, some of them, or at least some correlations like the Koide formula)?

    If they don't predict SM parameters, then the only option is "10**500 different universes with different SM parameters, born from the false vacuum", correct?

    What is the relationship between the different theories and the "10**500 universes" approach:
    1. absolutely required,
    2. optional,
    3. or not compatible at all?

    Of course, the answer may be different for different theories.
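
    (Aside: the Koide relation mentioned here is easy to check numerically. The sketch below uses approximate charged-lepton masses in MeV; the exact figures are not the point, only how close the ratio lands to 2/3.)

    ```python
    from math import sqrt

    # Approximate charged-lepton masses in MeV (rough 2012-era values).
    m_e, m_mu, m_tau = 0.511, 105.658, 1776.82

    # Koide's ratio: Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
    Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2

    # Koide's observation: Q comes out remarkably close to 2/3.
    assert abs(Q - 2 / 3) < 1e-4
    ```

    No known TOE candidate derives this relation; that is exactly the kind of correlation the question is about.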
  3. Oct 23, 2012 #2
    Kaluza-Klein theory predicts some SM parameters, and does it wrong. Namely, it gives a wrong value for the electron mass and predicts a massless scalar particle. That was one of the reasons it was abandoned by the mainstream. It is sad, IMO, since some Higgs mechanism could actually have corrected the values.
  4. Oct 23, 2012 #3
    Explaining the SM parameters is a long journey. Thousands of QFT papers have been written, trying to explain just parts of the general structure - the order-of-magnitude relations between fermion masses, or between the elements of the mixing matrices. String theory is then a unified framework which, on the right background, might produce these GUTs or other BSM models, but it is very hard to get the predictions even for a specific background, because of hard-to-calculate quantum corrections. So explaining the SM is a jigsaw, you need to work separately on different parts of it, and you need to have multiple ideas for how each part fits together. Eventually there will be another leap in understanding and a deeper logic will become apparent, as happened when the SM was created in the 1970s, but for now there is no other path than to keep pushing in all directions.
  5. Oct 23, 2012 #4
    I understand the issue with quantum corrections. But let's discuss a narrower question: CP violation. The exact value is not important; what is interesting is whether it is 0 or not 0. So we can forget about the exact value of the quantum corrections.

    As mathematics itself is deterministic, it is hard to get asymmetry from a symmetric theory. There are 3 options:

    1. The asymmetry is injected into an otherwise symmetric framework via initial/boundary conditions. However, a TOE should not depend on initial conditions; otherwise it is the same as just postulating some specific values of the SM parameters.

    2. The asymmetry is a core property of the mathematical framework of the TOE. For example, exp(x) is not symmetric at x=0. Say, negative charges are fundamentally different from positive ones, but at low energies that asymmetry is almost hidden by quantum corrections. However, we know that at high energies we see more symmetries, not less!

    3. Symmetry is symmetrically and deterministically broken (MWI-cat-style). Essentially this is the same as the 10**500 "baby-universes" approach. In every baby universe the symmetry is somehow broken, but the total ensemble of universes is perfectly symmetric.

    Anything missing?

    So I was asking which TOE theories rely on #3 and which on #2.
    Last edited: Oct 23, 2012
  6. Oct 23, 2012 #5
    In string theory, CP symmetry is part of ten-dimensional gauge symmetry, so any CP violation has to be spontaneous. That can happen in a variety of ways, e.g. from fluxes in the extra dimensions or from supersymmetry-breaking, so exactly why you get CP violation from the weak interaction but not from the strong interaction, etc., will depend on the model.
  7. Oct 23, 2012 #6



    Is Connes' non-commutative geometry approach a ToE candidate? If so, then there is a ToE candidate - NCG - which predicts SM parameters.
  8. Oct 23, 2012 #7



    Resilience of the Spectral Standard Model
    Ali H. Chamseddine and Alain Connes

    I'm glad to see them using the terminology "spectral geometry" more often now instead of "noncommutative geometry", and consequently saying "spectral Standard Model" instead of noncommutative Standard Model. It is more descriptive. Urs Schreiber advocated this terminology over 5 years ago in a tutorial he gave.

    They do seem to reproduce the SM and make some predictions about particle masses. Their idea of a "ToE" seems to involve the "big desert" hypothesis: that the Standard Model we have might be good all the way to the Planck scale, or the unification scale. Perhaps a theory of gravity and matter that is good up to the Planck scale is all one can reasonably ask of a "ToE".

    It's impressive that they get the Standard Model and even predict some parameters based on what is a fairly simple geometric scheme.

    I'm not sure just how they acquire gravity. Could someone give an intuitive explanation of how they do that? They obviously have all the appropriate machinery---a 4D manifold and an ability to describe the metric on it.

    This paper of Connes and Chamseddine was on the "Most Important Paper" poll and got three votes:
    Last edited by a moderator: May 6, 2017
  9. Oct 23, 2012 #8



    I have to understand the classification of their NCGs better in order to understand what it means that "they get the Standard Model". In their last paper they don't get the Higgs mass (even though they claim it in the abstract), but they show that 125 GeV is reasonable within their theory; it is at best a post-diction.

    I am not, either; that's why I asked whether "... Connes' non-commutative geometry approach [is] a ToE candidate"
  10. Oct 23, 2012 #9
    As I understand it, the symmetries of the SM continue to exist in any spacetime, no matter how curled up or how fast it is expanding, since they are internal symmetries. So it would seem that these symmetries existed from the start, even during inflation. I would assume that something must have happened to begin to differentiate one symmetry from another during reheating, when inflation stopped. Could it be that each of the symmetries of U(1)SU(2)SU(3) begins to look like the others at small enough distances or high enough energies? And then, as the temperature cooled, the distance scales grew large enough to make the differences between the symmetries apparent? Is that what is happening in the NCG model?
  11. Oct 23, 2012 #10



    There is a revolutionary new idea here that you may not be taking account of: a new way to represent 4D spacetime that does NOT use a manifold. The manifold (a set of points, a set of maps, map overlap consistency, smooth functions defined locally, a tangent space at each point) was invented in the 1850s and is the math object everybody usually thinks of in this context as representing a spacetime or any other continuum. GR was defined on manifolds.

    But there is a more abstract way to present spacetime! In a sense it is simpler and more generalizable. You can present a spacetime as an abstract NUMBER SYSTEM!!! It is very unintuitive why and how you can do this until you get used to it.

    I don't mean ordinary numbers--integers, reals, complex numbers---NO! I mean a set of abstract things, not numbers, but things that you can nevertheless add, subtract, and multiply. For example, imagine the set of all smooth real-valued functions defined on the surface of a donut. An algebraic object: you can add or multiply two functions to obtain a third. And hidden deep in the combination rules of that algebraic object can still be a simple geometric object like the surface of a donut. An algebraist can find the hidden geometrical thing, which is sometimes called the "spectrum" of the algebraic object. It is revealed to him by the way the functions defined on the donut add, subtract, and multiply with each other to give other functions.
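
    (To make the "algebra hides the geometry" idea concrete, here is a toy sketch of my own, not anything from Connes' papers: a finite set of points stands in for the donut surface, and the functions on it are added and multiplied pointwise. The "spectrum" idea shows up as the evaluation maps, one per point, which respect the algebra operations.)

    ```python
    # Toy Gelfand-duality sketch: a "space" traded for its algebra of functions.
    # The finite set of points is a stand-in for the donut surface.
    points = ["p1", "p2", "p3"]

    # Functions on the space, represented as dicts mapping point -> value.
    f = {"p1": 1.0, "p2": 2.0, "p3": 3.0}
    g = {"p1": 0.5, "p2": -1.0, "p3": 4.0}

    def add(f, g):
        return {p: f[p] + g[p] for p in points}

    def mul(f, g):
        return {p: f[p] * g[p] for p in points}

    # Pointwise multiplication is commutative: f*g == g*f.
    assert mul(f, g) == mul(g, f)

    # Each point can be recovered from the algebra as an "evaluation"
    # homomorphism ev_p(f) = f(p), which respects + and *. This is the
    # sense in which the geometry hides inside the algebra (its spectrum).
    def ev(p):
        return lambda h: h[p]

    assert ev("p1")(mul(f, g)) == ev("p1")(f) * ev("p1")(g)
    ```

    In Connes' framework one then replaces such a commutative algebra by a slightly noncommutative one (e.g. matrix-valued functions), where pointwise products no longer commute, and the "space" survives only as the spectrum.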

    The good thing is that the algebraic object can be modified ever so slightly, and then its spectrum will consist not merely of, say, the surface of a donut, but, so to speak, of a very fussy surface of a donut, which will only allow functions with a certain symmetry (like U(1)SU(2)SU(3), though perhaps not exactly that, some symmetry) to be defined on it.

    There is no more manifold, no set of points papered with overlapping maps, and the symmetry is in some sense "dyed in the wool" intrinsic in the spectrum of this abstract algebra system, the spectrum which would have represented an ordinary spacetime if it hadn't been generalized or tweaked.

    So Connes and Chamseddine figured out how to mimic a spacetime with the SM living in it (in this abstract way), and in the process they got MORE: they were able to extract predictions that you couldn't get from the ordinary SM.
    They goofed on one of their first predictions (as they explain in the "Resilience" paper); they overlooked an important detail. But that happens. They found their mistake, and now they have a "post-diction" instead of a prediction. The main thing is that their way of realizing the SM gives something MORE than just the Standard Model.

    Also it is kind of elegant.

    One of the regulars at BtSM, arivero, knows a lot about NCG (or spectral geometry, as it is coming to be called) and can, I think, explain how you rig the algebra so that the U(1)SU(2)SU(3) SM arises from it. IIRC he studied this some years back, around 2007 and 2008.

    Here is the first reference in the "Resilience" paper:
    [1] Ali H. Chamseddine and Alain Connes, Why the Standard Model, J. Geom. Phys. 58 (2008) 38-47.

    Why the Standard Model
    Ali H. Chamseddine, Alain Connes
    13 pages
    (Submitted on 25 Jun 2007)

    "The Standard Model is based on the gauge invariance principle with gauge group U(1)xSU(2)xSU(3) and suitable representations for fermions and bosons, which are begging for a conceptual understanding. We propose a purely gravitational explanation: space-time has a fine structure given as a product of a four dimensional continuum by a finite noncommutative geometry F. The raison d'etre for F is to correct the K-theoretic dimension from four to ten (modulo eight). We classify the irreducible finite noncommutative geometries of K-theoretic dimension six and show that the dimension (per generation) is a square of an integer k. Under an additional hypothesis of quaternion linearity, the geometry which reproduces the Standard Model is singled out (and one gets k=4)with the correct quantum numbers for all fields. The spectral action applied to the product MxF delivers the full Standard Model,with neutrino mixing, coupled to gravity, and makes predictions(the number of generations is still an input)."

    And here's their second reference:

    [2] Ali H. Chamseddine and Alain Connes, Noncommutative Geometry as a Framework for Unification of all Fundamental Interactions including Gravity. Part I, Fortsch. Phys. 58 (2010) 553-600 http://arxiv.org/abs/1004.0464
    Last edited: Oct 24, 2012
  12. Oct 24, 2012 #11
    So if this new theory is so good that it predicts the SM parameters... then there are no 10**500 "baby universes" with different parameters... why, then, are the only possible parameters so life-friendly?
  13. Oct 24, 2012 #12
    First let me point out that according to Connes, they get the other forces from gravity's extension to the noncommutative part of space-time. The recipe is:
    • Start with an "almost commutative manifold" M x F, where M is Minkowski space and F is a "finite geometry".
    • Define a Dirac operator on M x F.
    • Construct the bosonic part of the "spectral action" for D (a standard formula, motivated by the need for local observables). Connes says that conceptually, this action is "pure gravity", but fluctuations of the metric on M x F are of two types, the familiar sort that are gravitons, and "inner fluctuations" associated with F which give rise to the gauge bosons.
    • Finally, you add another standard term to the spectral action to get the fermions and their interactions.
    This synopsis is based on expositions like this one. I've only skimmed it and can't tell you why this procedure gives you gravity.
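    As a reference point, the action in their papers has the schematic form below (f is a positive even test function, Λ a cutoff scale, J the real structure; I am quoting the standard schematic form, and details differ between papers):

    ```latex
    S \;=\; \underbrace{\operatorname{Tr}\, f\!\left(\frac{D}{\Lambda}\right)}_{\text{bosonic: ``pure gravity'' on } M\times F}
    \;+\; \underbrace{\tfrac{1}{2}\,\bigl\langle J\tilde\psi,\; D\tilde\psi \bigr\rangle}_{\text{fermionic}}
    ```

    Everything - gravitons, gauge bosons, Higgs, Yukawa couplings - is supposed to come out of the single operator D on M x F.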
    It implies a few extra relationships among the SM parameters. So the total number of free parameters is slightly reduced, but there are still about 20 of them. In the NCG framework, most of these are moduli of the Dirac operator, the object used to construct the SM action.

    I have seen a few perplexed online discussions in which people who know conventional particle physics and QFT struggle to understand where these extra relationships come from. Since the 1970s there has been a standard way to think about the action of a renormalizable gauge field theory: it contains all renormalizable interactions consistent with the specified gauge symmetry. These interaction terms have undetermined coefficients, and these are the free parameters of the theory.

    The standard model is a renormalizable theory and it fits this framework. Connes et al have re-expressed the standard model in this new "noncommutative" or "spectral" framework, and they get the extra relations. So the challenge is to understand, from a perspective based on conventional QFT, where these extra constraints on the parameters come from. The fact that gravity is part of the noncommutative construction from the beginning may be related. I have seen string theorists speculate that the noncommutative theory may be a truncation of a Kaluza-Klein model (Lubos Motl) or a perspective on a non-geometric phase of string theory (Urs Schreiber), but what's really clear is that these are speculations, and there's still no rigorous understanding of how this relates to the 1970s-standard framework.

    I've also noticed that the recent papers (Chamseddine and Connes, Estrada and Marcolli) which adjust the NC standard model to get the 125 GeV Higgs, use RG flow equations constructed by people working with the conventional SM, and I don't know if that's OK. They seem to be hypotheses about how the RG flow in the NC SM might work, rather than RG equations derived from the postulates of the NC SM.
  14. Oct 24, 2012 #13
    So we are not saying farewell to the AP, as there are still free parameters...
  15. Oct 24, 2012 #14
    There could also just be non-anthropic arbitrariness.

    But the ultimate foundations of the NC SM are somewhat obscure, e.g. (technical detail) consider the function f appearing in the spectral action which then gets expanded. The function is quite unspecified, all that matters are the first few coefficients, which then contribute to the observed parameters.
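
    Concretely, the expansion being referred to is (schematically, and up to convention-dependent numerical factors) the heat-kernel asymptotics of the bosonic spectral action:

    ```latex
    \operatorname{Tr} f\!\left(\frac{D}{\Lambda}\right)
      \;\sim\; 2\Lambda^{4} f_{4}\, a_{0}(D^{2})
      \;+\; 2\Lambda^{2} f_{2}\, a_{2}(D^{2})
      \;+\; f(0)\, a_{4}(D^{2})
      \;+\; O(\Lambda^{-2}),
    \qquad
    f_{k} \;=\; \int_{0}^{\infty} f(u)\, u^{k-1}\, du
    ```

    Only the two moments f_4, f_2 and the value f(0) ever enter the observed parameters (the a_n are Seeley-DeWitt coefficients of D^2); the rest of f is arbitrary, which is the obscurity being pointed at.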

    In the standard framework of renormalizable field theory, the procedure for theory construction is understood well enough that we can say that the free parameters really are free. In the case of string theory, we can say that specifying the vacuum ought to determine all the "free parameters", but whether there is some deeper principle (anthropic or dynamical) which selects the vacuum or favors a certain type of vacuum is completely unclear, because string cosmology remains an unsettled or even badly-founded subject. But in the case of Connes et al's approach to physics, the foundation is simply obscure and therefore it's hard to say how predictive it becomes in its final form.

    (By the way, there is no consensus or demonstration that eternal inflation works within string theory. Eternal inflation is originally a field-theoretic model, in which different regions of the inflating universe settle into different ground states. Some string critics of the concept, like Tom Banks, seem to favor a sort of holographic cosmic Copenhagenism in which the universe outside our cosmological horizon is just disregarded. I think that's dumb as an outlook - galaxies don't cease to exist when they cross our horizon - but Banks also makes some technical criticisms of the field-theoretic assumptions behind the model of eternal inflation, arguing that they don't apply to string theory. And in general, quantum gravity in de Sitter space is just not worked out, e.g. the existence and meaning of the asymptotic instabilities in Castro and Maloney's latest. For all I know, those instabilities are a sign that eternal inflation is right, they may be the beginning of the validation of the paradigm. I'm just pointing out (1) the possibility that the different vacua of string theory are different "theories" - different superselection sectors - rather than states of the same theory which might be simultaneously realized within the one universe (2) the physical mechanisms behind eternal inflation still have an uncertain status within string theory (3) string cosmology is very much a work in progress (unlike e.g. string perturbation theory), the discussions about it are still highly heuristic, and in the end the theory may dictate an entirely different approach.)
  16. Oct 25, 2012 #15



    The basic question is what "gravity" in the spectral model means: propagating gravitons, or non-perturbative, arbitrarily curved spacetime?
  17. Oct 25, 2012 #16
    Then it is not a TOE yet (by the definition of TOE), as one more level is expected. Hopefully Max Tegmark's MUH is true, so we will be able to construct the turtles all the way down from the TOE; Max calls this "Physics from scratch".
  18. Oct 26, 2012 #17



    I don't agree. A ToE is a theory which explains all known physical phenomena. String theory has the potential to do that.

    Let's take an example: from the SM and its effective or low-energy approximations you can derive the existence of several different phases of matter (gas, liquid, ..., steam, water, ice, ..., iron, ...). The fact that the SM doesn't tell you whether you observe carbon-hydrogen based organisms here instead of ice, iron, ... has nothing to do with limitations of the SM but only with the initial conditions. Therefore it is entirely unclear whether string theory should contain any selection principle that derives the unique vacuum we are living in.

    In addition, I think the above-mentioned conservative definition of a ToE (which requires that it can predict any experimental result related to all known interactions) is widely accepted. Requiring more, i.e. that a ToE ground its own uniqueness and consistency in itself, goes beyond that definition - and is a logical nightmare.
  19. Oct 26, 2012 #18
    I agree with you, but the main point was about "free parameters". A TOE with "baby universes" should not have any free parameters. Of course, it doesn't predict the values of the SM parameters in *our* universe - that might be based on the AP.

    So maybe it is a problem of terminology: parameters which appear to be "free" at the scope of an individual baby universe are not free at the scope of the full multiverse. My point was that at the multiverse level a TOE can't contain any free parameters. Do you agree?
  20. Oct 26, 2012 #19
    To be a potential ToE, it should be proved that ST contains the vacuum we are living in (even if among 10**500 others), which AFAIK is not (yet) the case...
  21. Oct 26, 2012 #20



    No. It may contain some free parameters which you can fix via a small number of experiments.

    Again: it seems that we do not agree on "ToE". A theory describing all known phenomena consistently is a ToE. There can be more than one ToE, and there can be free parameters in a ToE (to be fixed in the above-mentioned sense).

    There need not be a unique ToE; different ToEs may exist, and experiment will select the 'correct' one. It's called 'Theory of Everything' = ToE, not 'Unique Theory of Everything' = UToE.