
Nature of quantum theory of gravity.

  1. May 6, 2009 #1
    Can anybody explain why the theory describing quantum gravity is expected to be discrete (rather than a continuum theory), nonlocal (rather than a local theory), and Lorentz violating (rather than a Lorentz invariant theory)?
  3. May 6, 2009 #2


    Science Advisor

    Regarding discreteness: I don't think that's a must; there are theories using discreteness as input (CDT), others derive discreteness (LQG), and in string theory discreteness does not show up. Discreteness is a well-known way to cure UV / short-distance problems and (possibly) singularities; that's why many people expect something like that.

    Regarding non-locality: why do you think that there must be non-locality? In LQG you can speculate about non-locality, but again, it's not a must; string theory allows a purely local description, as far as I know.

    Regarding Lorentz violation: string theory respects Lorentz invariance in some sense (it does if the chosen background does); I would say that Lorentz invariance is not a fundamental symmetry of nature, it is only manifest in certain solutions.
    Last edited: May 6, 2009
  4. May 6, 2009 #3



    IMHO, the continuum is naturally understood as the limiting case of discrete systems.

    The real numbers are usually constructed as the completion of the rational numbers via limits. But even the axiomatic approach makes use of limits.

    So I would say that even from a purely mathematical point of view, the continuum is less "fundamental" than the theory of integers.

    From my point of view, this is particularly clear when you consider that part of the point of quantum theory is that it is an information theory. And information immediately raises the question of how you count/measure information.

    A continuum measure is far more nontrivial than a discrete one. So any proper definition of continuum models, IMO, needs to be introduced via a procedure, limiting or completion, from more primitive constructs.

    And if you believe that a finite physical system cannot encode infinite information, then it seems plausible enough to me that the continuum must, in any information theory, be emergent in the limit of large systems. In particular, this suggests that if you skip this process and jump straight into the continuum world, certain bookkeeping is completely lost.

    Even if most of what we know is well described by continuum models, I personally think that, information-wise, a discrete starting point is the most natural one. If for no other reason, then to better understand the continuum's mathematical redundancy and abstract from it the physical degrees of freedom.

    This argument is however quite different from that of LQG etc. To each his own :)

  5. May 6, 2009 #4
    Just to note: CDT does not claim that spacetime is discrete. It uses triangulations in the same way a lattice is used for QCD. One can still take the continuum limit in principle; CDT is just a method of computing the path integral. Discreteness in LQG, on the other hand, comes from the actual quantization of spacetime.
  6. May 7, 2009 #5



    Whether spacetime is discrete or not is IMO a more "specific" question, and it can still have somewhat different meanings depending on whether you talk about discrete spacetime event sets or discrete geometries (that are themselves continuous).

    I see this also from another angle, prior to even defining spacetime in a new framework. I personally take the observer and the observation process to be among the more fundamental aspects, and here I see that the physical microstructure distinguishable to the observer (which would also be the container for any kind of state vector) is discrete and bounded. The complexity bound of each observer could be an important parameter, because it constrains all state vectors and provides a hard limit for the normalisations. This should also guarantee that infinities don't show up, because no matter what the information represents (space, time, mass, energy or anything else), the representation structure provides a sort of cutoff intrinsic to each observer.

    In current QM, this is ignored. As I see it, the relation between QM and GR is partly that QM is a theory of measurement, given communication channels, but it doesn't properly analyse what happens at the nodes, and how the communication channels themselves may be affected by the process.

    GR, on the other hand, relates matter/energy and spacetime, which can be pictured as a special case of relating nodes and channels, BUT it's not a measurement theory in the proper sense.

    So, from the abstract ambition of constructing a theory of measurement that also models the observers (in particular, one that explicitly notes how the complexity of an observer constrains what measurements are POSSIBLE), I personally see current models as flawed, though both QM and GR have good partial insights.

    This is why I believe in going back and trying to reconstruct a theory of measurement at a very fundamental level. I even think that gravity will come out as a consistency requirement.

    The idea of taking classical GR, reformulating it according to some choice of variables, and literally "quantizing it" with the same old QM is, I think, the wrong way to go.

    Maybe we need to step back even before the foundations of QM and GR, and rethink. And maybe, rather than taking them both as is and trying to patch them together, reconstruct them both, but this time without ignoring some of the obvious points that should matter to a measurement theory, such as saturation, not of communication CHANNELS, but of communication NODES (these are different things).

    It's clear why, in a particle experiment, the laboratory frame is not easily saturated. But picture the inside view, where the observer is on par with the object it observes.

    Edit: the predictions I envision here are of at least two types.

    1. First, there are general limits on the observation process, where current QM is possibly just emergent in a "massive observer" approximation. In the general case, the theory of measurement is even more indeterministic than QM; in particular, I expect less determinism in the time evolution of the state vector.

    2. The other aspect that I think will provide the key to a possible TOE unification model is that EVEN in the normal case of the massive observer approximation (as in the particle-labframe scenario), if we can gain insight into the "inside view" of the subatomic world, we might understand WHY we have the interactions we have, and probably explain several of the parameters in the particle standard model. The unification idea here is that as you "scale down" the observer's complexity, the observed, distinguishable laws become simpler and simpler, and there should be transitions along the scaling where previously distinguishable interactions become indistinguishable, until at some point there is only one. I think the key to that is the inside view. In this view, the mental question would be something like "what would I do, or what would I be CONSTRAINED to do, if I were a quark?" Maybe there really isn't much of a choice. From the outside view, however, the choices are plenty! That, I think, is why we have the fine-tuning problems etc. This observer scaling would be a completely new approach that would replace, or at least change, the way we currently do and understand renormalisation.

    Last edited: May 7, 2009
  7. May 7, 2009 #6


    Science Advisor

    I think we should ask: what are the reasons that different communities (s.t. – LQG – ...) don't come together? What are the fundamental differences: philosophical questions about how to do science, or simply different approaches and a lack of success?

    How will QG change the way of doing science? Regardless of who is right (and perhaps all existing programs are misguided and plainly wrong!), QG is a new challenge to science. I think it's the first time that we try to develop a new theory without a single experimental hint. Even worse, QG tells us that these new hints may simply not be there, except in regimes and processes that are not tractable experimentally.

    So the only guideline is to develop a new theory that
    1) makes the same predictions as QFT and GR in the regimes where they are applicable
    2) resolves problems in the QG regime where a naive combination of both theories fails

    Of course it could be that new QG effects will show up, but up to now these effects are not there. There are some ideas like non-trivial vacuum dispersion relations, but as far as I know the experts are not really sure whether these predictions rest on solid ground.

    Today s.t. is accused of not producing predictions that are provable or disprovable by experiment. But I think this is a fundamental problem of all QG approaches; it applies to LQG and CDT as well (the prediction that spacetime becomes two-dimensional in the UV is nice, but no better testable than the 10 or 11 dimensions of s.t.).
  8. May 7, 2009 #7



    I agree that we should put the current problems of fundamental physics in the context of science. It is no longer possible to separate the ontology and state of scientifically acquired knowledge from the ontology of the scientific method. This is why I have personally come to hold an evolutionary view of science.

    The natural link between scientific knowledge and scientific methods on one hand, and physical states and physical interaction processes on the other, really does become very clear (IMHO at least) when you ponder a fundamental physical theory as a form of measurement theory. The boundary between SCIENCE (scientists interacting with their environment) and PHYSICS (physical systems interacting with their environment) itself becomes thin indeed. That's how I see it.

    I think that overall we do not understand the depth of this, not even conceptually. But when we do, it could possibly be the basis for the next revolution we need.

  9. May 7, 2009 #8
    No, I can not explain this, since I expect quantum gravity to be a continuous, Lorentz invariant and local theory myself.

    However, I also expect QG to be observer dependent, for the following reason. Every physical experiment is an interaction between a system and an observer, and the outcome depends on the physical properties of both. In particular, the result depends on the mass and charge of the observer. Alas, predictions of QFT do not depend on these quantities, which means that some tacit assumption is made. Clearly, the assumption is that the observer's charge is zero (so the observer does not perturb the fields) and that the observer's mass is infinite (so the observer follows a well-defined, classical trajectory in spacetime; in particular, the observer's position and velocity commute at equal times). This assumption is consistent except in the presence of gravity, where charge and mass coincide: gravitational mass equals inertial mass. Hence QFT breaks down specifically for gravity.
  10. May 7, 2009 #9



    Hello Thomas!

    I fully agree.

    But to try to pinpoint what you mean, if I may ask: what is your opinion about the idea of a bird's view, viewed in a realist sense, that explains the observations made by each observer?

    Can such a bird-level view be observer independent in your view (i.e. in the realist sense; if you DO take the realist view, it exists independent of observation, whatever that means), or would you say that makes no sense?

  11. May 7, 2009 #10
    Very interesting post, actually. Not sure whether I agree. I think that nature is subjective: there is only subjective reality, and objective reality is a useful creation we use to describe the world and give it meaning. Or possibly it is a subtle mix of objectivity and subjectivity. Clearly QM and GR each bring objective reality into question in their own way, but both have limits in which the objective world is returned to us. Possibly the combination of QM and GR will shatter objective reality. But I also have ideas about how things become better defined (more objective) in physics if we consider larger and larger systems. That is, if we write down the theory of a single atom, we are implicitly assuming this atom is in a universe all on its own. This description has many uncertainties associated with it, but it is also an incomplete description of an actual physical atom, since we neglect its environment. My idea is that when we describe its environment to greater and greater accuracy, and thus describe the atom better, these uncertainties are diminished; our description becomes more complete. My guiding principle here is that we humans need to break things down into concepts in order to understand them, but in reality nature works as one: the only true description of reality is an infinite one, and hence any finite description will always be incomplete.

    Yeah, not sure where I'm going with that, but never mind. I think one should be concerned not so much with "the observer" of an experiment as with the whole environment (maybe just a choice of language). Hence any experiment is subject to its environment in an essentially non-local way, in the EPR sense and also in the Wheeler delayed-choice sense (i.e. non-local in a temporal as well as spatial sense).

    Also, I think the 2nd law of thermodynamics is insightful in such matters. If one thinks of entropy as information, then it makes sense that when one makes a measurement one gains some information about the world, and since the observer is a physical system too, his entropy, and hence the universe's, should increase. In the spirit of the 'statistical time hypothesis', I think the increase of information (or entropy) is a meaningful definition of "time". Time as a coordinate, on the other hand, is essentially meaningless.

    BTW, I don't necessarily believe all the ideas in this post; they are just my thoughts.
  12. May 8, 2009 #11



    I don't know Thomas well enough to know his entire vision, but judging from his post above I at least partially agree with him on several points.

    I may be misinterpreting you, but regarding what you say here: I understand your point and it's a common theme, but from my perspective it has a problem, and it's related to what I call the inside view:

    The point is that any consideration always takes place from the point of view of an observer, and this observer can hold (IMO) only finite information. This means that no matter how you measure size, at some point there is a complexity limit to how large a system this one observer can relate to. This is, IMHO, why the bird's view fails.

    Otherwise your idea could give a bird's view, if we consider "the entire universe" to be one observer. Although that is perfectly fine in a way, it does not help other observers. I think this complexity constraint implies an extra uncertainty.

    Sometimes, as with an atom vs an Earth-based laboratory, the asymmetry in complexity is so large that your idea works fine. But I think it doesn't hold in the general case.

    An alternative to your suggested route, as I see it, is the evolving-law and emergent-objectivity view. But the FULL emergence of complete objectivity still runs up against the complexity constraint in my view. So given a finite observer, there is a limit to the objectivity. And this limit has predictable impacts on the observer's actions. This is IMO the key to the unification picture as well.

    If you don't know, you don't know; but the complexity constraint metaphorically means that the box from which you pull your choices gets smaller and smaller, until no more than a few choices possibly fit in the box (a shrinking state space).

    To make sense of this idea, one also needs to explain the ORIGIN of complexity, which I see as closely related to the origin of mass. This too I picture in a game-theoretic way. Life is a game, with stakes. The winner gains stability, and ultimately control and complexity. By knowing your environment, you can ultimately also control your environment and consume it.

  13. May 9, 2009 #12
    Let me explain what I mean by observer dependence at somewhat more length. The main reference is http://arxiv.org/abs/0811.0900 . There is also a sequel with one number higher, but I am rather unhappy with that paper and will write a better one once I manage to assemble enough motivation.

    The introduction of observer dependence amounts to little more than making a Taylor expansion of all fields. The point is that a Taylor series

    f(x) = sum_m 1/m! f_m (x-q)^m

    depends not only on the Taylor coefficients f_m, but also on the expansion point q, i.e. the observer's position. To each function f(x) we can associate many Taylor series, parametrized by q, whereas the function defined by a Taylor series is unique in good cases. The Taylor coefficients for different base points are of course related, but to make a Taylor expansion we must commit to some definite base point. It then makes sense to talk about *the* observer carrying *the* clock, etc.
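    The base-point dependence is easy to see numerically. A minimal sketch (my own illustration, not from the paper; f = exp is just a convenient choice, since its m-th Taylor coefficient around any base point q is exp(q)):

```python
import math

def taylor_eval(q, x, terms=30):
    # f(x) = exp(x): its m-th Taylor coefficient around base point q
    # is f_m = exp(q), so the coefficients depend on q even though
    # the series resums to the same function for every q.
    return sum(math.exp(q) * (x - q) ** m / math.factorial(m)
               for m in range(terms))

x = 2.0
for q in (0.0, 1.0, -3.0):
    # different base points ("observers") assign different coefficients,
    # but each series reconstructs the same value f(2) = exp(2)
    print(q, taylor_eval(q, x), math.exp(x))
```

    Each choice of q gives a different coefficient list but the same function, which is the sense in which the description, not the physics, is tied to the observer's position.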

    To quantize a Taylor series we introduce some dynamics for the observer's position. This adds some terms to the Lagrangian, and it is here that the observer's charge e enters. In particular, the observer decouples from the field dynamics if e = 0. The crucial novelty is that the Taylor coefficients f_m are the components of the field measured relative to the observer's position, which is subject to quantum fluctuations. If

    p = Mv

    denote the (non-relativistic) observer's momentum, mass and velocity, Heisenberg tells us that

    [q, v] = i hbar/M.

    If the observer measures her position at some instant, she does not have a clue where she is at the next instant, because her velocity must be completely unknown.
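    For completeness (standard quantum mechanics, not specific to the paper): the commutator quoted above follows from the canonical one, and the induced uncertainty relation shows explicitly why the M → ∞ limit restores a classical trajectory:

```latex
[q, p] = i\hbar, \qquad p = M v
\;\Longrightarrow\;
[q, v] = \frac{1}{M}[q, p] = \frac{i\hbar}{M},
\qquad
\Delta q \,\Delta v \;\ge\; \frac{\hbar}{2M} \;\longrightarrow\; 0
\quad (M \to \infty).
```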

    There are two ways to avoid this quantum uncertainty:
    1. Set hbar = 0. This gives classical physics, e.g. GR.
    2. Set M = infinity. This gives QFT.

    If we keep M nonzero and finite, and consider GR in this setup, there are again two limits:
    1. hbar = 0. This describes GR coupled to a point particle with mass M.
    2. Newton's constant G = 0. This should describe QFT with M as the cut-off scale. Because the observer's mass is infinite in QFT, all relevant energies must be much smaller than M, including the energy of virtual quanta. Hence M acts as a cut-off.

    This makes it very clear why GR and QFT are mutually incompatible: if we let the cut-off scale M go to infinity, the observer interacts with gravity and collapses into a black hole. Not good! To my knowledge, this is by far the most intuitive argument why QFT has big problems specifically with GR.
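    One way to make the collapse argument quantitative (a standard back-of-the-envelope estimate, not taken from the post): a quantum observer of mass M cannot be localized better than its Compton wavelength, and gravitational collapse sets in once that wavelength fits inside its own Schwarzschild radius, which happens around the Planck mass:

```latex
\lambda_C = \frac{\hbar}{M c}, \qquad r_s = \frac{2 G M}{c^2},
\qquad
\lambda_C \lesssim r_s
\;\Longleftrightarrow\;
M \gtrsim \sqrt{\frac{\hbar c}{G}} = M_{\mathrm{P}} \approx 2 \times 10^{-8}\,\mathrm{kg}.
```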

    There is a web of interconnections between three crucial concepts in quantum gravity: locality, diff anomalies, and observer dependence.

    1. Locality.
    Classical GR is a local theory, and so is QFT (in an appropriate sense). What is happening here and now should be best described in terms of local data, and not e.g. in terms of data living on some holographic screen outside our visible dS universe. However, theorems (LOST and others) state that nontrivial correlation functions are incompatible with the space-time diffeomorphism symmetry of GR. This no-go theorem can be evaded by quantum mechanical breaking of this symmetry, i.e. by

    2. Diff anomalies.
    Contrary to popular belief, a gauge anomaly does not automatically render a theory inconsistent. What it does is turn a classical gauge symmetry into a quantum global symmetry, which acts on the Hilbert space rather than reducing it. This may or may not be consistent, depending on whether this action is unitary. The diff anomalies relevant to quantum gravity generate a higher-dimensional generalization of the Virasoro algebra, which was discovered almost 20 years ago. It is not possible to construct representations of this algebra using the fields themselves, because such attempts lead to nonsensical infinities.

    3. Observer dependence.
    Instead, off-shell representations of the multi-dimensional Virasoro algebra can be built from spacetime trajectories in the space of Taylor series. The relevant extensions are functionals of the observer's trajectory (the time evolution of the observer's position), and hence cannot arise in QFT, where the observer is never introduced. This is in accordance with a theorem asserting that there are no diff anomalies in 4D QFT.

    Hence we see that locality, diff anomalies, and observer dependence are closely related. You cannot have one in quantum gravity without buying the whole package.
  14. May 9, 2009 #13
    I find it very difficult to understand what people mean by "discrete space-time". By definition, any physical measurement must involve a physical system made of real particles. By another definition, empty space does not contain any particles. So, logically, you cannot measure anything in empty space, all your measuring devices and detectors are supposed to stay silent. Space (or space-time) is not a physical system that can be observed, so it is impossible to verify in experiment whether it is continuous or discrete. If this question is beyond experiment, then it should be beyond physics.
  15. May 9, 2009 #14



    Thanks! I'll try to skim that paper when I get a chance; I noticed it was quite long, so I won't be able to do it now.

    The first impression is that your motivation and reasoning are different from mine, but I seem to share some of the conclusions regarding the impact of the observer's mass, which makes it interesting for me to look into. Although mass and spacetime are reconstructed in my view, my starting abstraction is a complexity number, or information capacity/inertia. I expect this to be more or less related to mass, but exactly how the observed mass as we know it relates to this is still unclear.

    I need to read it more carefully to see the whole picture and identify your starting points.

  16. May 10, 2009 #15


    Science Advisor

    I think all theories talking about discrete spacetime admit at the same time that this discreteness is not necessarily subject to experiments directly. E.g. in LQG the area operator is not a Dirac observable.

    But there are indirect effects of this discreteness. Some time ago in LQG the vacuum dispersion relation was derived in some semiclassical context. Because of this fundamental discreteness, c = const. changed to a frequency-dependent c.
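    A commonly quoted heuristic form of such a modified dispersion relation (an illustration of the general idea, not the specific LQG result the post refers to) adds Planck-suppressed corrections, so the speed of propagation picks up an energy dependence:

```latex
E^2 \simeq p^2 c^2 \left( 1 + \xi \, \frac{E}{E_{\mathrm{P}}} + \cdots \right)
\quad\Longrightarrow\quad
v_g = \frac{\partial E}{\partial p} \approx c \left( 1 + \mathcal{O}\!\left( \xi \, \frac{E}{E_{\mathrm{P}}} \right) \right),
```

    where E_P is the Planck energy and ξ a model-dependent coefficient of order one; the effect is tiny but could in principle accumulate over cosmological propagation distances.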

    I don't know how serious this was, because in LQG there are still problems in finding the correct semiclassical limit. The Hamiltonian is still not defined rigorously, and the results often depend on the details of the trial states.

    I don't know the current status, but this topic seems to be very interesting to me.
  17. May 10, 2009 #16

    I think introduction of theoretical ingredients that are not directly observable is very dangerous. Before long you can find yourself counting angels on the head of a pin.
  18. May 10, 2009 #17


    Science Advisor


    What about Hilbert spaces, gauge fields, the metric, energy-momentum density, ...

    None of these are directly observable in the strict sense.
  19. May 10, 2009 #18



    Excellent point :)

    However, I do join Meopmunk in the ambition of sticking to observables. But I think if you take that seriously, Hilbert spaces, to mention one thing, ARE subject to the same criticism. This is where I seem to differ with Meopmunk.

    This is in line with my view, where Hilbert spaces must be evolving.

    Relating to my comment in this discussion https://www.physicsforums.com/showthread.php?t=312921&page=2

  20. May 10, 2009 #19
    I agree about gauge fields and energy-momentum density. They are not observable, but they are not fundamental either. They are just formal abstract objects. Theory can be formulated without them. This is best explained in Weinberg's "The quantum theory of fields" vol. 1. His idea is that the fundamental quantities in quantum theory are interacting operators of energy and boost. Quantum fields are just parts of a mathematical trick that allows us to build these interactions without losing the Poincare invariance.

    I agree that Hilbert space, wave functions, Hermitian operators are highly technical things. However, they are deeply rooted in experiment. I would like to draw your attention to the approach called "quantum logic". It basically says that quantum mechanics is simply an analog of the classical probability theory in which certain observations cannot be performed simultaneously. According to this approach, quantum theory can be formulated in terms of observable quantities only, i.e., probabilities. The mathematical language for doing that is the theory of orthomodular lattices. However, this language is unfamiliar to most physicists and difficult to use. So, using Hilbert spaces and (non-observable) wave functions is the price we pay for mathematical convenience.
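    The lattice-theoretic point can be made concrete. A minimal sketch (my own illustration, not from Weinberg or the quantum-logic literature cited above): in the lattice of subspaces of R², with meet = intersection and join = span, the distributive law of classical logic fails, which is exactly what makes the lattice "quantum":

```python
from fractions import Fraction

# Subspaces of R^2: the origin, lines through the origin, or the whole plane.
ZERO, PLANE = "0", "R^2"

def line(a, b):
    # A line through the origin in direction (a, b), normalized by its
    # slope so that equal lines compare equal.
    if a != 0:
        return ("line", 1, Fraction(b, a))
    return ("line", 0, 1)

def meet(x, y):
    # Lattice meet = intersection of subspaces.
    if x == y:
        return x
    if x == ZERO or y == ZERO:
        return ZERO
    if x == PLANE:
        return y
    if y == PLANE:
        return x
    return ZERO  # two distinct lines intersect only at the origin

def join(x, y):
    # Lattice join = span of the union.
    if x == y:
        return x
    if x == PLANE or y == PLANE:
        return PLANE
    if x == ZERO:
        return y
    if y == ZERO:
        return x
    return PLANE  # two distinct lines span the whole plane

A, B, C = line(1, 1), line(1, 0), line(0, 1)
lhs = meet(A, join(B, C))           # A AND (B OR C): the line A itself
rhs = join(meet(A, B), meet(A, C))  # (A AND B) OR (A AND C): the zero subspace
print(lhs == rhs)  # False: distributivity fails in this lattice
```

    In a classical (Boolean) event lattice lhs and rhs would always coincide; the failure here is the structural fingerprint of incompatible observations.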
  21. May 10, 2009 #20



    Yes, I think that is the best description from the human point of view. And in this view, no one can deny, I think, that our abstractions are de facto evolving, because human science and experimenting have evolved in the past and continue to do so, and with them our worldview.

    To understand the emergence of Hilbert spaces at the human level amounts to understanding the EVOLUTION of human science and the development of physics, which has a history of theory evolving through feedback from experiment.

    (This is a similar point Smolin makes in arguing in favour of his evolving law.)

    So I think your answer is correct.

    *But* unless one thinks that this is only a human abstraction, and that it's not a problem for the physicist, one would keep questioning and ask whether the abstractions and state spaces, described by anything but a human, such as a physical inside observer, would also be evolving. Because after all, physical systems that are non-human behave AS IF they actually somehow also work with similar state spaces.

    Can we describe this process physically? What do we get if we treat "experiments" and "measurements" on a similar footing? Could "experimentally rooted" simply be thought of as "evolutionarily favoured or selected"? I think so.

    Then it seems the missing link here is to understand our own history, and how our present and our future relate.

    A scientist has reasons, rooted in his experimental history, for holding certain abstractions. We seem to agree there.

    Either you think an analysis of that takes us into the psychology of scientists' brains (as Popper did) and is thus not relevant to science.

    Or you think, like I do, that this is exactly the kind of thing we need to understand in depth.

    If you take the view of QM that it is about what an observer can measure, it doesn't seem to have much to do with humans. Instead, shouldn't we ask how the Hilbert spaces of the environment observed by a piece of matter are encoded in the matter itself? And how these respond to each other?

    As far as I know, the usual quantum logic solves none of the mentioned problems. It is more or less an equivalent way to introduce QM (the differences are technical more than fundamental, IMO). The same question applies to this logic: what is its origin?

    I think there is an answer to that (yet to be nailed down). And the answer is the same as to how Hilbert spaces emerge.

    I expect that, in an evolutionary perspective, quantum logic would probably prove more fit than classical logic, and that observers evolve quantum-logic behaviour since it makes them more fit. The question I ask is how to describe and understand this abstract evolving logic in detail.
