
Quantum Gravity?

  1. Feb 3, 2008 #1
    Ok, I am by no means an expert in this field. I've just read a few books about it (popular science books), and I'm going to begin a physics major next year. I've got a few questions though:

    1. There are many different theories of quantum gravity. What makes physicists think one might be more correct than another? It seems to me like they are shooting in the dark. When one looks at the evolution of science, what you see isn't educated guesses but reasoning based on empirical data. Like I said, I'm not too educated on QG, but it seems to me like this is not what's happening here - feel free to tell me that I'm completely misinformed.

    2. What makes us believe there is such a thing as quantum gravity? Is it just because we think that it should be that way? Is it because QM and GR are incomplete or make predictions that don't appear to be correct? Is there a chance that there is no QG or ToE?

    3. Are there any books which deal with the above questions? I was thinking Lee Smolin's newest book might.

    Sorry if I've insulted anyone who has devoted their lives to this sort of research; I too wish to study physics and possibly quantum gravity. I am merely seeking a greater understanding of why the search is on.

    Also sorry if this sort of thread is commonplace. I'm sure I've got more questions I just can't seem to think of them right now.
  3. Feb 4, 2008 #2


    User Avatar
    Science Advisor
    Gold Member
    Dearly Missed

    At the moment my favorite QG paper is one that is part wide-audience and part technical. It is a November 2007 preprint by Renate Loll.
    You could get quite a bit out of it, especially the first 5 or 6 pages of introduction, I think.

    It is free for download. Here is the abstract, or summary page. It has a link to the PDF.

    The Emergence of Spacetime, or, Quantum Gravity on Your Desktop
    R. Loll
    21 pages, 11 figures, write-up of plenary talk at GR18, Sydney, July 2007
    (Submitted on 2 Nov 2007)

    "Is there an approach to quantum gravity which is conceptually simple, relies on very few fundamental physical principles and ingredients, emphasizes geometric (as opposed to algebraic) properties, comes with a definite numerical approximation scheme, and produces robust results, which go beyond showing mere internal consistency of the formalism? The answer is a resounding yes: it is the attempt to construct a nonperturbative theory of quantum gravity, valid on all scales, with the technique of so-called Causal Dynamical Triangulations. Despite its conceptual simplicity, the results obtained up to now are far from trivial. Most remarkable at this stage is perhaps the fully dynamical emergence of a classical background (and solution to the Einstein equations) from a nonperturbative sum over geometries, without putting in any preferred geometric background at the outset. In addition, there is concrete evidence for the presence of a fractal spacetime foam on Planckian distance scales. The availability of a computational framework provides built-in reality checks of the approach, whose importance can hardly be overestimated."

    By way of comparison, here is a more technical paper from the same group of researchers that came out about the same time.

    Planckian Birth of the Quantum de Sitter Universe
    J. Ambjorn, A. Gorlich, J. Jurkiewicz, R. Loll
    10 pages, 3 figures
    (Submitted on 17 Dec 2007)

    "We show that the quantum universe emerging from a nonperturbative, Lorentzian sum-over-geometries can be described with high accuracy by a four-dimensional de Sitter spacetime. By a scaling analysis involving Newton's constant, we establish that the linear size of the quantum universes under study is in between 17 and 28 Planck lengths. Somewhat surprisingly, the measured quantum fluctuations around the de Sitter universe in this regime are to good approximation still describable semiclassically. The numerical evidence presented comes from a regularization of quantum gravity in terms of causal dynamical triangulations."

    My impression is that the first paper (a kind of survey overview by one author) is going to be accessible to you----or parts of it will be. Usually the first few pages at the beginning and then the conclusions section, about one page at the end, are the most accessible. If you get interested in this approach, I can tell you a lot about it and save you having to dig so much out by yourself.

    This program is among the most advanced and interesting in QG research. The first paper was in 1998 (Ambjorn and Loll), so it is just under ten years old.
  4. Feb 4, 2008 #3



    Read the first 4 or 5 pages of Loll's paper. She gives reasons. One reason is there is no extra clutter, no extra assumptions-----no strings, no branes, no extra dimensions, no loops, no spin networks, no minimal length. It uses a triangulation but doesn't depend on the triangulation: the size of the building blocks goes to zero. It is a straightforward Feynman path integral method, except that the path cruises through the set of possible geometries of space. It runs in a computer. The results are fascinating.

    GR breaks down at the BB and in BHs. A singularity is a breakdown of the theory. QG gets past those places. LQG is the best developed in that department. A quantum theory of spacetime geometry is emerging. It will resolve the classical singularities where GR fails, and it will probably find fundamental degrees of freedom at a level where matter and geometry are the same thing-----matter as a facet or aspect of geometry----so we will have some answers to the longstanding puzzle of why matter affects geometry, how they interact. I'm fairly certain about this at a general level. I'm not suggesting one approach will win out over another; maybe several approaches to QG will prove closely related or equivalent. Maybe all will fail and some new approach will surface. But something does seem to be taking shape in the general area of background-independent QG.

    It's a good book. Definitely a must read.
  5. Feb 4, 2008 #4
    Thanks a lot for the responses marcus, I will certainly take a close look at those papers when I have a bit more time, though I am finding it a bit difficult to understand parts of the first page! I think I understand the message being conveyed; what bugs me is all of these words like non-perturbative, Lorentzian metric, etc. I hope I'm not actually supposed to know this stuff right now!? I look it up on Wikipedia and it doesn't help too much, as I'm presented with mathematical concepts and formulas I'm just not familiar with.
  6. Feb 4, 2008 #5



    You did good! No, you are not supposed to already know technical terms like "non-perturbative"! Maybe I should apologize for suggesting a way-too-hard paper. But I am glad you got that exposure.

    I think now it's up to me, if we want to continue the conversation, to paraphrase what Loll is talking about in less technical language.

    Or else to say in general terms why I like her approach. In fact that is more in line with your question #1, and would also be the easiest for me (as long as you are interested in that kind of explanation.)
  7. Feb 4, 2008 #6



    First a word of advice. My impression is that condensed matter physics might have better career prospects. Also there is something of a boom going on in astrophysics, with new instruments both ground-based and in orbit (Auger, GLAST, MAGIC, Planck). Astrophysics offers information both about the universe and about fundamental physics, gathered by instruments which exploit new technologies and cost less than accelerators and colliders.

    I would say that QG is something to know about, to keep in one's peripheral vision, but not something to delve into as a US undergraduate. As far as I know, string theory has been something of a disappointment---output and citations began to dwindle as early as 2003, and string faculty jobs are being cut back (according to a 2007 HEPAP report). Admittedly, there are newer non-string approaches to QG which are on the upswing. (They have obscure names like CDT, QEG, LQC, NCG. Sorry about all the acronyms.) But a lot of the non-string QG scene is outside the US----Canada, UK, Europe. Loll's group, for instance, is mainly at Utrecht in Holland. Rovelli's group is at Marseille and several other French universities. The QEG people are at Mainz and Trieste. And the number of people involved is very small: nonstring QG has at most several hundred people worldwide, counting all the different approaches.

    I'm glad you express an interest in QG and give me a chance to talk about it, but I don't want to divert you from the main business of a solid undergrad physics major, hopefully with specialization in some line like astrophysics or condensed matter or wherever the jobs are! Other people here at PF can provide knowledge and guidance on that score.

    Anyway here's why I like Jan Ambjorn and Renate Loll's CDT approach.
    It is totally minimal. They don't make up a lot of extra dimensions and garbage. And they get it to run in the computer.

    They get little universes to pop into existence, grow, shrink, and disappear. The universes are in a sense random because they arise and evolve according to a quantized General Relativity rule, and so they are different each time. The researchers use a Monte Carlo method which means they produce thousands of little universes at random, study and measure them, and take averages.

    An interesting aspect of these simulated universes, their dimensionality, can be measured by generating one, choosing a point in it, and seeing how radius and volume are related around that point. The dimensionality is not predetermined. It is a quantum observable (like the position and momentum of a particle). The dimensionality will normally depend on the scale at which you look. At larger scale the spatial dimension will be 3D and the spacetime dimension will be 4D, just as one would expect. At smaller scale the geometry is more like a foam or a fractal, rather than a smooth continuum. Indeed at smaller scale the dimensionality may not be an integer and may be considerably less than 3D.
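    To make the radius-volume idea concrete, here is a toy sketch (my own illustration, not the Utrecht group's code): treat a space as a graph, count how many points lie within graph distance r of a chosen point, and read off an effective dimension from the scaling V(r) ~ r^d. On a periodic 3D grid the estimate comes out somewhat below 3 at small radii (discreteness effects) and approaches 3 as r grows.

```python
import math
from collections import deque

def ball_volumes(neighbors, start, r_max):
    """Cumulative volume V(r): number of nodes within graph distance r
    of `start`, found by breadth-first search."""
    dist = {start: 0}
    queue = deque([start])
    shells = [0] * (r_max + 1)
    shells[0] = 1
    while queue:
        u = queue.popleft()
        if dist[u] == r_max:
            continue
        for v in neighbors(u):
            if v not in dist:
                dist[v] = dist[u] + 1
                shells[dist[v]] += 1
                queue.append(v)
    vols, total = [], 0
    for s in shells:
        total += s
        vols.append(total)
    return vols

# Toy "universe": a 16^3 periodic grid, so the true large-scale dimension is 3.
N = 16
def grid_neighbors(p):
    x, y, z = p
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        yield ((x+dx) % N, (y+dy) % N, (z+dz) % N)

vols = ball_volumes(grid_neighbors, (0, 0, 0), 6)
# Effective dimension from V(r) ~ r^d between two radii:
r1, r2 = 3, 6
d_est = math.log(vols[r2] / vols[r1]) / math.log(r2 / r1)  # about 2.6 here
```

    At these small radii the estimate is about 2.6 rather than 3; the discreteness corrections die off as the radii grow, which is why such measurements have to be read off at the right scales.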
    Another way the dimensionality of these little universes can be measured is by stopping one and running a diffusion experiment or random walk in it, starting from some arbitrarily chosen point.

    The method used in CDT (Loll's approach) is essentially the same as the Feynman path integral way of studying the motion of a particle. With the path integral, you approximate using piecewise linear paths----line segments----and make a weighted sum of all possible jagged paths. Then you let the length of the segments go to zero.
    With CDT the triangulated spacetime geometry is analogous to the jagged piecewise linear path-----the zigzag chain of line segments.
    Again there is a weighted sum. Again you let the size go to zero.

    You may have heard that in the Feynman path integral picture you think of the particle as exploring all possible ways of getting from A to B. (The amplitude weighting is such that, when it's all added up, the really bad ways tend to cancel out.)
    What happens here is closely analogous. A spacetime is like a path through possible 3D geometries, so you can think of the universe as exploring all possible ways of getting from spatial geometry A to spatial geometry B. There is an amplitude weighting attached to each way. The initial and final geometry states A and B can be minimal---that's how it is in their Monte Carlo runs.
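    Here is a minimal sketch of the "weighted sum over jagged paths" idea. It is my own illustration, not the actual CDT sum: I use a Euclidean weight exp(-S) for a free particle (the same Wick-rotated weighting that makes Monte Carlo sums tractable). Every piecewise-linear path from A back to A in eight steps gets a weight set by its discretized action, and the weights are summed:

```python
import itertools
import math

def euclidean_path_sum(n_steps=8, dt=1.0):
    """Sum exp(-S) over all piecewise-linear paths that start and end
    at x = 0, taking steps dx in {-1, 0, +1}.  S is the discretized
    Euclidean action of a free unit-mass particle: S = sum dx^2/(2 dt)."""
    total = 0.0
    for steps in itertools.product((-1.0, 0.0, 1.0), repeat=n_steps):
        if sum(steps) != 0.0:          # keep only paths returning to start
            continue
        S = sum(dx * dx / (2.0 * dt) for dx in steps)
        total += math.exp(-S)
    return total

Z = euclidean_path_sum()  # the straight (all-zero) path contributes weight 1;
                          # jagged paths contribute less, the jaggier the less
```

    In CDT the "path" is a whole triangulated spacetime geometry and the action is the Regge form of the Einstein-Hilbert action, but the structure of the sum is the same.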

    I hope I haven't made things more confusing! If I have, I apologize. Please just overlook the blunder and don't worry about what is poorly described or incomprehensible.
    The main thing is that the Utrecht people have something working that begins to look like quantum gravity. That is, a quantum geometry which evolves according to a quantized version of General Relativity----(to speak in gibberish: a Feynmanesque path integral using the discrete Regge form of the Einstein Hilbert action :biggrin: ). The reality is simpler than it sounds: they are beginning to have a quantum gravity (in other words, quantum spacetime geometry) that you can run in a computer and study what happens with, at various scales. It begins to look like quantum gravity ought to look-----in my humble view.

    Especially impressive to me is the foamy or fractal structure at Planck scale----smoothing out to a conventional continuum appearance at larger scale.

    I think this is how, some 40 years ago, John Archibald Wheeler (Feynman's thesis advisor at Princeton) thought it ought to look.

    That's good. And the minimality is good: the fact that their approach is barebones, no extra junk, no extra dimensions----just the absolute most simple straightforward path integral quantization of spacetime geometry.

    Their using triangle building blocks corresponds to the series of line segments in Feynman's original path integral.
    Keeping track of all the building blocks does lead to a lot of bookkeeping, but eventually you let their size go to zero. It was just a temporary way to finitize the problem.

    Well, that sort of responds to your #1 question. That is why someone who likes the CDT approach might like it. I could also tell you why I like Loop Quantum Cosmology. And someone else who has a favorite QG might contribute another part of the response.

    I like LQC (loop quantum cosmology) because it gets rid of singularities and runs the model back before the big bang. That is another thing that quantizing spacetime geometry (i.e. quantizing gravity, or Gen Rel) can be expected to do. So it is another approach that seems to be working out. If you want more detail on that, let me know.
  8. Feb 4, 2008 #7
    Yeah, that's the impression I get as well. QG just happens to be one of many fields I'm very interested in; I'm also really interested in astrophysics.

    I'm curious about this - so how are they producing these (obviously with a computer)? To better state what I mean, are they just giving these universes random constants or something? I don't really get this.

    I don't understand what you mean by the bolded part. I understand what a dimension is, I think: a degree of freedom of movement. What I don't understand is how it cannot be predetermined. I just don't even know what that means.

    Also, when you say it becomes like a foam or fractal - do you mean it is sort of homogeneous?

    This approach seems a bit like finding a correlation coefficient between data and, say, a line segment, but in three dimensions (sorry if that makes absolutely no sense :tongue:). Is a weighted sum sort of an average? You said they pick an arbitrary point in the 'universe' and run the diffusion experiment, but it seems to me (probably because I don't understand it right) that there would be infinite paths?

    I'm sorry, still pretty confused on this.

    Yeah, for this reason alone I can see why you (and others) might like this theory - it seems more intuitive than others (I just can't begin to understand higher dimensions). So does this theory make predictions that might be tested experimentally? Does this deal with the problems of a singularity (what they are I don't really know)?

    I understand that it is probably really difficult to explain all these things to an ignorant (right now!) person like me and I thank you for your patience and help. It's great to talk to an actual physicist!
  9. Feb 4, 2008 #8



    It's better than that.
    Each universe assembles itself by randomly shuffling some number N (like 1/3 of a million) of little tetrahedron-sided building blocks,
    and it has a weight that reflects how well it conforms to Gen Rel (the Regge version that works with triangulated spaces).

    To get the next universe, their program shuffles the blocks a million times according to randomly chosen moves which rearrange the blocks---the move probabilities reflect Gen Rel, making some rearrangements more likely than others.
    After a million rearrangement moves, they consider that they now have a new universe, chosen at random, always according to a quantized version of Gen Rel (technically the Regge version that applies to triangulated spaces).
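    The standard way to get "moves whose probabilities reflect the action" is a Metropolis-style Monte Carlo update. Here is a generic sketch (my own toy, with a made-up "roughness" action standing in for the Regge action): propose a local rearrangement, and accept it with probability min(1, exp(-dS)), so configurations with lower action are visited more often.

```python
import math
import random

def metropolis_step(state, action, propose, rng):
    """One Metropolis update: accept a proposed local move with
    probability min(1, exp(-dS)), where dS is the change in action."""
    candidate = propose(state, rng)
    dS = action(candidate) - action(state)
    if dS <= 0 or rng.random() < math.exp(-dS):
        return candidate
    return state

# Toy stand-in for the Regge action: a list of integer "heights" whose
# action penalizes roughness between neighbours.
def action(h):
    return sum((h[i] - h[i - 1]) ** 2 for i in range(1, len(h)))

def propose(h, rng):
    new = list(h)
    new[rng.randrange(len(h))] += rng.choice((-1, 1))
    return new

rng = random.Random(42)
state = [rng.randrange(-5, 6) for _ in range(20)]
initial_action = action(state)
for _ in range(20000):
    state = metropolis_step(state, action, propose, rng)
final_action = action(state)  # after many sweeps, much smaller than initial
```

    In the real simulations, as I understand it, the local moves reglue adjacent simplexes and the weight comes from the Regge action, but the accept/reject logic has the same shape.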

    This is an intelligent question. When space and time arise from the most fundamental constituents, even dimensionality itself is a quantum observable! Getting it right is non-trivial! An early form of the triangulation path integral method was tried in the 1990s, between 1991 and 1998, and it kept giving degenerate universes with the wrong dimensionality. Either the dimension was too low----the blocks stuck together in a feathery, branchy way that leads to dimension 1 or 2. Or the blocks would pile on each other like sardines and the dimension would be unbounded. One block might be in direct contact with 40 others, so the volume would go up as the 40th power of the radius, characteristic of dimension 40.

    This can happen because the universes are autonomous. They are not assembled within some preexisting surrounding 4D space! That would be cheating---it wouldn't be fun or realistic either. Think of building a surface out of equilateral triangles, but instead of only 6, you can have 7 equilateral triangles come together at one point. You see it warps out of the flat 2D. Now what about 8 or 10 or 20?

    The geometry of these things can be highly irregular. (Like the jagged paths in a Feynman path integral can be wildly jaggy.)

    It is important to allow this. Their method is mathematically sound partly because they don't cheat or cut corners. They allow the blocks freedom in how they stick together.

    So the dimensionality was NOT coming out right for a long time. Finally in 1998, with the first CDT paper of Loll and Ambjorn, they got some control on it. But they still did not get 4D to work in the computer until later. Finally in 2004 they got the first thing that actually turned out 4D. WHEW!

    Ever since then CDT has been progressing nicely, with results piling up. I guess you could say that 2004 was a CDT breakthrough. Getting it to turn out 4D was non-trivial.

    Because of Heisenberg uncertainty, one expects the structure of space to be extremely chaotic (un-smooth, the opposite of smooth) at Planck scale. Quantum observables like area, angle, volume, the location of something, etc.---uncertainties about all these trade off; they can't all be controlled at once. So something can look like a smooth continuum or smooth manifold at OUR scale and be seething chaos at much smaller scale. Even the dimensionality may not be well-defined, at least in the usual way.

    Perhaps I shouldn't have mentioned diffusion. Running a diffusion experiment in one of the universes is just a way of evaluating its dimension. The higher the dimension the sooner a random walker gets lost and the less likely he is to ever return home. The dimension you get by measuring this way is called the "spectral dimension". One of their papers is about that.
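    The spectral dimension measurement is easy to mimic in a toy setting (again my own sketch, not their code): release many random walkers from a point, record how often they are back at the start after t steps, and use the fact that on a d-dimensional space the return probability falls off as P(t) ~ t^(-d/2).

```python
import math
import random

def return_probability(dim, t_max, walkers=20000, seed=1):
    """Fraction of random walkers on a `dim`-dimensional lattice that
    are back at the origin after each number of steps up to t_max."""
    rng = random.Random(seed)
    returns = [0] * (t_max + 1)
    for _ in range(walkers):
        pos = [0] * dim
        for t in range(1, t_max + 1):
            pos[rng.randrange(dim)] += rng.choice((-1, 1))
            if not any(pos):           # walker is back at the origin
                returns[t] += 1
    return [r / walkers for r in returns]

# Spectral dimension from the scaling P(t) ~ t^(-d_s/2):
P = return_probability(dim=2, t_max=64)
t1, t2 = 16, 64
d_s = -2 * math.log(P[t2] / P[t1]) / math.log(t2 / t1)  # close to 2
```

    In the CDT papers the walk diffuses through the simplexes of a simulated quantum universe instead of a fixed lattice, and the striking result is that the spectral dimension runs with scale: about 4 at large distances, dropping toward about 2 near the Planck scale.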

    I'm a retired mathematician who likes to watch progress in fundamental physics (the study of what spacetime and matter arise from). It is an exciting spectator sport.

    So far, LQC (loop quantum cosmology) is ahead of Loll and Ambjorn CDT in the singularity department. LQC researchers may have discovered how to resolve the cosmological and big bang singularities. It is not conclusive but it works in many cases they have studied.

    A third contender is an idea that Lee Smolin's group is working on---including matter as a facet of quantum geometry. It is a strange idea. I don't know how it will work out. A young co-worker of Smolin just gave a seminar talk about it, which is online. I started a thread about it. Yidun Wan. Matter as braids in spin networks.
  10. Feb 5, 2008 #9
    So if I'm understanding this correctly:
    *The "block" represents matter
    *The way the blocks stick together determines the dimensionality
    *The way the blocks stick together is random

    But it seems like if the blocks stick together at random, that means the dimensionality is random, so is that why they kept getting the wrong dimensionality? Did the researchers have to create some sort of rule which made it so it wasn't completely random, so that the universes would have the appropriate dimensionality?

    This is fascinating stuff, I can't wait until I can get a deep understanding of it. But I suppose that's a ways off and I'll have to enjoy my undergrad years first. If I really want to understand this sort of stuff, is it a good idea for me to do a double major in math/physics? Because there does seem to be an enormous amount of math involved. I'm a little bit worried I'm not smart enough to understand this stuff, but I guess I'll have to wait and see.
  11. Feb 6, 2008 #10



    That's right.
    They introduced a causal ordering in the way the blocks are assembled.

    The blocks are chunks of spacetime.
    They are 4D analogs of an equilateral triangle, or of a regular tetrahedron.
    The technical term for this 4D thing (analogous to a triangle) is a 4-simplex.

    In 1961 an Italian mathematician named Tullio Regge published a paper showing how to do General Relativity without coordinates, where he divided the spacetime continuum up into 4-simplexes. A 4D continuum divided up into 4-simplexes can be called a TRIANGULATION (by analogy with dividing a plane or smooth surface up into triangles).

    This eventually gave rise, in the 1990s, to an approach to quantum gravity called "simplicial quantum gravity" or "dynamical triangulations", where you have all the triangles be the same shape and size------or all the 4-simplexes be the same shape and size.
    And in this approach curvature is measured entirely by counting how many triangles meet at the places where they come together.

    This is probably the hardest thing to understand about the CDT approach. It is hard to imagine tetrahedrons fitting together so that more meet at an edge than you would expect in ordinary flat 3D space. And it is even harder to imagine how 4-simplexes can fit together with more meeting at some junction than would give you flat 4D space.

    But it is possible to imagine 7 instead of 6 equilateral triangles meeting at a point, and it gives you a kind of conical bump. So curvature can be approximated by how many simplexes meet at the junctions. And the Einstein equation, which is about curvature, can be translated into something about COUNTING simplexes where they are glued together.
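    In the 2D picture, this "curvature by counting" is just the deficit angle: n equilateral triangles around a vertex use up n*60 degrees, and whatever is left over from 360 degrees is the curvature concentrated at that point. A few lines make it concrete (Regge calculus generalizes exactly this bookkeeping to 4D, where the deficit sits on triangles instead of points):

```python
import math

def deficit_angle(n_triangles):
    """Deficit angle (in radians) at a vertex where n equilateral
    triangles meet: 2*pi minus the angle actually used up.
    Positive -> cone-like bump (positive curvature),
    zero     -> flat,
    negative -> saddle-like excess (negative curvature)."""
    return 2 * math.pi - n_triangles * (math.pi / 3)

flat   = deficit_angle(6)  # exactly six triangles tile the flat plane
cone   = deficit_angle(5)  # pi/3 left over: a conical point
saddle = deficit_angle(7)  # -pi/3: too much angle, the surface must warp
```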

    So a number of people, including Ambjorn, could see already in 1991 or 1992 that they had a Dynamical Triangulations way that might work to quantize spacetime geometry, i.e. to quantize General Relativity.

    But it kept not working----giving the wrong dimensionality, either too big or too small---until 1998, when they tried organizing the simplexes into layers according to what could influence what---it was a causal ordering, so they called it causal dynamical triangulations (CDT).

    Introducing that layering, or causal ordering, is just what you described as "some sort of rule which made it so it wasn't completely random", so that eventually after 1998 the dimensionality started to come out right. At first they studied toy cases, like 2D gravity with one space and one time dimension, and 3D gravity where space is supposed to be two dimensional. Finally in 2004 they did the full 4D case.
  12. Feb 7, 2008 #11
    Amazingly enough, I sort of get what you are saying now, as well as the first part of the first paper (up until they start discussing the path integral). What is to be gained by repeating this same experiment, i.e. reshuffling the blocks and seeing how they order themselves (I'm not sure if this is exactly what they are doing)? I suppose that last question might stem from a general lack of understanding of why a computer model is necessary at all (besides seeing if your mathematical model reflects physical reality). Also, you said earlier that "the move probabilities reflect Gen Rel making some rearrangements more likely than others." Could you maybe elaborate on this - are these probabilities due to the causal ordering, or are they due to trying to unify GR and QM, so now GR is probabilistic just like QM? Sorry about the confusion, it's my bad not yours.

    Out of curiosity, is this the only group using Regge's simplex 'definition' of GR?
  13. Feb 7, 2008 #12
  14. Feb 7, 2008 #13
  15. Feb 10, 2008 #14



    I've always had a strong feeling that different people have quite different approaches to this. Which approach will succeed is yet to be seen.

    I'm just an interested amateur, but for me the motivation to learn about this is based on an internal subjective observation: my understanding of my own environment and the world I live in is not entirely consistent. This generates some subjective stress, and encourages me to understand things better and to resolve the inconsistent lines of reasoning that exist in my brain. For the philosophically and theoretically minded, I think this is a common subjective motivation.

    So in this "psychological view" it's not correct that there is no observational basis for the problems. Competing lines of reasoning in my brain are all the subjective observational basis I need to invest in solving this :) I must be easily amused.

    Yes, science is supposedly objective, but the collective is still composed of subjects that communicate.

  16. Feb 10, 2008 #15



    Geoff, thanks for calling attention to this video!
    I watched it a couple of years ago and had forgotten what a good brief introduction it gives to triangulation QG.

    Here is the PIRSA link

    She talks fast, hits only the essentials, makes it as clear as possible within the 50-minute time limit, and as such it is still perfectly up-to-date after 2 years! I just watched the whole thing again and can recommend a repeat viewing.
  17. Mar 22, 2008 #16
    I guess in most cases it's just a matter of personal preference. As an analogous example from history, take the theory of light. Starting from the 17th century, there were the particle theory and the wave theory, founded by Newton and Huygens, and until the beginning of the 19th century there was no empirical way to decide which theory was correct. So, for more than 100 years, it was just a question of personal preference. Until Young demonstrated the interference of light in 1801 and proved light is a wave.

    One counterexample I've already given - the particle theory of light vs. the wave theory of light. Another example is general relativity - Einstein "guessed" the curvature of spacetime; there was nothing empirical in it. And further examples are QED and electroweak theory.

    Take Einstein's GR equations. There's the stress-energy tensor of matter on the one side, and the spacetime curvature tensor on the other. Matter is quantized, and therefore the stress-energy tensor is, too. So, if one side of the equation is quantized, the other needs to be, too. So, a quantization of spacetime curvature is needed.
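    Schematically, the argument above refers to the standard Einstein field equations (I leave out the optional cosmological term):

```latex
\underbrace{G_{\mu\nu}}_{\text{spacetime curvature}}
\;=\;
\frac{8\pi G}{c^{4}}\,
\underbrace{T_{\mu\nu}}_{\text{stress-energy of matter}}
```

    If the right-hand side is a quantum operator (quantized matter), then for the equation to remain consistent the left-hand side should be one too.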

    However, a more non-trivial question is: why should QG be a TOE? Why should a theory that quantizes spacetime curvature be a theory that unifies all types of interactions and all sorts of elementary particles? I'm not sure, but I guess one assumes this due to the experience with other theories that implied symmetries, especially electroweak theory. Until the 1960s, one was searching for a theory of the weak nuclear interaction. One found electroweak theory, with an SU(2) x U(1) symmetry that unified EM and the weak nuclear interaction, and a unification of different types of particles, e.g. the electron and the neutrino.