Starting from Discrete or Continuous?

The discussion centers on the challenges of formulating a viable quantum gravity theory that reconciles the continuous nature of general relativity with the discrete aspects of quantum mechanics. Participants explore the merits of starting with discrete models, such as spin foam networks in loop quantum gravity (LQG) and simplicial structures in causal dynamical triangulation (CDT), versus beginning with continuous frameworks. There is debate over whether gravity must be quantized, with some arguing that singularities in classical gravity suggest a need for a quantum description. The conversation also touches on the implications of using lattice approaches and the significance of dimensionality at the Planck scale, suggesting that spacetime may effectively behave as two-dimensional at very small scales. Overall, the dialogue emphasizes the complexity of integrating quantum mechanics with gravitational theories while questioning the necessity of quantization in gravity.
inflector
I've been looking over quantum gravity threads here for a year or so. One thing keeps puzzling me. It appears to me that difficulty coming up with a viable quantum gravity theory is melding the continuous nature of the general relativity equations with the discrete (i.e. quantized) nature of quantum mechanics.

Nature is clearly both continuous and discrete at the same time.

Besides string theory, it appears to me that the other approaches try to build a gravity model using discrete pieces, spin foam networks and such in LQG, various n-dimensional simplex networks in CDT, etc.

What is the merit of this approach? Why start with quanta and try to build a continuous model rather than starting with a continuous model and then try to build quanta from it? Is there some theoretical benefit that one gains?

It seems that some of the models for quantum mechanics rely on resonance or quantum harmonic oscillators to model quantized energy levels, couldn't this same approach work for a model of quantum gravity? Is there anything that argues against the idea of some continuous substrate with the quanta coming out via resonances?

String theory seems to partially argue from the opposite direction. That one can start with inherently fuzzy notions and get quanta due to different vibrational harmonics and resonances. But string theory also seems to start with discrete particles. There are strings for every particle rather than one giant string, connected network of strings or interconnected mesh of strings.

Has anyone tried to model quantum gravity using interconnected strings (i.e. a multi-dimensional mesh or network)? Or does this end up looking like LQG or CDT as soon as you connect the string together anyway?

Finally, is there anything besides a desire for theoretical purity, that indicates that gravity is quantized in any way? Any observation or phenomena? Is there any reason that would preclude gravity from being a continuous phenomena assuming we could find a model that didn't exhibit the singularities one gets from too much matter in too little space?
 
LQG did not start with discrete objects; the discreteness was derived during the construction of the Hilbert space of spin networks. Only recently did they turn the theory upside down and focus more on the abstract spin networks as basic building blocks.

CDT does something rather similar to lattice gauge theory, except that in lattice gauge theory the lattice is fixed, whereas in CDT the triangulation is fully dynamical. The most obvious benefit is that QG becomes accessible to computer simulations; another benefit should be regularization (cut-off) with finite expressions - where finiteness should be preserved in the continuum limit.

The basic idea is that ad-hoc quantization of gravity on smooth manifolds should become meaningless at the Planck scale due to infinities. So it seems natural to try some discretization.

A very simple reason why gravity should be quantized is the Einstein equation

G = T

T on the right side, representing matter degrees of freedom, is constructed from operators living in some Hilbert space, whereas G is constructed from a smooth manifold. This seems to be contradictory already on the formal level. Trying to modify the equation like

G = <T>

is - afaik - inconsistent mathematically. But I have to admit that I am no expert on these quasi-classical QG theories.
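For concreteness, here is one way to write out the two equations being contrasted, with indices restored (my transcription of the schematic shorthand above, not a quote):

```latex
% Classical Einstein equation: geometry (smooth manifold) = matter
G_{\mu\nu} = 8\pi G \, T_{\mu\nu}

% Semiclassical ("G = <T>") variant: classical geometry sourced by the
% expectation value of the quantum stress-energy operator in a state psi
G_{\mu\nu} = 8\pi G \, \langle \psi | \hat{T}_{\mu\nu} | \psi \rangle
```

One known source of the mathematical tension mentioned above: the Bianchi identity forces \(\nabla^\mu G_{\mu\nu} = 0\), hence \(\nabla^\mu \langle \hat{T}_{\mu\nu} \rangle = 0\), but a measurement that collapses \(|\psi\rangle\) changes \(\langle \hat{T}_{\mu\nu} \rangle\) discontinuously, violating that conservation law.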
 
tom.stoer said:
LQG did not start with discrete objects; the discreteness was derived during the construction of the Hilbert space ...

That's right! LQG was originally formulated using a smooth spacetime manifold. That version or style prevailed more or less from 1990 to 2008.

The discreteness was not a property of space but had to do with what we could measure, and thus with the quantum states of the geometry.

States have to do with our interaction with the system: what we can prepare, measure, and know. The core quantum precept, Heisenberg, is not so much about how the world "is" but about what we can know about the world. Niels Bohr made this point generally in a famous quote. "Physics is not about how Nature is, but what we can say about it."

So the first LQG was to embed something akin to "Wilson loops" into the smooth spacetime manifold, or into a timelike slice of it. Then one begins to do something analogous to measurement: one follows "holonomies" around the loops and sees how a sample frame will tilt and roll and yaw as it goes around. Pretty soon one has discrete spectra of observables.
A Hilbert space with discrete stuff serving as a basis. The "spin networks", which are in a sense just glorified Wilson loops, became the basis of the Hilbert space. By about 1995 the field was no longer about loops but about colored graphs called spin networks.
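The holonomy picture above can be sketched numerically. Below is a minimal toy illustration in Python (my sketch, not from the thread): the links of a small closed loop carry SU(2) group elements, the holonomy is their ordered product, and the gauge-invariant Wilson loop is its trace. The trace equals tr(1) = 2 exactly when the transports commute; the deviation from 2 signals curvature, which is the kind of frame "tilt and roll" being described.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def su2(angle, sigma):
    """SU(2) group element exp(-i * angle * sigma / 2)."""
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * sigma

# Parallel transporters on the four links of a small closed loop
U1, U2 = su2(0.3, sx), su2(0.5, sy)
links = [U1, U2, U1.conj().T, U2.conj().T]

# Holonomy = ordered product of the link elements around the loop
h = np.eye(2, dtype=complex)
for U in links:
    h = U @ h

# Gauge-invariant Wilson loop: the trace of the holonomy.
# It equals 2 only if the transports commute (flat connection);
# here rotations about x and y do not commute, so W < 2.
W = np.trace(h).real
print(W)
```

The trace is gauge invariant because a gauge transformation conjugates the holonomy, and traces are invariant under conjugation; that is why Wilson loops (and their spin-network generalizations) are natural candidates for observables.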

Until around 2008 the predominant way was to have the spin networks embedded, injected into a smooth manifold. So they would live there. But then recently it dawned on some people that they did not need the manifold.

The spin-networks themselves represented what we have measured and know, or what we might in principle measure. THEY ARE THE STATES of our knowledge about the world. And so the manifold is just extra baggage that only makes trouble for us.

The researchers found out how to make the spin-network knowledge Lorentz invariant, so that they would not trip up on their own shoe-laces. And so it went: spin-networks plus, of course, the Hilbert space of all possible linear combinations of them.
This is an egregious oversimplification, Inflector. Just a vague sketch agreeing with what Tom just said.
 
One should separate regularisation procedures from discrete quantum states. In CDT the triangulation is only a regularisation of a continuum. The continuum limit should always be taken to get the physical result i.e. taking the triangle lengths to zero.
This is the same as in lattice QCD, as tom pointed out. The reason lattices are used in both cases is that one cannot use perturbation theory for QCD or gravity. Lattice approaches are non-perturbative.

The reason you can't use perturbation theory in both gravity and QCD is that at some energy the gauge fields/gravitons are no longer propagating degrees of freedom, so you can no longer perturb around the free theory.


So you see this has nothing to do with space-time being discrete; it's just a technical issue of being able to calculate things without perturbation theory.




I don't know what the situation is in LQG, but in that case it seems that areas and volumes are found to have a discrete spectrum. This isn't put in by hand but derived, I believe. On the other hand LQG is clearly non-perturbative, so I suspect that, at some level, regularisation has to come into its formulation.
 
Finbar said:
One should separate regularisation procedures from discrete quantum states. In CDT the triangulation is only a regularisation of a continuum. The continuum limit should always be taken to get the physical result i.e. taking the triangle lengths to zero.
...

An important point to make here, which a lot of people will not realize if you aren't explicit about it, is that the limit as size --> 0 is not a smooth manifold.

Loll quite clearly says that there is no minimal length: you are supposed to imagine taking the size of the 4-simplices to zero. But they generate small 4d universes in the computer and study them, and they find that they are not manifolds - not even in the limit, as well as they can tell by increasing the number of simplices in the simulation.

You can tell that what you get is not a manifold very simply: it has a kind of "fractal" structure at very small scale, so that when you measure the dimensionality (say by taking a random walk in the spacetime) the dimensionality varies continuously with scale down to around 2d.

As I recall it goes down to a bit below 2d, like to around 1.9. To get some pictures and some intuitive explanation by Loll, read their Sci Am article. I have a link in my signature at the bottom of this post. She's a good writer and there is a lot of intuition in it.

Another way to measure dimensionality (at various scales) is to see how the volume of a ball neighborhood increases with the radius. If you don't see how, in a non-manifold topological space, the dimensionality can vary continuously with scale, then you might find Loll's sci am article helpful. Or ask about it here. We've discussed this before. There's probably a 2005 thread.
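The random-walk probe of dimensionality mentioned above can be illustrated with a toy calculation (my sketch, not from the CDT papers). For a diffusion process, the return probability scales as P(σ) ~ σ^(-d_s/2), so the spectral dimension is d_s = -2 d ln P / d ln σ. On an ordinary flat 2d lattice the return probability of a simple random walk is known in closed form, and this estimator recovers d_s ≈ 2; in the CDT simulations, running the same kind of estimator on the generated geometries is what yields the scale-dependent dimension.

```python
import numpy as np
from math import lgamma, log

def log_return_prob(n):
    """ln P(2n) for a simple random walk on the square lattice Z^2.
    P(2n) = [C(2n, n) / 4^n]^2, computed in log space to avoid overflow."""
    log_binom = lgamma(2 * n + 1) - 2 * lgamma(n + 1)
    return 2.0 * (log_binom - n * log(4.0))

# Diffusion times sigma = 2n and the corresponding ln P(sigma)
ns = np.arange(100, 1000, 50)
sigma = 2 * ns
logP = np.array([log_return_prob(int(n)) for n in ns])

# Spectral dimension from P(sigma) ~ sigma^(-d_s/2):
# d_s = -2 * (slope of ln P versus ln sigma)
slope = np.polyfit(np.log(sigma), logP, 1)[0]
d_s = -2.0 * slope
print(d_s)  # close to 2, as expected for a flat 2d lattice
```

On a CDT geometry the analogous measurement gives a d_s that drifts with the diffusion time σ, which is how the dimension can "vary continuously with scale" without the space being a manifold.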
 
inflector said:
Finally, is there anything besides a desire for theoretical purity, that indicates that gravity is quantized in any way? Any observation or phenomena? Is there any reason that would preclude gravity from being a continuous phenomena assuming we could find a model that didn't exhibit the singularities one gets from too much matter in too little space?
I don't think the question of singularities is directly related to the question of quantization. E.g., classical E&M has singularities.

As an example of why you can't have purely classical gravity at all levels of approximation, consider the case of a microscopic black hole that's evaporating. As it spits out its final photon, it disappears abruptly. To try to describe that as a classical, continuous process would be sort of like the unsuccessful attempts in the early 20th century by Bohr et al. to quantize the atom without quantizing the electromagnetic field.

tom.stoer said:
TRying to modify the equation like

G = <T>

is - afaik - inconsistent mathematically. But I have to admitt that I am no expert on these quasi-classical QG theories.
Isn't that essentially what semiclassical gravity (Barcelo, Visser, ...) tries to do? I only know about that stuff at a very superficial level. I assume they only expect it to give approximations; presumably it wouldn't be possible to extend it into a complete and mathematically self-consistent theory of gravity...?
 
I am sorry, I think I started the confusion by citing lattice gauge theory as an example in which the continuum limit is a must.

In CDT - as explained by Marcus - the continuum limit is replaced by the limit of infinitely many simplices within a given region. This is not the same as the continuum limit if one does not assign volume and length to simplices and edges prior to the dynamical evolution.

In LQG I am not sure. There is definitely nothing like a continuum limit, as the spin networks are purely algebraic objects with fixed rules where one cannot "tune" anything. Nevertheless there is the idea that the interior of a black hole horizon, with its huge volume and large number of spin network vertices, can be represented by one single huge intertwiner (vertex).

Generalizing this idea could mean that there is an SU(2)-compliant way of coarse graining spin networks a la Wilson's renormalization group (not in momentum but in spin space). I guess that if this idea is worked out one arrives at the same 2d object near the Planck length. That means 2d would be somehow the asymptotic scaling limit of LQG. Afaik we have already discussed some papers here addressing these ideas.

If this conclusion is correct then three approaches, namely CDT, LQG and asymptotic safety (which does not introduce any discrete structure!) arrive at the same physical conclusion, namely that spacetime at short distances becomes an effective two-dimensional object. This is even more interesting than the discrete structure as the latter is only a calculational tool w/o physical significance and which differs from theory to theory, whereas dimensionality of spacetime can be determined experimentally in principle!

(the discreteness of the spacetime area spectrum in LQG is obscured by the fact that the corresponding operators are not observables and therefore cannot be subject to measurement)
 
bcrowell said:
Isn't that essentially what semiclassical gravity (Barcelo, Visser, ...) tries to do? I only know about that stuff at a very superficial level. I assume they only expect it to give approximations; presumably it wouldn't be possible to extend it into a complete and mathematically self-consistent theory of gravity...?
I completely agree - regarding both the idea behind semiclassical gravity and my rather limited knowledge :-)
 
marcus said:
An important point to make here, which a lot of people will not realize if you aren't explicit about it, is that the limit as size --> 0 is not a smooth manifold.

Loll quite clearly says that there is no minimal length. That you are supposed to imagine taking the size of the 4-simplices to zero. But they generate small 4d universes in the computer and study them, and they find that they are not manifolds, and not in the limit either, as well as they can tell by increasing the number of simplices in the sim.

The point is that the space-times appearing in the path integral are continuous in the continuum limit. So the discrete triangulation is just a regularization. Of course this does not mean that space-times are generally smooth, just that they are continuous. Furthermore it does not tell you whether observables have a discrete spectrum or not.
 
Finbar said:
... Of course this does not mean that space-times are generally smooth, just that they are continuous. Furthermore it does not tell you whether observables have a discrete spectrum or not.

Yes, and yes! It looks as if Loll's approach would give a continuous, but not smooth, spacetime, reminiscent of the path of a particle that is continuous but nowhere differentiable.
It is good to emphasize the distinction between continuous and smooth.

I suspect that at microscopic level there is only an approximate similarity between CDT and LQG. I suspect in neither case would there be a smooth structure, or a constant deterministic dimensionality, the same at all scales. But LQG is more about representing what one can measure and how the animal responds to observation, and CDT may seem to be more about what IS.

In the end this may be a liability or limitation for CDT, because of the fundamental unknowability of what nature IS at extreme small scale. Ultimately all one can know is how she responds to measurement. One cannot postulate smooth, but one cannot even postulate continuous (i.e. between any two points an uncountable infinity of other points, the imagined ability to tie things into knots that cannot come untied - and what are points? IMO these things go beyond the possibility of verification and are on shaky philosophical ground).

I tend to think of CDT as a kind of suggestive approximation offering "experimental guidance": a first rough sketch showing important features.
 
