Signs LQG has the right redefinition (or wrong?)

  • Thread starter: marcus
  • Tags: LQG

Summary
The 2010 redefinition of Loop Quantum Gravity (LQG) emphasizes a framework devoid of a smooth manifold, focusing instead on a network of geometric information represented by graphs and 2-complexes. This approach draws parallels to Quantum Electrodynamics (QED) and Quantum Chromodynamics (QCD), suggesting that space can be understood through finite chunks rather than continuous structures. The redefinition aims to align with the principles of quantum mechanics, prioritizing observable relationships over assumptions about spacetime. Discussions also highlight the evolving nature of LQG and its potential compatibility with the principle of relative locality, emphasizing the need for empirical testing of both theories. Overall, the conversation reflects a deep engagement with the implications of LQG's reformulation and its future directions in theoretical physics.
  • #31
MTd2 said:
I still do not see the problem in fixing that algebraically, seriously. Can you explain it?
If AS is right to some extent then Lambda is running and you simply can't fix it algebraically! So either you allow for "dynamical q-deformation in quantum groups" or you apply the Kadanoff block spin transformation to the spin networks and derive a kind of renormalization group equation for "intertwiner coarse graining".

It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature. But in a fully dynamical setup you can't expect one bare parameter to remain fixed. If it does, then LQG must explain the reason, e.g. a special kind of symmetry protecting Lambda from running. Up to now it's mysterious.
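To make the "running versus fixed" distinction concrete, here is a purely illustrative toy flow in Python. The beta function is made up (it is not the actual AS or LQG flow); the point is only that a coupling governed by a nontrivial beta function drifts away from its bare value as the scale changes.

```python
import numpy as np

def beta(lam):
    # Made-up toy beta function with an attractive fixed point at lam = 0.3.
    # It is NOT the Asymptotic Safety flow; it only illustrates what "running" means.
    return -0.5 * (lam - 0.3)

lam = 0.05                          # "bare" value of the coupling at the starting scale
t = np.linspace(0.0, 10.0, 1001)    # RG "time" t = ln(k / k0)
dt = t[1] - t[0]

for _ in t[1:]:                     # simple Euler integration of d(lam)/dt = beta(lam)
    lam += beta(lam) * dt

print(f"coupling after running 10 e-folds of scale: {lam:.4f}")   # approaches 0.3
```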
 
  • #32
tom.stoer said:
If AS is right to some extent then Lambda is running and you simply can't fix it algebraically! So either you allow for "dynamical q-deformation in quantum groups" or you apply the Kadanoff block spin transformation to the spin networks and derive a kind of renormalization group equation for "intertwiner coarse graining".

It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature. But in a fully dynamical setup you can't expect one bare parameter to remain fixed. If it does, then LQG must explain the reason, e.g. a special kind of symmetry protecting Lambda from running. Up to now it's mysterious.

Whether Lambda may run or not is an interesting question.

I don't have much to say except to throw in that I speculated, as a long-shot connection to my own thinking, about a link between the E-H action and information divergence (which is very similar to an action; extremal action and extremal information divergence are, at minimum, very closely related principles, both conceptually and mathematically).
https://www.physicsforums.com/showthread.php?t=239414

After I posted that I realized it was too tenuous for anyone else to connect to.

My conclusion was that the constant will likely run, though not so much with the observational scale as with the observer complexity scale. My take on theory scaling is that, unlike what I think is common practice, there have to be TWO energy scales. First there is the scale at which you look, i.e. how you zoom in using a microscope or an accelerator. The other energy scale is where the information is coded. In common physics the latter does NOT scale; it's somehow quasi-fixed by our "earth-based lab-scale".

My point is that we SHOULD consider independently "zooming the microscope" and scaling the microscope itself, because there is a difference. Somehow the latter scale puts a BOUND on how far the former scale can run.

If anyone knows anyone who takes this seriously and has some references, I'd be extremely interested. What I suggest is that the very nature of RG may also need improvement, because theory scaling as we know it now has one scale fixed: the Earth-based scale. Nothing wrong with that per se as an effective perspective, but I think a deeper understanding may come if we acknowledge both scales.

/Fredrik
 
  • #33
tom.stoer said:
It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature.

Yes, that one. The paper with the cc has only just come out. I guess you are asking too much...
 
  • #34
Alright, what a coincidence,

http://arxiv.org/abs/1101.4788

it seems to find the correct order of magnitude of the cosmological constant for LQG, and it also has a UV behavior just like AS...
 
  • #35
MTd2 said:
Yes, that one. The paper with the cc has only just come out. I guess you are asking too much...
No no. I don't want to criticize anybody (Rovelli et al.) for not developing a theory for the cc. I simply want to say that this paper does not answer this fundamental question and does not explain how the cc could fit into an RG framework (as is expected for other couplings).

---------------------

We have to distinguish two different approaches (I bet Rovelli sees this more clearly than I do).
- deriving LQG based on the EH or Holst action, Ashtekar variables, loops, ... extending it via q-deformation etc.
- defining LQG using simple algebraic rules, constructing its semiclassical limit and deriving further physical predictions

The first approach was developed for decades, but still fails to provide all required insights, especially H. The second approach is not bad, since it must be clear that any quantization of a classical theory is intrinsically incomplete; it can never resolve quantization issues, operator ordering etc. Having this in mind, it is no worse to "simply write down a quantum theory". The problem with that approach was never the correct semiclassical limit (this is a minor issue) but the problem of writing down a quantum theory w/o referring to classical expressions!

Look at QCD (again :-). Nobody is able to "guess" the QCD Hamiltonian; every attempt to do this would break numerous symmetries. So one tries (tried) to "derive" it. Of course there are difficulties like infinities, but one has rather good control regarding symmetries. Nobody is able to write down the QCD PI w/o referring to the classical action (of course it's undefined, infinite, has ambiguities ..., but it does not fail from the very beginning). Btw, this hasn't changed over decades, but nobody cares as the theory seems to make the correct predictions.

Now look at LQG. The time for derivations may be over. So instead of derived LQG (which by my argument explained above is not possible to 100%) one may simply postulate LQG. The funny thing is that in contradistinction to QCD we seem to be able to write down a class of fully consistent theories of quantum gravity w/o derivation, w/o referring to classical expressions, w/o breaking of certain symmetries etc. The only (minor!) issue is the derivation of the semiclassical limit etc.

From a formal perspective this is a huge step forward. If this formal approach is correct, my concerns regarding the cc are a minor issue only.
 
  • #36
What is a semiclassical limit for you?
Why would fitting the cc into an RG framework be a fundamental question? :confused:
 
  • #37
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G x G x ... x G.
Physicists think that they can write down x_i and have this mean either x_i or else the N-tuple (x_1, x_2, ..., x_N), depending on context. This is all right to a certain extent but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one way. One failure mode is mathematical simplicity/complexity. To be successful a theory should (ideally) be mathematically simple.
As well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h), where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components, so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The notation h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l", which runs from 1 to L, and has no symbol for the full tuple h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying

Z_roadmap(boundary conditions)
 
Last edited:
  • #38
The thing I like about LQG is that although the ideas, or the redefinition for that matter, may be incorrect, they are making progress and aren't afraid to delve into these unique concepts. I've never seen so many original papers come out in a year in one specific research program!

All I see now from String Theory research programs is AdS_5 x S^5 and holographic superconductors; they haven't really ventured into other ideas. Is AdS/CFT even a physical theory at this point? Is it possible in our universe? I don't know, but many interesting things are going on in LQG and its relatives such as CDT, which appear much more interesting than the plateau ST is facing. What the "heck" is a holographic superconductor anyway?

I think the real notion that must be addressed is the nature of space-time itself. I feel that all of our ideas in physics rely on specific space-time backgrounds, and therefore having a quantum description of space-time at a fundamental level is a clearer approach - which LQG provides. Does ST address this idea? Is AdS/CFT a valid idea? Anyway, enough with the merits of ST; what is LQG lacking?
 
Last edited:
  • #39
Kevin_Axion said:
...
I think the real notion that must be addressed is the nature of space-time itself.

I think that is unquestionably correct. The issue is the smooth manifold, invented by Bernie Riemann around 1850 and introduced to mathematicians with the help and support of Carl Gauss at Gottingen around that time. It is a continuum with a differential structure---technically the general idea is called "differentiable manifold".

The issue is whether or not it is time to replace the manifold with something lighter, more finite, more minimal, more "informatic" or information-theoretical.

If the historical moment is ripe to do this, then Rovelli and associates are making a significant attempt which may show the way. If the historical moment is not ripe to replace the manifold (as model of spacetime) then they will be heading off into the jungle to be tormented by savages, mosquitoes and malaria.

At the present time the proposed minimalist/informatic structure to replace the manifold is a 2-complex. Or, ironically, one can also work with a kind of "dual" which is a full-blown 4D differential manifold which has a 2-complex of "defect" removed from it and is perfectly flat everywhere else.
A two-complex is basically just like a graph (of nodes and links) except it has one higher dimensionality (vertices, edges, faces). A two-complex is mathematically sufficient to carry a sketch of the geometric information (the curvatures, angles, areas between event-marked regions,...) contained in a 4D manifold where this departs from flatness. A two-complex provides a kind of finite combinatorial shorthand way of writing down the geometry of a 4D continuum.

So we will watch and see how this goes. Is it time to advance from the 1850 spacetime manifold beachhead, or not yet time to do that?

marcus said:
...

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying

Z_roadmap(boundary conditions)
 
  • #40
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?
 
  • #41
Kevin_Axion said:
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?
I'm agnostic about what nature IS. I like the Niels Bohr quote that says physics is not about what nature is, but rather what we can say about it.

Also another favorite is the Rovelli quote that QG is not about what spacetime is but about how it responds to measurement.

(there was a panel discussion and he was trying to say that arguments about whether it is really made of chainlink-fence, or tinkertoy, or lego-blocks, rubberbands, or tetrahedra, or the 4D analog of tets, called 4-simplices, or general N-face polyhedra...are not good arguments. How one sets things up is really just a statement about how one intends to calculate. One calculates the correlations between measurements/events. The panel discussion was with Ashtekar and Freidel, at PennState in 2009, as I recall. I can get the link if anyone is interested. It told me that QG is about geometric information, i.e. observables, not about "ontology". So I liked that and based my agnosticism on it.)

BTW I think human understanding grows gradually, almost imperceptibly, like a vine up a wall. Nothing works if it is too big a step, or jump. Therefore, for me, there is no final solution, there are only the small steps that the human mind can take now. The marvel of LQG, for me, is that it actually seems as if it might be possible to take this step now, and begin to model spacetime with something besides a manifold, and yet still do calculations (not merely roll the Monte Carlo simulation dice of CDT and Causets.)

But actually, Kevin, YES! :biggrin: Loosely speaking, the way almost everyone does speak, and with the weight on "essentially" as I think you meant it, in this approach spacetime essentially is something like what you said!
 
Last edited:
  • #42
tom.stoer said:
The problem with that approach was never the correct semiclassical limit (this is a minor issue) but the problem of writing down a quantum theory w/o referring to classical expressions!

In the past two years I have repeatedly tried to stimulate a discussion on this issue, with no luck; everybody seems to be happy with it or just accepts it. I have never seen any good thread on this issue, because it seems to be sacrilegious to talk about it.

Moreover, I think the real culprit is differential equations. They are inherently guesswork: the technique is always to "add terms" to get them to fit experiment, not to mention that they are limited to relating points to their neighbors, plus the notorious boundary-condition requirement. They have served us well for a long time, but no fundamental theory should be like that.

As for LQG, the original idea was just the only option to make GR look like quantum theory and to "see what happens", only for Rovelli to conclude that spacetime and matter should be related. But how? LQG is giving hints which have not been capitalized on. I still think spacetime is "unphysical" and must be derived from matter, and not the other way around.
 
  • #43
Kevin_Axion said:
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?

Just a little language background, in case anyone is interested: the usual name for the analogous thing in 4D, corresponding to a tet in 3D, is "4-simplex".

Tetrahedron means "four sides", and a tetrahedron does have four (triangular) sides. A tet is also a "3-simplex" because it is the simplex that lives in 3D, just like a triangle is a 2-simplex.

The official name for a 4-simplex is "pentachoron"; "choron" means 3D room in Greek. The boundary of a pentachoron consists of five 3D "rooms"---five tetrahedrons.

To put what you said more precisely

So essentially quantum space-time is nodes connecting to create pentachorons?

Loosely speaking that's the right idea. But we didn't touch on the key notion of duality. It is easiest to think of in 2D. Take a pencil and triangulate a flat piece of paper with black equilateral triangles. Then put a blue dot in the center of each triangle and connect two dots with a blue line if their triangles are adjacent.

The blue pattern will look like a honeycomb hexagon tiling of the plane. The blue pattern is dual to the black triangulation. Each blue node is connected to three others.

Then imagine it in 3D where you start by triangulating regular 3D space with tetrahedra. Then you think of putting a blue dot at the center of each tet, and connect it with a blue line to each of the 4 neighbor blue dots in the 4 adjacent tets.

In some versions of LQG, the spin networks---the graphs that describe 3D spatial geometry--- are restricted to be dual to triangulations. And in 4D where there are foams (analogous to graphs), only foams which are dual to triangulations are allowed.

These ideas---simplexes, triangulations that chop up space or spacetime into simplexes, duals, etc.---become very familiar and non-puzzling. One gets used to them.

So that would be an additional wrinkle to the general idea you expressed.

Finally, it gets simpler again. You throw away the idea of triangulation and just keep the idea of a graph (for 3D) and a foam thought of either as 4D geometry, or as the evolution of 3D geometry. And you let the graphs and foams be completely general, so no more headaches about the corresponding dual triangulation or even if there is one. You just have general graphs and two-complexes, which carry information about observables (area, volume, angle,...)
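To make the dual construction above concrete, here is a minimal sketch in Python (the triangulation data is a made-up toy example): put one node per triangle and link two nodes whenever their triangles share an edge.

```python
from itertools import combinations

# A tiny hypothetical triangulation: each triangle is a tuple of vertex indices.
triangles = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (1, 3, 5)]

def shared_edge(t1, t2):
    # Two triangles are adjacent when they share exactly two vertices (one edge).
    common = set(t1) & set(t2)
    return frozenset(common) if len(common) == 2 else None

# Dual graph: one node per triangle (its "blue dot"), one link per shared edge.
dual_nodes = list(range(len(triangles)))
dual_links = [(i, j) for i, j in combinations(dual_nodes, 2)
              if shared_edge(triangles[i], triangles[j]) is not None]

print(dual_links)   # [(0, 1), (1, 2), (1, 3)]
```

For a triangle in the interior of a planar triangulation, each of its three sides has a neighbor, so its dual node gets exactly three links---which is why the dual of the equilateral tiling is the honeycomb pattern described above.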
===============================

Kevin, one could say that all this stuff about tetrahedrons and pentachorons and dual triangulations is just heuristic detail that helps people get to where they are going, and at some point becomes extra baggage---unnecessary complication---and gets thrown out.

You can for instance look at 1010.1939. In fact it might do you good. You see a complete presentation of the theory in very few pages and no mention of tetrahedrons :biggrin:

Nor is there any mention of differentiable manifolds. So there is nothing to chop up! There are only the geometric relations between events/measurements. That is all we ever have, in geometry. Einstein pointed it out already in 1916: "the principle of general covariance deprives space and time of the last shred of objective reality". Space has no physical existence; there are only relations among events.

We get to use all the lego blocks we want and yet there are no legoblocks. Something like that...
 
Last edited:
  • #44
At any rate, let's get back to the main topic. There is this new formulation, best presented in http://arxiv.org/abs/1010.1939 or so I think, and we have to ask whether it is simple enough, and also wonder if it will be empirically confirmed. It gives Feynman rules for geometry, leading to a way of calculating a transition amplitude, a certain complex number, which I wrote

Z_roadmap(boundary conditions)

the amplitude (like a probability) of going from initial to final boundary geometry following the Feynman diagram roadmap of a certain two-complex C.

A two-complex is a finite list of abstract vertices, edges, and faces: vertices where the edges arrive and depart, and faces bordered by edges (the list says which connect with which).

Initial and final geometry details come as boundary edge labels which are elements of the group G = SU(2). There is some finite number L of boundary edges, so the list of L group elements labeling the edges can be written h = (h_1, h_2, ..., h_L).

So, in symbols, the complex number is Z_C(h). The theory specifies a formula for computing this, which is given by equation (4) on page 1 of http://arxiv.org/abs/1010.1939 , the paper I mentioned.
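Just to make the boundary data concrete, here is a small sketch in Python (using NumPy; the amplitude Z itself is not implemented, and L = 6 is an arbitrary choice) that builds an L-tuple h of random SU(2) elements of the kind that label the boundary edges.

```python
import numpy as np

def random_su2():
    # An SU(2) element built from a random unit quaternion (w, x, y, z):
    # the matrix below is unitary and has determinant w^2 + x^2 + y^2 + z^2 = 1.
    q = np.random.normal(size=4)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([[w + 1j * x,  y + 1j * z],
                     [-y + 1j * z, w - 1j * x]])

L = 6                                         # number of boundary edges (arbitrary here)
h = tuple(random_su2() for _ in range(L))     # h = (h_1, ..., h_L), a point of SU(2)^L

g = h[0]
print(np.allclose(g @ g.conj().T, np.eye(2)))   # True: unitary
print(np.isclose(np.linalg.det(g), 1.0))        # True: determinant 1
```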

Here is an earlier post that explains some of this:
marcus said:
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G x G x ... x G.
Physicists think that they can write down x_i and have this mean either x_i or else the N-tuple (x_1, x_2, ..., x_N), depending on context. This is all right to a certain extent but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one way. One failure mode is mathematical simplicity/complexity. To be successful a theory should (ideally) be mathematically simple.
As well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h), where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components, so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The notation h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l", which runs from 1 to L, and has no symbol for the full tuple h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying

Z_roadmap(boundary conditions)
 
Last edited:
  • #45
The way the equation (4) works is you let boundary information (h) percolate into the foam from its outside surface, and you integrate up all the other labels that the two-complex C might have, compatible with what is fixed on the surface.

The foam is like an information-sponge, with a certain well-defined boundary surface (actually a 3D hypersurface geometry, think initial + final). You paint the outside of the sponge with some information-paint h, and the paint seeps and soaks into the inside, constraining what colors can be there to some extent. Then you integrate out over everything that can be inside, compatible with the boundary.

So in the end the Z amplitude depends only on the choice of the unlabeled roadmap C, a pure unlabeled diagram, plus the L group element labels on the boundary graph.

If the group-labeled boundary graph happens to have two connected components you can call one "initial geometry" and one "final geometry", and then Z is a "transition amplitude" from initial to final, along the two-complex roadmap C.

BTW Etera Livine just came out with a 90-page survey and tutorial paper on spinfoam. It is his habilitation, so he can be research director at Lyon, a job he has already been performing from the looks of it. Great! Etera has posted here at PF Beyond sometimes. His name means Ezra in the local-tradition language where he was raised. A good bible name. For some reason I like this. I guess I like the name Ezra. Anyway he is a first-rate spinfoam expert and we can probably find this paper helpful.

http://arxiv.org/abs/1101.5061
A Short and Subjective Introduction to the Spinfoam Framework for Quantum Gravity
Etera R. Livine
90 pages
(Submitted on 26 Jan 2011)
"This is my Thèse d'Habilitation (HDR) on the topic of spinfoam models for quantum gravity, which I presented in l'Ecole Normale Supérieure de Lyon on december 16 2010. The spinfoam framework is a proposal for a regularized path integral for quantum gravity, inspired from Topological Quantum Field Theory (TQFT) and state-sum models. It can also be seen as defining transition amplitudes for the quantum states of geometry for Loop Quantum Gravity (LQG)."

It may interest you to go to page 61, where Etera's Chapter 4, "What's Next for Spinfoams?", begins.
 
Last edited:
  • #46
Awesome, thanks for the detailed explanation, marcus! I'm in grade 11 so the maths only makes partial sense to me, but the words will be good enough for now. About connecting the points in the centers of the triangles: you always have an N-polygon with three N-polygons meeting at each vertex. What is the significance of that? Will you have more meeting at each vertex with pentachorons (applying the same procedure) because there are more edges?
 
Last edited:
  • #47
Kevin_Axion said:
... About connecting the points in the centers of the triangles: you always have an N-polygon with three N-polygons meeting at each vertex. What is the significance of that? Will you have more meeting at each vertex with pentachorons (applying the same procedure) because there are more edges?
My writing wasn't clear Kevin. The thing about only three meeting was just a detail I pointed out about the situation on the plane when you go from equilateral triangle tiling to the dual, which is hexagonal tiling. I wanted you to picture it concretely. That particular aspect does not generalize to other polygons or to other dimensions. I was hoping you would draw a picture of how there can be two tilings each dual to the other.

It would be a good brain-exercise, I think, to imagine how ordinary 3D space can be "tiled" or triangulated by regular tetrahedra. You can set down a layer of pyramids pointing up, but then how do you fill in? Let's say you have to use regular tets (analogous to equilateral triangles) for everything.

And when you have 3D space filled with tets, what is the dual to that triangulation? This gets us off topic. If you want to pursue it maybe start a thread about dual cell-complexes or something? I'm not an expert but there may be someone good on that.
 
  • #48
The Wiki article is good: "The 5-cell can also be considered a tetrahedral pyramid, constructed as a tetrahedron base in a 3-space hyperplane, and an apex point above the hyperplane. The four sides of the pyramid are made of tetrahedron cells." - Wikipedia: 5-cell, http://en.wikipedia.org/wiki/Pentachoron#Alternative_names
Anyways, I digress. I'm sure this is slightly off-topic.
 
  • #49
Oh good! You are on your own. I googled "dual cell complex" and found this:
http://www.aerostudents.com/files/constitutiveModelling/cellComplexes.pdf

Don't know how reliable or helpful it may be.
 
  • #50
I understand some vector calculus, and that appears to be the math being used. Thanks, I'm sure that will be useful!
 
Last edited:
  • #51
marcus said:
It would be a good brain-exercise, I think, to imagine how ordinary 3D space can be "tiled" or triangulated by regular tetrahedra. You can set down a layer of pyramids pointing up, but then how do you fill in? Let's say you have to use regular tets (analogous to equilateral triangles) for everything.

And when you have 3D space filled with tets, what is the dual to that triangulation? This gets us off topic. If you want to pursue it maybe start a thread about dual cell-complexes or something? I'm not an expert but there may be someone good on that.


Regular tetrahedra can not fill space. Tetrahedra combined with octahedra can fill space. See isotropic vector matrix or octet-truss.

...and I think the dual is packed rhombic dodecahedra
 
Last edited:
  • #52
marcus said:
Oh good! You are on your own. I googled "dual cell complex" and found this:
http://www.aerostudents.com/files/constitutiveModelling/cellComplexes.pdf

Don't know how reliable or helpful it may be.

The dual skeleton is defined quite nicely on p. 31 of this paper: http://arxiv.org/abs/1101.5061

which you identified in the bibliography thread.
 
Last edited by a moderator:
  • #53
sheaf said:
The dual skeleton is defined quite nicely on p. 31 of this paper: http://arxiv.org/abs/1101.5061

which you identified in the bibliography thread.

Thanks! I checked page 31 of Etera Livine's spinfoams paper and it does give a nice understandable presentation. That paper is like a little introductory textbook!
I will quote a sample passage from page 31:

==quote Livine 1101.5061 ==

Starting with the simpler case of a three-dimensional space-time, a space-time triangulation consist in tetrahedra glued together along their triangles. The dual 2-skeleton is defined as follows. The spinfoam vertices σ are dual to each tetrahedron. Those vertices are all 4-valent with the four attached edges being dual to the four triangles of the tetrahedron. Each edge e then relates two spinfoam vertices, representing the triangle which glues the two corresponding tetrahedra. Finally, the spinfoam faces f are reconstructed as dual to the triangulation’s edges. Indeed, considering an edge of the triangulation, we go all around the edge and look at the closed sequences of spinfoam vertices and edges which represent respectively all the tetrahedra and triangles that share that given edge. This line bounds the spinfoam face, or plaquette, dual to that edge. Finally, each spinfoam edge e has three plaquettes around it, representing the three triangulations edges of its dual triangle. To summarize the situation:

3d triangulation ↔ spinfoam 2-complex
___________________________________
tetrahedron T ↔ 4-valent vertex σ
triangle t ↔ edge e
edge ↔ plaquette f

The setting is very similar for the four-dimensional case. The triangulated space-time is made from 4-simplices glued together at tetrahedra. Each 4-simplex is a combinatorial structure made of 5 boundary tetrahedra, glued to each other through 10 triangles. Once again, we define the spinfoam 2-complex as the dual 2-skeleton:
...
==endquote==
 
Last edited by a moderator:
  • #54
Helios said:
Regular tetrahedra can not fill space...

I think that is right, Helios. The dihedral angle of a regular tet is about 70.5 degrees (arccos(1/3)), which does not divide 360 degrees evenly, so regular tets cannot close up around a shared edge.

Suppose I allow two kinds of tet. Can it be done? Please tell us if you know.


[This may not be absolutely on topic, because all we need to accomplish what Etera is talking about is some sort of tetrahedral triangulation of space, which I'm pretty sure exists (if we relax the regularity condition slightly). But it's not a bad exercise for the imagination to think about it. Helios might be a good teacher here.]
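A quick numerical check of that point, in Python: the dihedral angle of a regular tetrahedron is arccos(1/3), and 360 degrees is not an integer multiple of it.

```python
import math

dihedral = math.degrees(math.acos(1.0 / 3.0))       # dihedral angle of a regular tetrahedron
print(f"dihedral angle : {dihedral:.4f} degrees")    # about 70.5288
print(f"360 / dihedral : {360.0 / dihedral:.4f}")    # about 5.1043 -- not an integer, so
                                                     # regular tets leave a gap around a shared edge
```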
 
  • #55
Helios said:
Regular tetrahedra can not fill space.

But irregular tetrahedra can!
 
  • #56
MTd2 said:
But irregular tetrahedra can!

Indeed, only slightly irregular. The construction I was vaguely remembering was one in Loll's 2001 paper. I'll get the reference (Loll, Ambjorn, Jurkiewicz 2001). They are doing 2+1 gravity, so spacetime is 3D. The basic idea is simple layering. They have two types of tets, red and blue. Both look almost regular but slightly distorted. The red have an equilateral base but the wrong height (slightly taller or shorter than they should be). They set them out in a red layer covering a surface (a plane, say) with little triangle-base pyramids.
Now where each pyramid meets its neighbor there is a kind of V-shaped canyon.
(I could be misremembering this, but you will, I hope, see how to correct me.)

The blue tets are also nearly regular but slightly stretched in some direction. They have a dihedral angle so that they precisely fit into that V-shape canyon. You hold the tet with one edge horizontal like the keel of a little boat. It fits right in. The top will be a horizontal edge rotated at right angles.

So now you have the upside-down picture of a blue layer with upside-down pyramid holes. So you put in red tets with their flat equilateral bases directed upwards. Now you have level ground again, made of their bases, and you can start another layer.

I could be wrong. I am just recalling from that paper by Renate Loll et al. I haven't checked back to see. Please correct me if I'm wrong about how they do it. Let me get the reference. This is the best introduction to CDT I know. It is easy, concrete, and does not gloss over anything. If anyone knows a better introduction, please say.

http://arxiv.org/abs/hep-th/0105267
Dynamically Triangulating Lorentzian Quantum Gravity
J. Ambjorn (NBI, Copenhagen), J. Jurkiewicz (U. Krakow), R. Loll (AEI, Golm)
41 pages, 14 figures
(Submitted on 27 May 2001)
"Fruitful ideas on how to quantize gravity are few and far between. In this paper, we give a complete description of a recently introduced non-perturbative gravitational path integral whose continuum limit has already been investigated extensively in d less than 4, with promising results. It is based on a simplicial regularization of Lorentzian space-times and, most importantly, possesses a well-defined, non-perturbative Wick rotation. We present a detailed analysis of the geometric and mathematical properties of the discretized model in d=3,4. This includes a derivation of Lorentzian simplicial manifold constraints, the gravitational actions and their Wick rotation. We define a transfer matrix for the system and show that it leads to a well-defined self-adjoint Hamiltonian. In view of numerical simulations, we also suggest sets of Lorentzian Monte Carlo moves. We demonstrate that certain pathological phases found previously in Euclidean models of dynamical triangulations cannot be realized in the Lorentzian case."
 
Last edited:
  • #57
I welcome disagreement and corrections, but I want to keep hitting the main topic. I think there are signs that LQG has made the right redefinition and has reached an exciting stage of development. Please disagree, either in general or on details. I will give some details.

First notice that CDT, AsymSafe, and Causets appear persistently numerical (not analytic)---they run on massive computer experiments instead of equations. This is a wonderful way to discover things, a great heuristic tool, but it does not prove theorems. At least so far, many of the other approaches seem insufficiently analytical and lack the symbolic equations that are traditional in physics.

As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter". That is one way of saying what the QG researchers' goal is: a new understanding of space and time, and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among alternative QGs, the LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classic de Sitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.

So I will copy my last substantive post about that and try to move forward from there.

marcus said:
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G x G x ... x G.
Physicists think that they can write down x_i and have this mean either x_i or else the N-tuple (x_1, x_2, ..., x_N), depending on context. This is all right to a certain extent but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one way. One failure mode is mathematical simplicity/complexity. To be successful a theory should (ideally) be mathematically simple.
As well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h), where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components, so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The notation h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l", which runs from 1 to L, and has no symbol for the full tuple h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying

Z_roadmap(boundary conditions)
 
Last edited:
  • #58
To recapitulate, there are signs the 2010 reformulation might be right---or to put it another way, good reasons for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.

There is a relatively simple, direct way to grasp the theory: understand equation (4) on page 1 of that paper. That equation defines the central quantity of the theory: a complex number Z_C(h). It is a geometry evolution amplitude---the amplitude (related to probability) that the geometry will evolve from the initial to the final geometry specified by the boundary labels denoted h, along a roadmap specified by the two-complex ("foam") denoted C.

Z_roadmap(boundary conditions)

There is no extra baggage, no manifold, no embeddings. Understanding comes down to understanding that equation (4).

I've made one change in notation from what you see in equation (4), namely I introduced a symbol h to stand for (h_1, h_2, ..., h_L), the generic element of SU(2)^L. L is the number of boundary links in the network surrounding the foam. So h is an ordered collection of group elements helping to determine the geometric boundary conditions.

One thing on the agenda, if we want to understand (4), is to see why the integrals are over the specified number of copies of the group---why there are that many labels to integrate out, instead of some other number. So for example, you see on the first integral the exponent 2(E-L) - V. We integrate over that many copies of the group. Let's see why it is that number. E and V are the numbers of edges and vertices in the foam C, so E-L is the number of internal edges.
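A small bookkeeping sketch in Python, with made-up numbers for a hypothetical foam (the values of V, E, L are invented purely to make the arithmetic concrete):

```python
# Hypothetical counts for a small two-complex C (numbers invented just for illustration).
V = 2      # foam vertices
E = 9      # foam edges, total
L = 6      # boundary ("surface") edges

internal_edges = E - L                   # edges not on the boundary
copies_of_group = 2 * (E - L) - V        # the exponent quoted above for the first integral
print(internal_edges, copies_of_group)   # 3 4
```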
 
Last edited:
  • #59
tom.stoer said:
The only (minor!) issue is the derivation of the semiclassical limit etc.

Why is this only a minor issue?

How about the classical limit?
 
  • #60
I think that the derivation of a certain limit is a minor issue compared to the problem that a construction of a consistent, anomaly-free theory (derived as quantization of a classical theory) is not available.
 
