Signs LQG has the right redefinition (or wrong?)

  • Thread starter: marcus
  • Tags: LQG
marcus
LQG was redefined in 2010. The essentials are summarized in two December papers,
http://arxiv.org/abs/1012.4707
http://arxiv.org/abs/1012.4719
What indications do you see that this was the right (or wrong) move?
How do you understand the 2010 reformulation? How would you characterize it?
What underlying motivation(s) do you see?
 
As a footnote to that, there will be the 2011 Zakopane QG school in the first two weeks of March. Rovelli has 10 hours of lecture, presumably to present his current understanding of the theory at a level for advanced PhD students and postdocs wanting to get into LQG research. This will be, I guess, the live definitive version.

People coming fresh to this subject should realize that the LQG redefinition relies heavily on analogies with QED and QCD---Feynman diagram QED and lattice gauge QCD. N chunks of space instead of N particles. The graph truncation. The 2-complex ("foam") analog of the 4D lattice.

Also that the formulation does not depend on a smooth manifold or any such spacetime continuum. The graph need not be embedded in a continuum (although it may optionally be so at times to accomplish some mathematical construction). To me, the graph represents a restriction of our geometric information----symbolically to a finite set of instruments/readings, or a finite set of chunks of space that we know about.

Or when talking about much smaller scale, a finite set of geometric elements we can infer something about (if not directly probe with macro instruments).

A 2-complex ("foam") is just the one-higher-dimensional analog of a graph. Instead of being a combination of 0- and 1-dimensional nodes and links, a 2-complex is the analogous combination of 0-, 1-, and 2-dimensional vertices, edges, and faces.

A graph can serve as a boundary of a 2-complex (or "foam"). If the graph comes in two separate components, the initial and the final, then the foam can describe a possible way that the initial graph component evolves into the final. Presumably one of many possible evolutionary paths or histories.
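To make that concrete, here is a minimal sketch in Python (all the labels are my own invention, purely illustrative, not from any paper) of a foam whose boundary graph has an initial and a final component:

```python
# A toy 2-complex ("foam") whose boundary graph splits into an
# "initial" and a "final" component. Only the combinatorial shape
# matters; the names are made up for illustration.
foam = {
    "vertices": ["v1"],                               # 0-dimensional
    "edges": [("n_init", "v1"), ("v1", "n_final")],   # 1-dimensional
    "faces": [("l_init", "l_final")],                 # 2-dimensional: a face
}                                                     # sweeping link l_init into l_final
boundary = {
    "initial": {"nodes": ["n_init"], "links": ["l_init"]},
    "final":   {"nodes": ["n_final"], "links": ["l_final"]},
}
# Read the foam as one possible history by which the initial graph
# evolves into the final graph; other foams with the same boundary
# would be other histories.
```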

What we are talking about is the evolution of geometric information. Probably the simplest way of talking about this that one can imagine.

There is no smooth manifold in the picture, in part simply because establishing that a smooth manifold exists would require an uncountable infinity of physical measurements. It is too great an assumption to make about the world. The spirit of quantum mechanics is to concentrate on what we can actually observe and measure---the interrelationships between pieces of information, and how these evolve.

This is probably the reason that QG has gradually settled down to a manifoldless definition. In the redefined LQG there is no spacetime (in and of itself) there is only "what we can say about it"--a web of geometric info. Some measured or inferred volumes, areas, angles...

Now QED, for instance, needs to be redefined on this web of geometry---no longer should it be defined on a manifold. Information should be located on information, and by information. What we can say, not what "is".

It is this redefinition which one sees beginning to happen in the other December paper I mentioned, called "Spinfoam Fermions" http://arxiv.org/abs/1012.4719
 
Do you think LQG will require a principle of relative locality? Do you think this

http://arxiv.org/abs/1101.3524

has anything to do with that?
 
I think the job of the theorist is to develop testable theories which have a chance of being right.

Put yourself in Freidel's place. It is not his job to "believe" theories (whatever that means.)

The January "Rel. Loc." paper argues that Rel. Loc. is testable. It can be falsified if one finds that the momentum algebra is flat. It is very interesting. Extremely.

Also LQG has changed enormously in the past year, or several years, and is extremely interesting. It is also falsifiable.

Trying now to reconfigure LQG so that it would fit into the Rel Loc philosophy is AFAICS premature speculation. What makes sense to me, now, is to develop and test them both so that we have a better idea of how reality is structured. Maybe one or the other can be falsified!

have to go
 
So, it was just a coincidence that he used a restriction in the phase space, in the new paper, right?
 
MTd2 said:
So, it was just a coincidence that he used a restriction in the phase space, in the new paper, right?

Please help me out with more specifics. Page references even! You must be talking about Freidel and the Rel Loc paper. I haven't studied it. Point me to a paragraph on some page, or to some equation in the Rel. Loc. paper.

My eyes get tired looking thru stuff to find what somebody is talking about. :biggrin:

I think Freidel is great and I am waiting to hear his online March 1 seminar talk about Relative Locality.
And since the International LQG Seminar connects a half dozen places around the world, and they can all ask questions, I am waiting to hear what questions Freidel gets from people at PennState, Perimeter, Marseille, Nottingham, Warsaw...

The only trouble is that 1 March is also the first day of the Zakopane school, and both Ashtekar and Rovelli are scheduled to give 2-hour lectures on that same day. So there is a huge time conflict. The school is important, and so is Freidel's talk. How they ever managed to schedule it like that is beyond my comprehension.
 
Look at the abstract of

http://arxiv.org/abs/1101.3524

"We discretize the Hamiltonian scalar constraint of three-dimensional Riemannian gravity on a graph of the loop quantum gravity phase space. ... This fills the gap between the canonical quantization and the symmetries of the Ponzano-Regge state-sum model for 3d gravity."

http://arxiv.org/abs/1101.0931

p.2
"Physics takes place in phase space and there is no invariant
global projection that gives a description of processes in
spacetime. From their measurements local observers can
construct descriptions of particles moving and interacting
in a spacetime, but different observers construct different
spacetimes, which are observer-dependent slices of phase
space."

Sounds like LQG makes sense only with relative locality, sort of, at least in 3d.
 
I see what you are driving at. Thanks for the detailed reference. I'm not going to agree or disagree yet because I don't understand Relative Locality well enough. The way I imagined it, in the Rel Loc paper the momentum space that was critical was that of matter particles. The crucial question was whether or not material momenta added in a flat vector-space way. Was the matter momentum space curved or not? The phase space that was at issue in Rel Loc included matter. That is how I was thinking.
In the 3D paper you cited, the topic is pure gravity, no matter. Or am I missing something?
The connection is too tenuous for me to follow, at this point. Maybe someone else can respond more helpfully.
 
Yes, no matter. But you were talking about redefinitions, I took it as being general redefinitions about the fundamentals of the theory! Lol, I guess I went off topic. I don't know, maybe a new thread is required? I don't know how to put it.
 
  • #10
I suppose what you are asking about is relevant (at least eventually). I am simply not prepared to respond in any useful way. It's natural to ask in what way is LQG compatible with the Rel. Loc. principle? Is it even compatible at all? Or are they in spirit quite close? It seems to me natural that people would be asking such questions at the ILQGS on March 1, if indeed Freidel gives the scheduled talk, and if the others are available to listen and comment.

If you can't get a satisfactory discussion here and now, then you or I must start a thread about this issue after Freidel's talk (presumably 1 March).

Ultimately it comes down to empirical tests. Rel Loc is testable by testing the addition of particle momenta and suchlike stuff. LQG is testable because of its robust prediction of a cosmological bounce, some bearing on inflation, related features of the CMB. But although neither can be assumed a priori true, their mathematical (in)compatibility is surely an interesting question.
===================================

Instead of talking about Rel Loc now, what I want to do is quote some of the 4707 paper where he points out analogies with QED and QCD. He gives the full definition of LQG in three equations and half a page, and then he starts with some motivation:

This is the theory. It is Lorentz invariant [18]. It can be coupled to fermions and Yang-Mills fields [19], and to a cosmological constant [20, 21], but I will not enter into this here. The conjecture is that this mathematics describes the quantum properties of spacetime, reduces to the Einstein equation in the classical limit, and has no ultraviolet divergences. I now explain more in detail what the above means.

A. Quantum geometry: quanta of space
A key step in constructing any interactive quantum field theory is always a finite truncation of the dynamical degrees of freedom. In weakly coupled theories, such as low-energy QED or high-energy QCD, we rely on the particle structure of the free field and consider virtual processes involving finitely many, say N, particles, described by Feynman diagrams. These processes involve only the Hilbert space H_N = ⊕_{n=1,...,N} H_n, where H_n is the n-particle state space.

In strongly coupled theories, such as confining QCD, we resort to a non-perturbative truncation, such as a finite lattice approximation. In both cases (the relevant effect of the remaining degrees of freedom can be subsumed under a dressing of the coupling constants and) the full theory is formally defined by a limit where all the degrees of freedom are recovered.

The Hilbert space of loop gravity is constructed in a way that shares features with both these strategies. The analog of the state space H_N in loop gravity is the space

H_Γ = L²[SU(2)^L / SU(2)^N],   (4)

where the states ψ(h_l) live. Γ is an abstract (combinatorial) graph, namely a collection of links l and nodes n, and a "boundary" relation that associates an ordered couple of nodes (s_l, t_l) (called source and target) to each link l. (See the left panel in Figure 1.) L is the number of links in the graph, N the number of nodes, and the L² structure is the one defined by the Haar measure. The denominator means that the states in H_Γ are invariant under the local SU(2) gauge transformation on the nodes

ψ(U_l) → ψ(V_{s_l} U_l V_{t_l}^{-1}),   V_n ∈ SU(2),   (5)

the same gauge transformation as in lattice Yang-Mills theory.

States in HΓ represent quantum excitation of space formed by (at most) N “quanta of space”. The notion of “quantum of space” is basic in loop gravity. It indicates a quantum excitation of the gravitational field, in the same sense in which a photon is a quantum excitation of the electromagnetic field. But there is an essential difference between the two cases, which reflects the difference between the electromagnetic field in Maxwell theory, and the gravitational field in general relativity: while the former lives over a fixed (Minkowski) metric spacetime, the second represents itself spacetime.

Accordingly, a photon is a particle that "lives in space", that is, it carries a momentum quantum number k⃗, or equivalently a position quantum number x⃗, determining the spatial localization of the photon with respect to the underlying metric space. The quanta of loop gravity, instead, are not localized in space. Rather, they define space themselves, as do classical solutions of the Einstein equations.

More precisely, the N "quanta of space" are only localized with respect to one another: the links of the graph indicate "who is next to whom", that is, the adjacency relation that defines the spatial relations among the N quanta. (See the right panel in Figure 1.) Thus, these quanta carry no quantum number such as momentum k⃗ or position x⃗.

Rather, they carry quantum numbers that define a quantized geometry, namely a quantized version of the information contained in a classical (three-dimensional) metric. The way this happens is elegant, and follows from a key theorem due to Roger Penrose, called the spin-geometry theorem, which is at the root of loop gravity [22]. I give here an extended version of this theorem,...
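A side note of my own (not from the paper): the gauge invariance (5) is easy to check numerically on a tiny example. Here is a Python sketch; the assumptions are mine---a 3-link loop graph, random SU(2) elements built from unit quaternions, and a Wilson-loop state ψ given by the trace around the loop:

```python
import numpy as np

def random_su2(rng):
    # Random SU(2) element from a unit quaternion (a, b, c, d):
    # U = a*I + i*(b*sx + c*sy + d*sz), with det U = a^2+b^2+c^2+d^2 = 1.
    a, b, c, d = rng.normal(size=4)
    n = np.sqrt(a*a + b*b + c*c + d*d)
    a, b, c, d = a/n, b/n, c/n, d/n
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return a*np.eye(2, dtype=complex) + 1j*(b*sx + c*sy + d*sz)

rng = np.random.default_rng(0)
# A 3-link loop graph: link l runs from node l to node (l+1) % 3.
h = [random_su2(rng) for _ in range(3)]
psi = np.trace(h[0] @ h[1] @ h[2])        # a gauge-invariant state: the Wilson loop

V = [random_su2(rng) for _ in range(3)]   # one SU(2) element per node
h_gauged = [V[l] @ h[l] @ np.linalg.inv(V[(l+1) % 3]) for l in range(3)]
psi_gauged = np.trace(h_gauged[0] @ h_gauged[1] @ h_gauged[2])

print(abs(psi - psi_gauged))  # ~1e-16: psi is unchanged by the transformation (5)
```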

 
  • #11
I don't think that LQG has been redefined.

Rovelli states that it is time to make the next step from the construction of the theory to the derivation of results. Nevertheless the construction is still not complete as long as certain pieces are missing. Therefore e.g. Thiemann's work regarding the Hamiltonian approach (which is not yet completed and for which the relation to spin foams is still not entirely understood) must still back up other programs.

There are still open issues to be solved:
- construction, regularization and uniqueness of the Hamiltonian H
- meaning of "anomaly-free constraint algebra" in the canonical approach
- relation between H and SF (not only kinematical)
- coarse-graining of spin networks, renormalization group approach
- nature and value of the Immirzi parameter
- nature and value of the cosmological constant
- nature of matter and gauge fields (on top, emergent, ...); yes, gauge fields!
And last but not least: If a reformulation is required (which would indicate that the canonical formalism is a dead end), then one must understand why it is a dead end! We don't know yet.

My impression is that Rovelli's new formulation does not address all these issues. His aim is more to develop calculational tools to derive physical results in certain sectors of the theory.

Let's look at QCD: there are several formulations of QCD (PI, canonical, lattice, ...), every approach with its own specific benefits and drawbacks. But nobody would ever claim that QCD has been reformulated (which sounds as if certain approaches were outdated). All approaches are still valid and are heavily used to understand the QCD vacuum, confinement, hadron spectroscopy, the QGP, ... There is not one single formulation of QCD.

So my conclusion is that a new formulation of LQG has been constructed, but not that LQG has been reformulated.
 
  • #12
marcus said:
...What we are talking about is the evolution of geometric information. Probably the simplest way of talking about this that one can imagine.

There is no smooth manifold in the picture, in part simply because establishing that a smooth manifold exists would require an uncountable infinity of physical measurements. It is too great an assumption to make about the world. The spirit of quantum mechanics is to concentrate on what we can actually observe and measure---the interrelationships between pieces of information, and how these evolve.

This is probably the reason that QG has gradually settled down to a manifoldless definition. In the redefined LQG there is no spacetime (in and of itself) there is only "what we can say about it"--a web of geometric info. Some measured or inferred volumes, areas, angles...

Now QED, for instance, needs to be redefined on this web of geometry---no longer should it be defined on a manifold. Information should be located on information, and by information. What we can say, not what "is"...

To inject a shallow note into this deep thread:

What, in this context, can we say about "Nothing at all"---the Vacuum, about which Peacock made the comment (in his "Cosmological Physics"):

"It is perhaps just as well that the average taxpayer, who funds research in physics, is unaware of the trouble we have in understanding even nothing at all" ?

In Loop Quantum Gravity, abstract graphs are often sketched as vertices (drawn as dots) connected by edges (drawn as lines) that represent "what we can say" about the dimensional circumstances we live in. An example is Fig. 1 of Rovelli's Christmas review that was linked to in the original post of this thread.

The simplest thing we can say about the vacuum seems to be that it is quite symmetric; here is the same as there, and now is no different from then, as far as the vacuum is concerned. That's why we expect the laws of physics to be covariant in what we call spacetime.

Yet abstract graphs that are drawn, like Rovelli's, show no symmetry at all. They're lopsided and skew, as well they might be when gravitating matter or interacting fermions are involved. If they were drawn to represent the Vacuum (or perhaps a time average of it) wouldn't these graphs be more symmetric, perhaps even lattice-like? Lots of symmetries to explore then. Which brings me to ask: if this is so, what is it that makes or keeps the Vacuum so symmetric and, in the absence of localised mass/energy, spatially flat? Non-localised energy that can't be detected? Or something else that everybody except me understands?
 
  • #13
tom.stoer said:
...
And last but not least: If a reformulation is required (which would indicate that the canonical formalism is a dead end), then one must understand why it is a dead end! We don't know yet.

Let's look at QCD: there are several formulations of QCD (PI, canonical, lattice, ...), every approach with its own specific benefits and drawbacks. But nobody would ever claim that QCD has been reformulated (which sounds as if certain approaches were outdated). All approaches are still valid and are heavily used to understand the QCD vacuum, confinement, hadron spectroscopy, the QGP, ... There is not one single formulation of QCD.

So my conclusion is that a new formulation of LQG has been constructed, but not that LQG has been reformulated.

I think I see now the distinction you are making between a new formulation and a reformulation.

Personally I do not suspect that the Hamiltonian approach is a dead end. We cannot know the future of research, but my expectation is that people will continue to work on completing the Hamiltonian approach and it will ultimately prove equivalent.

It might (at that future point in history) look different, of course. There might, for example, be no smooth manifold, no continuum, and the spin networks (if they remain in the Hamiltonian formulation) might not be embedded. Or they might be. I don't see us as able to predict how the various versions of the theory will look.

But as an immediate sign that the Ham. approach is not yet a dead end, there is the Freidel paper that was just posted two days ago.

MTd2 said:
http://arxiv.org/abs/1101.3524

The Hamiltonian constraint in 3d Riemannian loop quantum gravity

Valentin Bonzom, Laurent Freidel
(Submitted on 18 Jan 2011)
We discretize the Hamiltonian scalar constraint of three-dimensional Riemannian gravity on a graph of the loop quantum gravity phase space. This Hamiltonian has a clear interpretation in terms of discrete geometries: it computes the extrinsic curvature from dihedral angles. The Wheeler-DeWitt equation takes the form of difference equations, which are actually recursion relations satisfied by Wigner symbols. On the boundary of a tetrahedron, the Hamiltonian generates the exact recursion relation on the 6j-symbol which comes from the Biedenharn-Elliott (pentagon) identity. This fills the gap between the canonical quantization and the symmetries of the Ponzano-Regge state-sum model for 3d gravity.

Plus, some of the other things you mentioned remain interesting and important open problems (in whatever formulation one confronts them), such as:

tom.stoer said:
...
- nature and value of the Immirzi parameter
- nature and value of the cosmological constant
 
  • #14
oldman said:
...
The simplest thing we can say about the vacuum seems to be that it is quite symmetric; here is the same as there, and now is no different from then, as far as the vacuum is concerned. That's why we expect the laws of physics to be covariant in what we call spacetime.

Yet abstract graphs that are drawn, like Rovelli's, show no symmetry at all. They're lopsided and skew, as well they might be when gravitating matter or interacting fermions are involved. If they were drawn to represent the Vacuum (or perhaps a time average of it) wouldn't these graphs be more symmetric,...

I suppose that one reason for the power of General Rel is that it is general. One can have solutions with no recognizable symmetry at all.

To be a satisfactory quantum version of GR, Loop must imitate that basic feature.

Of course it is technically possible to confine LQG to an approximately flat sector. This has been done in the "graviton propagator papers" circa 2007.
====================

Had to leave abruptly to take care of something else, before finishing. Back now.
The thing about your post is that it raises intriguing questions.

BTW you mentioned the Christmas review paper. That gives one formulation of the theory, in 3 equations. He says clearly there are other formulations and he is just giving his understanding of what LQG is---so in that sense he seems to agree with Tom Stoer. Indeed the paper goes over OTHER formulations in a later section, fairly extensively----BF theory, GFT, canonical Hamiltonian style, versions using manifolds and so on.

But I find it makes discussion simpler to focus on the one current formulation. Which you may have in mind since you mentioned the recent review paper (1012.4707).

In that case one should observe that the graphs are purely combinatorial. It doesn't matter how they are drawn---with long curly lines or short wiggly lines---or lopsided with all the nodes but one off by themselves in a corner. The visual characteristics of the graph are for the most part inconsequential.

I guess the important thing to communicate is that a graph is purely combinatorial and quite general. It could have 2 nodes and 4 links, or it could have billions of nodes and billions of links. It has no special symmetry. The way of treating it mathematically is supposed to be the same whether it has 2 nodes or a trillion nodes.

Combinatorial means it consists of two finite sets and two functions.
NODES = {1, 2, 3, ..., N}
LINKS = {1, 2, 3, ..., L}
s: LINKS -> NODES
t: LINKS -> NODES

The auxiliary functions s and t are the source and target functions that, for each link, tell you where that link starts from and where it ends up.
For a given link l, the two nodes that link connects are s(l) and t(l).

It's like the minimum math info that could define an oriented graph. The symbol for that simple combinatorial info is gamma Γ.
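Just to make the combinatorics concrete, here is that same data Γ = (NODES, LINKS, s, t) as a little Python sketch (my own toy example, not from the papers):

```python
# The combinatorial graph Gamma: two finite sets plus source/target maps.
NODES = {1, 2, 3}                  # N = 3
LINKS = {1, 2, 3, 4}               # L = 4
s = {1: 1, 2: 1, 3: 2, 4: 3}       # s: LINKS -> NODES (where each link starts)
t = {1: 2, 2: 3, 3: 3, 4: 1}       # t: LINKS -> NODES (where each link ends)

def endpoints(l):
    """The two nodes that link l connects: s(l) and t(l)."""
    return s[l], t[l]

print(endpoints(2))  # (1, 3): link 2 runs from node 1 to node 3
```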

What I think is the great thing about it is that it allows you to define a Hilbert space H_Γ and do non-trivial stuff. The Hilbert space has gauge symmetries specified by Γ.

Remember that gauge symmetries are symmetries in our information, how it is presented, they are not real material symmetries of a physical situation.

The graph Γ is very much about how we sample the geometric reality of nature (or so I think anyway). It is about what degrees of geometric freedom we capture (and which others we perhaps overlook). My interpretation could be quite wrong---it is certainly not authoritative.

There is another interpretation----nodes as "excitations of geometry". N nodes is analogous to a Fock space where there are N particles, say N electrons. In that case the "real" universe would correspond to a graph with a HUGE number of nodes and links. But we develop the math to treat any number. And we deal with examples of small N. You can find that interpretation clearly presented in the Christmas summary paper.

Either way, there is no need for small example graphs to look like anything in particular.
I think they should be, if anything, arbitrary and irregular---to suggest the generality.
 
  • #15
marcus said:
I suppose that one reason for the power of General Rel is that it is general. One can have solutions with no recognizable symmetry at all.

To be a satisfactory quantum version of GR, Loop must imitate that basic feature...

... one should observe that the graphs are purely combinatorial. It doesn't matter how they are drawn---with long curly lines or short wiggly lines---or lopsided with all the nodes but one off by themselves in a corner. The visual characteristics of the graph are for the most part inconsequential.

I guess the important thing to communicate is that a graph is purely combinatorial and quite general. ...there is no need for small example graphs to look like anything in particular.
I think they should be, if anything, arbitrary and irregular---to suggest the generality.

Thanks for this. I guess I was being too fussy about the RHS of figure 1 in Rovelli's paper, with its superimposed "grains of space". It reminded me of overinterpreted representations of atoms with whirling electrons trailing smoke. I liked when you earlier said it's all about 'What we can say, not what "is"'. Just as Niels Bohr believed.
 
  • #16
It still isn't completely clear to me how to think of LQG, but it is getting clearer. I'm glad it is so for you as well. The December review paper is well written, I think.

Here is another enlightening short paragraph. It comes on page 6, after he has finished describing the theory (by stating 3 equations on page 2 and then discussing what they mean, with background etc.). Then, when that is all done, he says:

This concludes the definition of the theory. I have given here this definition without mentioning how it can be “derived” from a quantization of classical general relativity. This is like defining QED by giving its Hilbert space of free electrons and photons and its Feynman rules, without mentioning either canonical or path integral quantization. A reason for this choice is that I wanted to present the theory compactly. A second reason is that one of the main beauties of the theory is that it can be derived with rather different techniques, that involve different kinds of math and different physical ideas. The credibility of the end result is reinforced by the convergence, but each derivation “misses” some aspects of the whole. In chapter IV below I will briefly summarize the main ones of these derivations. Before this, however, let me discuss what is this theory meant to be good for.​

It seems significant to me that no single "derivation" is perfect. The various roads to the present formulation converge but none are complete. The final form of the theory, he seems to be saying, is an educated guess.

Different roads up the mountain, all converging towards the peak...but none quite reaching, so in the end one takes the helicopter. The "derivations" have been valuable to give heuristic guidance, motivation, understanding...but one should not be too tied to the rituals. To repeat a key comparison:

This is like defining QED by giving its Hilbert space of free electrons and photons and its Feynman rules, without mentioning either canonical or path integral quantization.​

Well, perhaps that would have been all right! Not only as an essay's expository plan but as an alternative historical line of development. :biggrin: Perhaps the canonical and path integral quantization could have been skipped and then reconstructed after the fact. If by some fluke the Feynman rules had been discovered first. A not entirely serious speculation.

In case anyone is new to the discussion, the recent review of LQG (December 2010) is http://arxiv.org/abs/1012.4707
 
  • #17
tom.stoer said:
...
- nature and value of the Immirzi parameter
- nature and value of the cosmological constant.

Earlier, Tom gave us a good list of unresolved (or only partially resolved) issues in LQG.

I think there are signs that the theory has the right (or a right) redefinition, as given in the December 2010 overview paper http://arxiv.org/abs/1012.4707

I will mention a few of the signs I see of this, but first let me mention one very positive sign that just appeared, in response to the Lambda issue, the cosmological constant issue, that Tom indicated.

http://arxiv.org/abs/1101.4049
Cosmological constant in spinfoam cosmology
Eugenio Bianchi, Thomas Krajewski, Carlo Rovelli, Francesca Vidotto
4 pages, 2 figures
(Submitted on 20 Jan 2011)
"We consider a simple modification of the amplitude defining the dynamics of loop quantum gravity, corresponding to the introduction of the cosmological constant, and possibly related to the SL(2,C)q extension of the theory recently considered by Fairbairn-Meusburger and Han. We show that in the context of spinfoam cosmology, this modification yields the de Sitter cosmological solution."

This paper finds a nice natural place for the cosmo constant, and does not resort to the quantum group or q-deformation.

Note that it partly addresses the classical limit issue, since spinfoam cosmology uses the full theory and it is now giving a familiar de Sitter universe as the large-scale limit.
 
  • #18
"Equation (10) is the Friedmann equation in the presence of a cosmological constant , which is solved by de Sitter spacetime."

Why can we assume that an equation which has the same form as the Friedmann equation has the same meaning - ie. as a solution of an equation for a spacetime metric?
 
  • #19
The idea is always the same: one enhances LQG models at the level of the classical action, or at the level of spin networks via quantum deformation, or at the level of the intertwiners (as an algebraic generalization of the spin foams), to produce a cc term.

Doing this in the quantum theory directly has no benefit. It shows that it can be done consistently, but it does not explain this term. There are always the same questions: what is the reason for
- the cc term in the EH action
- the quantum deformation of SU(2)
- the generalization of the intertwiner

Sorry, but Rovelli only shows that it can be done, not what it means.
 
  • #20
atyy said:
Why can we assume that an equation which has the same form as the Friedmann equation has the same meaning - ie. as a solution of an equation for a spacetime metric?

I don't think there is any problem, Atyy. They already showed the derivation in the March 2010 paper by Bianchi, Rovelli, Vidotto. Equations 32-44 or thereabouts. They go all the way to the Friedmann equation there. The present paper just follows, with the same notation.

The Friedmann equation is an ordinary diff eqn for the scale factor a(t). The Fr. eqn. does not give you a spacetime metric; it gives you this time-varying dimensionless number a(t), using which you can make a metric if you have a manifold and the other ingredients. But a(t) itself is just a real-valued function of one real variable.

Well, the spinfoam model can give you a(t) too. At least that is how it looks to me when I go over the March 2010 paper. See what you think.
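To illustrate what the spinfoam result is being compared against, here is a toy numerical check of the Lambda-only Friedmann equation (my own sketch, not from the papers; units are arbitrary and I set Lambda = 3 so that H = 1):

```python
import numpy as np

# Friedmann equation with only a cosmological constant:
#   (a'/a)^2 = Lambda/3,  so  a(t) = a0 * exp(sqrt(Lambda/3) * t)  (de Sitter).
Lam = 3.0
H = np.sqrt(Lam / 3.0)       # Hubble rate, here H = 1
dt, steps = 1e-4, 10000
a = 1.0
for _ in range(steps):
    a += dt * H * a          # forward-Euler step of da/dt = H a
t = dt * steps
print(a, np.exp(H * t))      # both ~ e; the Euler error shrinks with dt
```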
 
  • #21
tom.stoer said:
... It shows that it can be done consistently, but it does not explain this term...

We're seeing a number of signs that the new formulation is good.

It was introduced in March/April 2010

1. right away we got spinfoam cosmology (the March Bianchi Rovelli Vidotto paper)
2. the technical analogies with Feynman diagram QED and lattice QCD
3. we get the Spinfoam Fermion paper in December, another sign the format is OK
4. we get cosmological constant papers, especially this January 2011 one
5. we see the de Sitter classical universe come out of spinfoam.
6. Battisti Marciano verify the bounce in spinfoam cosmology
7. we see a manifestly covariant version exhibited

These are all signs that the format is working out really well.

Sure, you can ask "what's the explanation of the CC?"

But what I'm looking for is signs that the new manifoldless combinatorial spinfoam is a good format.
I see a lot of things happen in a short time. I see people learning how to use the format and doing some things that weren't done before or weren't done so nicely.

This is what I mean by the thread topic title. I will worry about explanations later.
 
  • #22
Why should the cc be explained at all?
 
  • #23
I agree that the concept of how to introduce the cc seems to be physically convincing and mathematically consistent; it provides a rough understanding of the large-scale structure / de Sitter space; it brings LQG and CDT closer together; it may even point towards an understanding of why the cc must be positive.

But it does not explain what the cc is and why the three parameters G, β and Lambda (which appear on the same footing in a classical Lagrangian) are so different when looking at their quantum counterparts in LQG/SF and when comparing the treatment with AS.
 
  • #24
I cannot understand this: why should ASQG be similar to LQG/SF?
 
  • #25
MTd2 said:
I cannot understand this: why should ASQG be similar to LQG/SF?
In AS a renormalization group approach a la Kadanoff is used. Therefore AS is a "meta-theory" (or better: a method) defined on the theory-space of Riemann-geometry consisting of all possible scalar invariants R, R², ... which can be used to define an action. AS tells you something about a renormalization group flow, relevant and irrelevant operators and all that.

Now assuming that AS is correct (at least as an effective theory) it is clear that any fundamental theory of QG should not only reproduce classical GR but AS results as well (at least within a certain regime). Therefore if AS tells us something regarding physical values of G and Lambda then this theory seems to make a prediction regarding Lambda!

If this is true then we should expect that a fundamental theory like LQG should be able to make a prediction regarding Lambda as well - instead of fixing Lambda algebraically / as an input.
 
  • #26
The discussion of http://arxiv.org/abs/1003.3483 says "In detail, we have studied three approximations: (i) cutting the theory to a finite dimensional graph (the dipole), (ii) cutting the spinfoam expansion to just one term with a single vertex and (iii) the large volume limit. The main hypothesis on which this work is based is that the regime of validity in which these approximations are viable includes the semiclassical limit of the dynamics of large wavelengths. “Large” means here of the order to the size of the universe itself."

So all the divergences are removed by ignoring them. Is this derivation or hypothesis?
 
  • #27
atyy said:
... Is this derivation or hypothesis?

I think what you are talking about is simply how people do physics. Typically they start with a "first order approximation" of something. It is, strictly speaking, neither rigorous derivation nor pure hypothesis. As a biologist you may be expecting a dichotomy, an either/or. I don't know; cultures and mentalities differ.

We need to be fair and objective, too, not let judgments be too much colored by set animosity.

The March 2010 paper is doing something quite new---working cosmology with spinfoam tools. So they derive partly by guesswork and simplifying assumptions, and see if they get something that looks right. In later papers they can gradually remove simplifying assumptions and guesswork premises---make the derivations more rigorous---analogous to "second order" or higher loop.

Indeed there has been already some followup to the March 2010 "Towards Spinfoam Cosmology" paper. Notice the "towards": it is meant to get a research move started, and it has.


 
  • #28
Good. "Towards" would be the correct word to use for the "large scale limit". It has not given the right large scale limit yet.
 
  • #29
tom.stoer said:
If this is true then we should expect that a fundamental theory like LQG should be able to make a prediction regarding Lambda as well - instead of fixing Lambda algebraically / as an input.

I still do not see the problem in fixing that algebraically, seriously. Can you explain it?
 
  • #30
Compare and contrast.

http://arxiv.org/abs/1003.3483 "In detail, we have studied three approximations: (i) cutting the theory to a finite dimensional graph (the dipole), (ii) cutting the spinfoam expansion to just one term with a single vertex and (iii) the large volume limit. The main hypothesis on which this work is based is that the regime of validity in which these approximations are viable includes the semiclassical limit of the dynamics of large wavelengths. “Large” means here of the order to the size of the universe itself."

http://arxiv.org/abs/1007.2560 "A key feature to appreciate here is that, unlike in standard (quantum-)cosmological treatments, this description is the outcome of a nonperturbative evaluation of the full path integral, with everything but the scale factor (equivalently, V3(t)) summed over".
 
  • #31
MTd2 said:
I still do not see the problem in fixing that algebraically, seriously. Can you explain it?
If AS is right to some extent then Lambda is running and you simply can't fix it algebraically! So either you allow for "dynamical q-deformation in quantum groups" or you apply the Kadanoff block spin transformation to the spin networks and derive a kind of renormalization group equation for "intertwiner coarse graining".

It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature. But in a fully dynamical setup you can't expect that one bare parameter remains fixed. If this were true then LQG must explain the reason for that, e.g. a special kind of symmetry protecting Lambda from running. Up to now it's mysterious.
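To illustrate what "running" means in the block-spin sense, here is the standard toy example, a Kadanoff decimation of the 1D Ising chain (textbook material, nothing LQG-specific):

```python
import numpy as np

# Decimating every other spin of the 1D Ising chain maps the coupling K
# to K' with tanh(K') = tanh(K)^2 -- the simplest "running coupling".
K = 1.0
for step in range(6):
    print(step, round(K, 6))
    K = np.arctanh(np.tanh(K) ** 2)   # K flows toward the trivial fixed point K = 0
```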
 
  • #32
tom.stoer said:
If AS is right to some extent then Lambda is running and you simply can't fix it algebraically! So either you allow for "dynamical q-deformation in quantum groups" or you apply the Kadanoff block spin transformation to the spin networks and derive a kind of renormalization group equation for "intertwiner coarse graining".

It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature. But in a fully dynamical setup you can't expect that one bare parameter remains fixed. If this were true then LQG must explain the reason for that, e.g. a special kind of symmetry protecting Lambda from running. Up to now it's mysterious.

Whether Lambda may run or not is an interesting question.

I don't have much to say except to throw in that I speculated, by a long-shot connection to my own thinking, about a link between the E-H action and information divergence (which is very similar to an action; extremal action and extremal information divergence are, at minimum, very closely related principles, both conceptually and mathematically).
https://www.physicsforums.com/showthread.php?t=239414

When I posted that I afterwards realized that it was too tenuous for anyone else to connect to.

My conclusion was that it's likely that the constant will run, but not so much with the observational scale as with the observer complexity scale. My take on theory scaling is that, unlike what I think is common practice, there have to be TWO energy scales. First there is the scale of where you look, i.e., how you zoom in using a microscope or an accelerator. The other energy scale is where the information is coded. In common physics the latter does NOT scale; it's somehow quasi-fixed by our "earth-based lab scale".

My point is that we SHOULD consider "zooming the microscope" and scaling the microscope itself independently, because there is a difference. Somehow the latter scale puts a BOUND on how far the former scale can run.

If anyone knows anyone who takes this seriously and has some references, I'd be extremely interested. What I suggest is that the very nature of RG may also need improvement. Because theory scaling as we know it now has fixed one scale: the Earth-based scale. Nothing wrong with that per se as an effective perspective, but I think a deeper understanding may come if we acknowledge both scales.

/Fredrik
 
  • #33
tom.stoer said:
It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature.

Yes, that one. The paper with cc is barely out. I guess you are asking too much...
 
  • #34
Alright, what a coincidence,

http://arxiv.org/abs/1101.4788

it seems to find the correct order of magnitude of the cosmological constant for LQG, and that it also has a UV behavior just like AS...
 
  • #35
MTd2 said:
Yes, that one. The paper with cc is barely out. I guess you are asking too much...
No no. I don't want to criticize anybody (Rovelli et al.) for not developing a theory of the cc. I simply want to say that this paper does not answer this fundamental question and does not explain how the cc could fit into an RG framework (as is expected for other couplings).

---------------------

We have to distinguish two different approaches (I bet Rovelli sees this more clearly than I do).
- deriving LQG based on the EH or Holst action, Ashtekar variables, loops, ... extending it via q-deformation etc.
- defining LQG using simple algebraic rules, constructing its semiclassical limit and deriving further physical predictions

The first approach was developed for decades, but still fails to provide all required insights, like (especially) H. The second approach is not bad, as it must be clear that any quantization of a classical theory is intrinsically incomplete; it can never resolve quantization issues, operator ordering etc. Having this in mind it is not worse to "simply write down a quantum theory". The problem with that approach was never the correct semiclassical limit (this is a minor issue) but the problem of writing down a quantum theory w/o referring to classical expressions!

Look at QCD (again :-) Nobody is able to "guess" the QCD Hamiltonian; every attempt to do this would break numerous symmetries. So one tries (tried) to "derive" it. Of course there are difficulties like infinities, but one has rather good control regarding symmetries. Nobody is able to write down the QCD PI w/o referring to the classical action (of course it's undefined, infinite, has ambiguities ..., but it does not fail from the very beginning). Btw.: this hasn't changed over decades, but nobody cares as the theory seems to make the correct predictions.

Now look at LQG. The time for derivations may be over. So instead of deriving LQG (which by my argument explained above is not possible to 100%) one may simply postulate LQG. The funny thing is that in contradistinction to QCD we seem to be able to write down a class of fully consistent theories of quantum gravity w/o derivation, w/o referring to classical expressions, w/o breaking of certain symmetries etc. The only (minor!) issue is the derivation of the semiclassical limit etc.

From a formal perspective this is a huge step forward. If this formal approach is correct, my concerns regarding the cc are a minor issue only.
 
  • #36
What is a semiclassical limit for you?
Why would fitting the cc into an RG framework be a fundamental question? :confused:
 
  • #37
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G × G × ... × G.
Physicists think that they can write down x_i and have this mean either x_i or else the N-tuple (x_1, x_2, ..., x_N),
depending on context. This is all right to a certain extent but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one way. One failure mode is mathematical simplicity/complexity. To be successful a theory should (ideally) be mathematically simple.
As well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in a fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h),
where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h is (h_1, h_2, ..., h_L), namely a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The notation h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l" which runs from 1 to L, and has no symbol for h.

The central quantity in the theory is the complex number Z_C(h) and one can think of that number as saying

Z_roadmap(boundary conditions)
 
  • #38
The thing I like about LQG is that although the ideas, or the redefinition for that matter, may be incorrect, they are making progress and aren't afraid to delve into these unique concepts. I've never seen so many original papers come out in a year in one specific research program!

All I see now from String Theory research programs is AdS_5 × S^5 and holographic superconductors; they haven't really ventured into other ideas. Is AdS/CFT even a physical theory at this point, is it possible in our universe? I don't know, but many interesting things are going on in LQG and its relatives such as CDT that appear much more interesting than the plateau that ST is facing. What the "heck" is a holographic superconductor anyways?

I think the real notion that must be addressed is the nature of space-time itself. I feel that all of our ideas in Physics rely on specific space-time backgrounds, and therefore having a quantum description of space-time at a fundamental level is a clearer approach---which LQG takes. Does ST address this idea, is AdS/CFT a valid idea? Anyways, enough with the merits of ST; what is LQG lacking?
 
  • #39
Kevin_Axion said:
...
I think the real notion that must be addressed is the nature of space-time itself.

I think that is unquestionably correct. The issue is the smooth manifold, invented by Bernie Riemann around 1850 and introduced to mathematicians with the help and support of Carl Gauss at Gottingen around that time. It is a continuum with a differential structure---technically the general idea is called "differentiable manifold".

The issue is whether or not it is time to replace the manifold with something lighter, more finite, more minimal, more "informatic" or information-theoretical.

If the historical moment is ripe to do this, then Rovelli and associates are making a significant attempt which may show the way. If the historical moment is not ripe to replace the manifold (as model of spacetime) then they will be heading off into the jungle to be tormented by savages, mosquitoes and malaria.

At the present time the proposed minimalist/informatic structure to replace manifold is a 2-complex. Or, ironically, one can also work with a kind of "dual" which is a full-blown 4D differential manifold which has a 2-complex of "defect" removed from it and is perfectly flat everywhere else.
A two-complex is basically just like a graph (of nodes and links) except it has one higher dimensionality (vertices, edges, faces). A two-complex is mathematically sufficient to carry a sketch of the geometric information (the curvatures, angles, areas between event-marked regions,...) contained in a 4D manifold where this departs from flatness. A two-complex provides a kind of finite combinatorial shorthand way of writing down the geometry of a 4D continuum.

So we will watch and see how this goes. Is it time to advance from the 1850 spacetime manifold beachhead, or not yet time to do that?

marcus said:
...

The central quantity in the theory is the complex number Z_C(h) and one can think of that number as saying

Z_roadmap(boundary conditions)
 
  • #40
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?
 
  • #41
Kevin_Axion said:
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?
I'm agnostic about what nature IS. I like the Niels Bohr quote that says physics is not about what nature is, but rather what we can say about it.

Also another favorite is the Rovelli quote that QG is not about what spacetime is but about how it responds to measurement.

(there was a panel discussion and he was trying to say that arguments about whether it is really made of chainlink-fence, or tinkertoy, or lego-blocks, rubberbands, or tetrahedra, or the 4D analog of tets, called 4-simplices, or general N-face polyhedra... are not good arguments. How one sets up is really just a statement about how one intends to calculate. One calculates the correlations between measurements/events. The panel discussion was with Ashtekar and Freidel, at PennState in 2009, as I recall. I can get the link if anyone is interested. It told me that QG is about geometric information, i.e. observables, not about "ontology". So I liked that and based my agnosticism on it.)

BTW I think human understanding grows gradually, almost imperceptibly, like a vine up a wall. Nothing works if it is too big a step, or jump. Therefore, for me, there is no final solution, there are only the small steps that the human mind can take now. The marvel of LQG, for me, is that it actually seems as if it might be possible to take this step now, and begin to model spacetime with something besides a manifold, and yet still do calculations (not merely roll the Monte Carlo simulation dice of CDT and Causets.)

But actually, Kevin, YES! :biggrin: Loosely speaking, the way almost everyone does speak, and with the weight on "essentially" as I think you meant it, in this approach spacetime essentially is something like what you said!
 
  • #42
tom.stoer said:
The problem with that approach was never the correct semiclassical limit (this is a minor issue) but the problem to write down a quantum theory w/o referring to classical expressions!

In the past two years I have repeatedly tried to stimulate a discussion on this issue, with no luck; everybody seems to be happy with it or to just accept it. I have never seen any good thread on this issue because it seems to be sacrilegious to talk about it.

Moreover, I think the real culprit is differential equations: they are inherently guesswork, the technique is always to "add terms" to get them to fit experiment, not to mention that they only relate points to their neighbors, and there is the notorious boundary-condition requirement. They have served us well for a long time, but no fundamental theory should be like that.

As for LQG, the original idea was just the only option to make GR look like the quantum and to "see what happens", only for Rovelli to conclude that spacetime and matter should be related. But how? LQG is giving hints which have not been capitalized on. I still think spacetime is "unphysical" and must be derived from matter and not the other way around.
 
  • #43
Kevin_Axion said:
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?

Just a little language background, in case anyone is interested: The usual name for the analogous thing in 4D, corresponding to a tet in 3D, is "4-simplex".

Tetrahedron means "four sides", and a tetrahedron does have four (triangular) sides. A tet is also a "3-simplex" because it is the simplex that lives in 3D. Just like a triangle is a 2-simplex.

The official name for a 4-simplex is "pentachoron"; "choron" means 3D room in Greek. The boundary of a pentachoron consists of five 3D "rooms"---five tetrahedrons.
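A quick counting check (standard simplex combinatorics, nothing specific to LQG): an n-simplex has C(n+1, k+1) faces of dimension k, which for the pentachoron gives exactly those five tetrahedral "rooms":

```python
from math import comb

n = 4  # the 4-simplex (pentachoron)
for k in range(n + 1):
    print(k, comb(n + 1, k + 1))
# 0 5   (vertices)
# 1 10  (edges)
# 2 10  (triangles)
# 3 5   (tetrahedral "rooms")
# 4 1   (the 4-simplex itself)
```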

To put what you said more precisely

So essentially quantum space-time is nodes connecting to create pentachorons?

Loosely speaking that's the right idea. But we didn't touch on the key notion of duality. It is easiest to think of in 2D. Take a pencil and triangulate a flat piece of paper with black equilateral triangles. Then put a blue dot in the center of each triangle and connect two dots with a blue line if their triangles are adjacent.

The blue pattern will look like a honeycomb hexagon tiling of the plane. The blue pattern is dual to the black triangulation. Each blue node is connected to three others.

Then imagine it in 3D where you start by triangulating regular 3D space with tetrahedra. Then you think of putting a blue dot at the center of each tet, and connect it with a blue line to each of the 4 neighbor blue dots in the 4 adjacent tets.

In some versions of LQG, the spin networks---the graphs that describe 3D spatial geometry---are restricted to be dual to triangulations. And in 4D where there are foams (analogous to graphs), only foams which are dual to triangulations are allowed.

These ideas---simplexes, triangulations that chop up space or spacetime into simplexes, duals, etc.---become very familiar and non-puzzling. One gets used to them.
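Here is the 2D dual construction as a toy Python sketch (the triangle list is my own made-up example): two triangles get a blue link between their centers exactly when they share an edge, i.e. two vertices.

```python
from itertools import combinations

# Triangles listed by their vertex triples.
triangles = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 2, 4)]

def shares_edge(t1, t2):
    """Adjacent means the two triangles have two vertices in common."""
    return len(set(t1) & set(t2)) == 2

dual_links = [(i, j) for i, j in combinations(range(len(triangles)), 2)
              if shares_edge(triangles[i], triangles[j])]
print(dual_links)  # [(0, 1), (0, 3), (1, 2), (2, 3)] -- the "blue" dual graph
```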

So that would be an additional wrinkle to the general idea you expressed.

Finally, it gets simpler again. You throw away the idea of triangulation and just keep the idea of a graph (for 3D) and a foam, thought of either as 4D geometry or as the evolution of 3D geometry. And you let the graphs and foams be completely general, so no more headaches about the corresponding dual triangulation or even whether there is one. You just have general graphs and two-complexes, which carry information about observables (area, volume, angle, ...)
===============================

Kevin, one could say that all this stuff about tetrahedrons and pentachorons and dual triangulations is just heuristic detail that helps people get to where they are going, and at some point becomes extra baggage---unnecessary complication---and gets thrown out.

You can for instance look at 1010.1939. In fact it might do you good. You see a complete presentation of the theory in very few pages and no mention of tetrahedrons :biggrin:

Nor is there any mention of differentiable manifolds. So there is nothing to chop up! There are only the geometric relations between events/measurements. That is all we ever have, in geometry. Einstein pointed it out already in 1916: "the principle of general covariance deprives space and time of the last shred of objective reality". Space has no physical existence, there are only relations among events.

We get to use all the lego blocks we want and yet there are no legoblocks. Something like that...
 
  • #44
At any rate, let's get back to the main topic. There is this new formulation, best presented in http://arxiv.org/abs/1010.1939 or so I think, and we have to ask whether it is simple enough, and also wonder whether it will be empirically confirmed. It gives Feynman rules for geometry, leading to a way of calculating a transition amplitude, a certain complex number, which I wrote

Z_roadmap(boundary conditions)

the amplitude (like a probability) of going from initial to final boundary geometry following the Feynman diagram roadmap of a certain two-complex C.

A two-complex is a finite list of abstract vertices, edges, and faces: vertices where the edges arrive and depart, and faces bordered by edges (the list says which connect with which).

Initial and final geometry details come as boundary edge labels which are elements of a group G = SU(2). There is some finite number L of boundary edges, so the list of L group elements labeling the edges can be written h = (h_1, h_2, ..., h_L).

So, in symbols, the complex number is Z_C(h). The theory specifies a formula for computing this, which is given by equation (4) on page 1 of http://arxiv.org/abs/1010.1939 , the paper I mentioned.

Here is an earlier post that explains some of this:
marcus said:
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G × G × ... × G.
Physicists think that they can write down x_i and have this mean either x_i or else the N-tuple (x_1, x_2, ..., x_N), depending on context. This is all right to a certain extent, but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one. One failure mode is mathematical: to be successful, a theory should (ideally) be mathematically simple, as well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in a fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h), where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components, so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The tuple symbol h is my notational crutch, which I use to keep order in my head. Rovelli, instead, makes free use of the subscript l, which runs from 1 to L, and has no symbol for the full tuple h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying

Z_roadmap(boundary conditions)
 
Last edited:
  • #45
The way equation (4) works is that you let the boundary information h percolate into the foam from its outside surface, and you integrate over all the other labels that the two-complex C might carry, compatible with what is fixed on the surface.

The foam is like an information-sponge, with a certain well-defined boundary surface (actually a 3D hypersurface geometry; think initial + final). You paint the outside of the sponge with some information-paint h, and the paint seeps and soaks into the inside, constraining to some extent what colors can be there. Then you integrate over everything that can be inside, compatible with the boundary.

So in the end the Z amplitude depends only on the choice of the roadmap C, a pure unlabeled diagram, plus the L group-element labels on the boundary graph.

If the group-labeled boundary graph happens to have two connected components, you can call one "initial geometry" and the other "final geometry", and then Z is a "transition amplitude" from initial to final, along the two-complex roadmap C.
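Here is a toy version of that "paint the boundary, sum out the inside" step, with a two-element label set standing in for SU(2); everything in it is invented, just to show the pattern:

Code:
# Toy sketch: fix boundary labels, sum out the internal ones.
# Two labels stand in for SU(2); the weight function is invented.
from itertools import product

LABELS = [0, 1]                       # toy stand-in for group elements
boundary = {"b0": 1, "b1": 0}         # fixed boundary data (the "paint")
internal = ["i0", "i1", "i2"]         # labels to be integrated out

def amplitude(lab):
    # Invented local weight: favor labelings where linked edges agree.
    pairs = [("b0", "i0"), ("i0", "i1"), ("i1", "i2"), ("i2", "b1")]
    w = 1.0
    for a, b in pairs:
        w *= 1.0 if lab[a] == lab[b] else 0.5
    return w

Z = 0.0
for choice in product(LABELS, repeat=len(internal)):
    lab = dict(boundary, **dict(zip(internal, choice)))
    Z += amplitude(lab)
print(Z)   # depends only on the boundary labels and the "foam" shape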

BTW Etera Livine just came out with a 90-page survey and tutorial paper on spinfoam. It is his habilitation, so he can be research director at Lyon, a job he has, from the looks of it, already been performing. Great! Etera has posted here at PF Beyond sometimes. His name means Ezra in the local-tradition language where he was raised. A good bible name. For some reason I like this. I guess I like the name Ezra. Anyway he is a first-rate spinfoam expert, and we can probably find this paper helpful.

http://arxiv.org/abs/1101.5061
A Short and Subjective Introduction to the Spinfoam Framework for Quantum Gravity
Etera R. Livine
90 pages
(Submitted on 26 Jan 2011)
"This is my Thèse d'Habilitation (HDR) on the topic of spinfoam models for quantum gravity, which I presented in l'Ecole Normale Supérieure de Lyon on december 16 2010. The spinfoam framework is a proposal for a regularized path integral for quantum gravity, inspired from Topological Quantum Field Theory (TQFT) and state-sum models. It can also be seen as defining transition amplitudes for the quantum states of geometry for Loop Quantum Gravity (LQG)."

It may interest you to go to page 61, where Etera's Chapter 4, "What's Next for Spinfoams?", begins.
 
Last edited:
  • #46
Awesome, thanks for the detailed explanation marcus! I'm in grade 11, so the maths only makes partial sense to me, but the words will be good enough for now. About connecting the points at the centers of the triangles, so that you always have an N-polygon with three N-polygons meeting at each vertex: what is the significance of that? Will you have more meeting at each vertex with pentachorons (applying the same procedure), because there exist more edges?
 
Last edited:
  • #47
Kevin_Axion said:
... About connecting the points at the centers of the triangles, so that you always have an N-polygon with three N-polygons meeting at each vertex: what is the significance of that? Will you have more meeting at each vertex with pentachorons (applying the same procedure), because there exist more edges?
My writing wasn't clear, Kevin. The thing about only three meeting was just a detail I pointed out about the situation on the plane, when you go from the equilateral-triangle tiling to its dual, the hexagonal tiling. I wanted you to picture it concretely. That particular aspect does not generalize to other polygons or to other dimensions. I was hoping you would draw a picture of how there can be two tilings, each dual to the other.

It would be a good brain-exercise, I think, to imagine how ordinary 3D space can be "tiled" or triangulated by tetrahedra. You can set down a layer of regular tets pointing up, but then how do you fill in? Here is the catch: with regular tets (analogous to equilateral triangles) alone it cannot actually be done, because the gaps between them turn out to be octahedra. So a triangulation of space must either mix in octahedra or use tets that are not quite regular.

And when you have 3D space filled with tets, what is the dual to that triangulation? This gets us off topic. If you want to pursue it, maybe start a thread about dual cell-complexes or something? I'm not an expert, but there may be someone good on that.
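If you want a numerical hint at why the regular-tet version runs into trouble, try this minimal sketch (the angle fact is standard; the code just checks it):

Code:
import math

# Dihedral angle of a regular tetrahedron: arccos(1/3).
dihedral = math.degrees(math.acos(1 / 3))
print(dihedral)        # ~70.53 degrees
print(360 / dihedral)  # ~5.10: not a whole number, so regular tets
                       # cannot close up flat around a shared edge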
 
  • #48
The Wiki article is good: "The 5-cell can also be considered a tetrahedral pyramid, constructed as a tetrahedron base in a 3-space hyperplane, and an apex point above the hyperplane. The four sides of the pyramid are made of tetrahedron cells." - Wikipedia: 5-cell, http://en.wikipedia.org/wiki/Pentachoron#Alternative_names
Anyways, I digress. I'm sure this is slightly off-topic.
 
  • #49
Oh good! You are on your own. I googled "dual cell complex" and found this:
http://www.aerostudents.com/files/constitutiveModelling/cellComplexes.pdf

Don't know how reliable or helpful it may be.
 
  • #50
I understand some vector calculus, and that appears to be the math being used. Thanks, I'm sure that will be useful!
 
Last edited: