Signs LQG has the right redefinition (or wrong?)

  • Thread starter marcus
In summary, the 2011 Zakopane QG school will run the first two weeks of March. Rovelli has 10 hours of lectures, presumably to present his current understanding of the theory at a level for advanced PhD students and postdocs wanting to get into LQG research. This will be, I guess, the live definitive version.
  • #106
Yes, I did notice the difference. When mentioning the divergence I always meant (26) and (27) because of their relationship through summing=refining. But yes, it is true that the equivalence is not obvious, and in fact only holds exactly for some models. In other models, there is another factor. Anyway, I'd be perfectly happy if you treat (27) too. In the summing=refining paper, they mention that (27) also has convergence issues, even without referring to (26).

I don't see how the convergence is a minor issue. If the sum does not even converge in principle, then the theory is meaningless. There's no point in taking the first terms of a divergent series (well, it could be an asymptotic series, in which case the first terms of a divergent series do carry information). But then that would seriously damage LQG's claim to provide a non-perturbative definition of quantum gravity.
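To illustrate the asymptotic-series caveat: here is a toy sketch (purely illustrative, nothing to do with any actual spinfoam amplitude) of the classic Stieltjes series, whose partial sums first approach the true value and then blow up, so only the first terms are usable:

```python
import math

def stieltjes_true(x, n_steps=200000, t_max=50.0):
    # crude trapezoidal estimate of S(x) = integral_0^inf e^{-t}/(1 + x t) dt
    h = t_max / n_steps
    s = 0.5 * (1.0 + math.exp(-t_max) / (1 + x * t_max))
    for k in range(1, n_steps):
        t = k * h
        s += math.exp(-t) / (1 + x * t)
    return s * h

def partial_sum(x, n_terms):
    # S(x) ~ sum_{n>=0} (-1)^n n! x^n : divergent for every x > 0,
    # yet asymptotic to S(x) as x -> 0
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= -(n + 1) * x
    return total

x = 0.1
true_value = stieltjes_true(x)
errors = [abs(partial_sum(x, n) - true_value) for n in range(1, 31)]
best_n = min(range(len(errors)), key=errors.__getitem__) + 1
# the error shrinks until roughly n ~ 1/x terms, then grows without bound
```

The point is that an asymptotic series is usable despite divergence, but only up to an optimal truncation order; past that, adding terms makes things worse.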
 
  • #107
Just to be clear, do we both realize that we are talking about a type of IR divergence that

1. would not arise if the U is finite, and
2. they have ideas for how to address anyway (but since the formulation is new, they haven't gotten around to working them out)

or do you see things in a darker gloomier light? :biggrin:
 
  • #108
marcus said:
Just to be clear, do we both realize that we are talking about a type of IR divergence that

1. would not arise if the U is finite, and
2. they have ideas for how to address anyway (but since the formulation is new, they haven't gotten around to working them out)

or do you see things in a darker gloomier light? :biggrin:

Even if the boundary is finite, it isn't clear to me that the number of 2-complexes associated with a given finite boundary is finite. I do agree the sum is discrete, so it depends on the convergence of a probably infinite discrete sum, i.e. in Eq (27) of http://arxiv.org/abs/1010.1939 it's not clear to me that the largest j and n possible are finite.
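Just to make the worry concrete, here is a toy sketch (my own illustration, not the actual amplitude of Eq (27)): a sum over unbounded representation labels j whose terms grow like the squared dimension is finite for any cutoff but diverges as the cutoff is removed:

```python
def toy_spin_sum(j_max):
    # toy model (NOT the actual EPRL amplitude): sum over half-integer
    # spins j = 0, 1/2, ..., j_max of the squared dimension (2j + 1)^2
    return sum((2 * (k / 2) + 1) ** 2 for k in range(int(2 * j_max) + 1))

partial_sums = [toy_spin_sum(j_max) for j_max in (10, 100, 1000)]
# removing the cutoff j_max -> infinity makes the toy sum diverge
```

This is the shape of the worry: nothing in the sum itself bounds the largest j, so finiteness has to come from somewhere else (a cosmological-constant/quantum-group cutoff, renormalization, or cancellations).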

There is an analogous problem in GFT, which both Freidel and Oriti noted in their old reviews. Freidel suggested terminating the expansion at tree level, arguing that the tree level expansion was basis independent (or something like that), while Oriti suggested GFT renormalization, which both of them worked on later. http://arxiv.org/abs/0905.3772 There's of course also Rivasseau and colleagues working on this, as you know.

The other major problem (I believe it is a problem, looking at things from AdS/CFT) is the interpretation of the formalism. I doubt the geometry of the formalism is so simply related to spacetime geometry. In AdS/CFT, many geometrical objects do not have the meaning of spacetime geometry. It's interesting to see that Barrett is exploring an approach like this. I have no idea if it's a red herring, but papers in which spin networks and AdS/CFT show up together are http://arxiv.org/abs/0905.3627 and http://arxiv.org/abs/0907.2994 .

BTW, another paper that is helpful in reading "summing=refining" (http://arxiv.org/abs/1010.5437) is this one, explicating the relationship between the holomorphic and spin network representations: http://arxiv.org/abs/1004.4550 .
 
Last edited by a moderator:
  • #109
The discussion (p10) of http://arxiv.org/abs/1101.6078 makes very interesting comments about the current models:

"Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. ... This invariance is seen in the Crane-Yetter model and also in the 3d gravity models, the Ponzano-Regge model and the Turaev-Viro model, the latter having a cosmological constant. The 3d gravity models can be interpreted as a sum over geometries, a feature which is carried over to the four-dimensional gravity models [BC, EPRL, FK], which however do not respect diffeomorphism invariance. ...

The most obvious omission from this list is the ability to implement the Einstein-Hilbert action. In fact, experience with state sum models in four dimensions so far is that there are models with diffeomorphism-invariance but no Einstein-Hilbert action, and there are models implementing the Einstein-Hilbert action but having (at best) only approximate diffeomorphism-invariance."
 
  • #110
I see that Barrett changed the title of his paper just a day or so after first posting it! The original title of 1101.6078, which I printed as soon as it appeared, was "Induced Standard Model and Unification".
Now we have version 2 of the paper titled "State Sum..."

I'll try to get the sense of any substantive changes I notice. Thanks for pointing out his mention of diffeo invariance. Do you think he could be mistaken on that point? I think LQG has all the diff-invariance one can expect to have after one gets rid of the smooth manifold. (And no one, including Barrett, thinks that the smooth continuum exists all the way down---Barrett refers to the manifold model as only an approximation.)

atyy said:
The discussion (p10) of http://arxiv.org/abs/1101.6078 makes very interesting comments about the current models:

"Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. ... This invariance is seen in the Crane-Yetter model and also in the 3d gravity models, the Ponzano-Regge model and the Turaev-Viro model, the latter having a cosmological constant. The 3d gravity models can be interpreted as a sum over geometries, a feature which is carried over to the four-dimensional gravity models [BC, EPRL, FK], which however do not respect diffeomorphism invariance. ...

The most obvious omission from this list is the ability to implement the Einstein-Hilbert action. In fact, experience with state sum models in four dimensions so far is that there are models with diffeomorphism-invariance but no Einstein-Hilbert action, and there are models implementing the Einstein-Hilbert action but having (at best) only approximate diffeomorphism-invariance."

I see he not only changed the title but also expanded the abstract summary:

http://arxiv.org/abs/1101.6078
State sum models, induced gravity and the spectral action
John W. Barrett
(Submitted on 31 Jan 2011 (v1), last revised 1 Feb 2011 (this version, v2))
"A proposal that the bosonic action of gravity and the standard model is induced from the fermionic action is investigated. It is suggested that this might occur naturally in state sum models."

Both changes are definite improvements (IMHO) making the message clearer and more complete.
========================
A note to myself, so I won't forget re post 97 of Atyy's: Wick rotation, de Sitter space in both the Euclidean and Lorentzian versions, the de Sitter bounce. CDT doesn't yet put in matter. The scale of CDT computer sims was determined to be of order Planck. No time to elaborate, and it may be off-topic anyway.

Atyy, you have provided some valuable signs that the current formulation is NOT satisfactory, and they have to be weighed against signs that it is.
 
  • #111
atyy said:
...(actually I don't believe in the bounce for spinfoams - I think Rovelli is hoping for an outcome like CDT - after performing the full sum - not just the first term - he recovers a finite classical universe - to be fair - CDT has not even discretized down to the Planck scale yet)

You might be interested in this, because of interest in cdt. They managed to estimate the size of their little universes they were creating in the computer. The natural lattice scale, basically an edge of a simplex, turns out to be about one half of one Planck length.

See for example the 2009 review paper
http://arxiv.org/abs/0906.3947
page 26 right after equation 42.

As I recall the result goes back to around 2007, I remember when it first came out. The method used to deduce the size is ingenious, but I can't recall exactly how it works, would have to go back and refresh a bit.

==============
I guess morally you could say that LOLL GETS A BOUNCE with CDT, because she gets the classic de Sitter solution----and classic deS has a natural bounce, just one.
But remember that CDT uses Wick rotation: what they do in the computer is Wick-rotated to Euclidean signature. The Euclidean version of deS is actually S4.

They discuss this various places so if anyone is curious I could look up a reference, why getting a hypersphere path integral with Monte Carlo really means getting the hourglass shape standard deSitter, if you would Wick rotate.
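To make the Wick-rotation point explicit: under t → -iτ the Lorentzian de Sitter scale factor a(t) = H⁻¹cosh(Ht) becomes a(τ) = H⁻¹cos(Hτ), which is the profile of a round sphere. A minimal numerical sketch (my own illustration, units with H = 1):

```python
import math

H = 1.0  # Hubble rate; units chosen so that H = 1

def a_lorentzian(t):
    # de Sitter scale factor: the "hourglass" profile, bounce at t = 0
    return math.cosh(H * t) / H

def a_euclidean(tau):
    # Wick-rotated (t -> -i*tau) scale factor: the profile of a round S4
    return math.cos(H * tau) / H

# the two branches agree at the bounce / equator, and the Euclidean
# sphere closes up at tau = pi/(2H) while the Lorentzian side keeps expanding
```

So a hypersphere in the Euclidean Monte Carlo data and the hourglass-shaped Lorentzian de Sitter universe are two faces of the same solution.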

CDT sims typically do not include matter, and in that respect they are like the pure de Sitter universe as well, which has only a cosmological constant. The pure de Sitter bounce is gentle and shallow compared with the case with matter, where the contracting phase experiences gravitational collapse, a crunch.

But overall, I guess the CDT results are another reason to believe in bounce cosmology---if you believe anything without first seeing observational evidence. I keep that kind of thing in Limbo, believing neither yes nor no.
 
  • #112
marcus said:
You might be interested in this, because of interest in cdt. They managed to estimate the size of their little universes they were creating in the computer. The natural lattice scale, basically an edge of a simplex, turns out to be about one half of one Planck length.

See for example the 2009 review paper
http://arxiv.org/abs/0906.3947
page 36 right after equation 42.

Doesn't it say that the Planck length is about half the lattice spacing?
 
  • #113
marcus said:
You might be interested in this, because of interest in cdt. They managed to estimate the size of their little universes they were creating in the computer. The natural lattice scale, basically an edge of a simplex, turns out to be about one half of one Planck length.

See for example the 2009 review paper
http://arxiv.org/abs/0906.3947
page 26 right after equation 42.

As I recall the result goes back to around 2007, I remember when it first came out. The method used to deduce the size is ingenious, but I can't recall exactly how it works, would have to go back and refresh a bit.

I corrected the page, it is 26, not 36.

==============
I guess morally you could say that LOLL GETS A BOUNCE with CDT, because she gets the classic de Sitter solution----and classic deS has a natural bounce, just one.
But remember that CDT uses Wick rotation: what they do in the computer is Wick-rotated to Euclidean signature. The Euclidean version of deS is actually S4.

They discuss this various places so if anyone is curious I could look up a reference, why getting a hypersphere path integral with Monte Carlo really means getting the hourglass shape standard deSitter, if you would Wick rotate...

atyy said:
Doesn't it say that the Planck length is about half the lattice spacing?

You are probably right. I tend to trust you on details (if not always about interpretations).
I'll check. As I recall the number was something like 0.48 one way or the other. I could have misread.

YES. You read it correctly: when they run these little quantum universes in the computer, they come into existence, evolve, and go out of existence, and they always behave as if the size of the building blocks is about 2 Planck lengths.

With more computer power you can run simulations with more building blocks, but it doesn't make things finer. It just lets the universe grow bigger. The theory does not specify a minimum scale---they don't put one in by hand. It's as if "nature" (the computer sim) had one. It's a bit curious. I haven't seen it explained.

John Baez had a brief explanation of Wick rotation and why CDT uses it (the Metropolis Monte Carlo algorithm needs actual probabilities, not amplitudes). It might be helpful:
http://math.ucr.edu/home/baez/week206.html
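A minimal sketch of the point Baez makes: the Metropolis algorithm accepts or rejects moves by comparing real, non-negative weights exp(-S), which is exactly what the Euclidean (Wick-rotated) action provides; with the Lorentzian exp(iS) there is no probability to compare. A toy example with a single variable and S(x) = x²/2 (my own illustration, not a CDT simulation):

```python
import math
import random

def metropolis(action, steps=200000, delta=1.0, seed=42):
    # Metropolis sampling with weight exp(-action(x)).  This only works
    # because the Euclidean weight is a real non-negative number; with
    # the Lorentzian exp(i*S) there is no probability to compare against.
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(steps):
        x_new = x + rng.uniform(-delta, delta)
        # accept with probability min(1, exp(S_old - S_new))
        if rng.random() < math.exp(min(0.0, action(x) - action(x_new))):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis(lambda x: 0.5 * x * x)  # Gaussian "Euclidean action"
mean = sum(samples) / len(samples)
variance = sum((s - mean) ** 2 for s in samples) / len(samples)
# the target distribution is a standard Gaussian: mean near 0, variance near 1
```

The accept/reject test in the middle is the step that has no meaning for a complex amplitude, which is why CDT simulates the Wick-rotated theory.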
 
  • #114
marcus said:
With more computer power you can run simulations with more building blocks, but it doesn't make things finer. It just lets the universe grow bigger. The theory does not specify a minimum scale---they don't put one in by hand. It's as if "nature" (the computer sim) had one. It's a bit curious. I haven't seen it explained.

Although it's not obvious, the computer simulations do put in a minimum scale by hand, and they hope to make this scale smaller in future simulations, since CDT is supposed to model a theory with a continuum limit (Benedetti does this analytically in 2+1D in http://arxiv.org/abs/0704.3214 ). They talk about how to make the lattice spacing smaller than the Planck scale in the review you mentioned.
 
  • #115
atyy said:
They talk about how to make the lattice spacing smaller than the Planck scale in the review you mentioned.
Indeed, they speculate about how to modify the model to get in closer, around the bottom of page 28 and the top of page 30 in that review paper. They say "work is ongoing". I haven't seen anything about that so far. It is an interesting review, a 2009 writeup of talks given in 2008. I don't know of anything more recent that is comparably complete.

To recap, and wrap up the divergence discussion, we have been talking about signs that LQG has the right redefinition, or that it doesn't. Unresolved divergence issues would be one sign that it doesn't have the right formulation yet. (Unless the issues eventually get resolved.)

We can't presume to make a final verdict, of course, only weigh the various clues and make an educated guess based on how things are going. I mentioned some "good" signs earlier--signs that the research community is increasingly judging the theory's prospects to be favorable. But against that one can balance the large-volume divergence issues.

Rovelli's most recent review paper serves as a kind of status report on this and several other critical questions. Here is what he says on page 19:

==quote http://arxiv.org/abs/1012.4707 page 19 section A "open problems" ==
...
Divergences.
The theory has no ultraviolet divergences. This can be shown in various ways, for instance rewriting (1) in the spin-network basis and noticing that the area gap makes all sums finite in the direction of the ultraviolet. However, divergences might be lurking elsewhere, and they probably are. There might indeed be infrared divergences, that come from large j. The geometrical interpretation of these divergences is known. They correspond to the “spikes” in Regge calculus: imagine taking a triangulation of a two-surface, and moving a single vertex of the triangulation at large distance from the rest of the surface. Then there will be a cycle of triangles which are very lengthened, and have arbitrary large area. This is a spike.

A number of strategies can be considered to control these infrared divergences. One is to regularize them by replacing groups with quantum groups. This step has a physical ground since this modification of the vertex amplitude corresponds to adding a cosmological constant to the dynamics of the theory. The theory with the quantum group is finite [21, 22].

The second possible strategy is to see if infrared divergences can be renormalized away, namely absorbed into a redefinition of the vertex amplitude. A research line is active in this direction [117, 118], which exploits the group-field-theory formulation of the theory.

Finally, the third possibility is that infrared divergences could be relatively harmless on local observables, as they are in standard field theory.
==endquote==
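The "spike" in the quote is easy to picture numerically: keep a base edge of a triangle fixed in the surface and pull the opposite vertex away, and the triangle's area grows without bound, which is the geometric source of the large-j divergence. A small sketch (my own illustration):

```python
import math

def triangle_area(p, q, r):
    # area of a triangle in 3D from the cross product of two edge vectors
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

# fixed base edge lying in the surface; the third vertex is pulled away
base_a, base_b = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
areas = [triangle_area(base_a, base_b, (0.5, 0.0, h)) for h in (1.0, 10.0, 100.0)]
# the area of the "spiked" triangle grows linearly and without bound in h
```

Since spin labels j play the role of areas, an unbounded spike is exactly an unbounded j, which is why the sum over configurations can diverge in the infrared.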
 
  • #116
Atyy, you called attention to one of the six wishes on John Barrett's "wish list" for a unifying state sum model. His first wish, you pointed out, was not for "diffeomorphism invariance" but for "invariance under PL homeomorphisms." That takes us out of the category of smooth manifolds. You see him backing out of manifolds, but taking with him whatever is the appropriate descendant of diff-invariance.

It is not recognized in that particular paper, but LQG does the analogous thing and retains the appropriate residual form of diff-invariance. Rovelli's most recent papers make a point of the connection with PL (piecewise linear) manifolds and also of the combinatorial version of factoring out diffeomorphism gauge.

The two are closer than may appear to you at first sight. In any case you point us in an interesting direction. We should really list ALL SIX of Barrett's goals for a state sum unification. All are potentially interesting. They are listed on page 10.
atyy said:
The discussion (p10) of http://arxiv.org/abs/1101.6078 makes very interesting comments about the current models:

"Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. ...
... in four dimensions so far is that there are models with diffeomorphism-invariance but no Einstein-Hilbert action, and there are models implementing the Einstein-Hilbert action but having (at best) only approximate diffeomorphism-invariance."

I'll get the page 10 "wish list" to provide context.

==quote Barrett "State sum models, induced gravity, and the spectral action"==
These features have all been seen in various models and it is not unreasonable to expect there to exist state sum models with all of them at once. The wish-list of properties for a state sum model is
• It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
• The state sum can be interpreted as a sum over geometries
• Each geometry is discrete on the Planck scale
• The coupling to matter fields can be defined
• Matter modes are cut off at the Planck scale
• The action can include a cosmological constant
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. The piecewise-linear homeomorphisms are maps which are linear if the triangulations are subdivided sufficiently and play the same role as diffeomorphisms in a theory ...

...The coupling of the 3d gravity models to matter is studied in [BO, FL], and extended to 4d models in [BF]. A model with a fermionic functional integral have been studied in [FB, FD], though as yet there is no model which respects diffeomorphism invariance. This is clearly an important area for future study.
===endquote===

Notice at the end he cites four LQG papers by Laurent Freidel (FL, BF, FB, FD).

And since he has already gotten out of the smooth category and into piecewise-linear, why not go all the way to the 2-skeleton?

All LQG does is take the process one step further. A PL manifold is already in some sense combinatorial, just with a bunch more excess baggage. When you triangulate, the divisions between the simplexes form a foam. And all the interesting stuff happens at the joints, that is, on the foam. That is where curvature occurs!

So LQG does the logical thing and focuses on the 2-complex, the foam, and labels it.

It still retains the mathematical essence of the classic diff-invariance. The point about diff-invariance in GR was to factor it out. The essential object (a "geometry") was an equivalence class. When you reach that level there are no more diffeomorphisms. They are merely the gauge equivalences between different representatives of the class.

LQG reflects this. You can see it still being dealt with when they divide out the multiplicity factor (the foam automorphisms) in the state sum. The foam has almost all the diffeo gauge redundancy squeezed out, but there is still some margin of double-counting because of symmetries in the foam, so they have to deal with that.

You also see Loll dealing with the same thing. I remember them dividing out by the multiplicity of a triangulation---its automorphisms---in their CDT state sum. Except for that, a triangulation represents a unique geometry: there is no more diffeo equivalence to factor out.
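The multiplicity factor both theories divide out is just the order of the automorphism group of the combinatorial object. For a small graph this can be counted by brute force (a toy sketch, my own illustration, not code from either research program):

```python
from itertools import permutations

def automorphism_count(n_vertices, edges):
    # brute-force |Aut(G)| of a small undirected graph: count vertex
    # permutations that map the edge set onto itself
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(range(n_vertices)):
        image = {frozenset((perm[a], perm[b])) for a, b in edge_set}
        if image == edge_set:
            count += 1
    return count

# triangle graph: |Aut| = 3! = 6, so its contribution to a state sum
# would carry a symmetry factor of 1/6
triangle_aut = automorphism_count(3, [(0, 1), (1, 2), (2, 0)])
```

Dividing each configuration's weight by its automorphism count is exactly the "residual double-counting" correction described above: all the continuum diffeo gauge is gone, and only the discrete symmetries of the foam or triangulation remain.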

I don't want to take time now to look up references, but if you want, and ask about it, I think I can get links and page-refs about this. Depends if anyone is curious.
 
  • #117
Actually, I think diff invariance is a minor issue. I think the bigger issue is interpretation of the formalism. Rovelli has consistently said no unification of gravity and matter. I suspect there has to be unification - that's a key message from strings - and it is interesting to see Barrett exploring unification ideas - ie. that matter is essential for gravity. As you know, I believe Rovelli's philosophy leads to Asymptotic Safety, but his formalism leads elsewhere.
 
  • #118
atyy said:
...Rovelli has consistently said no unification of gravity and matter.

I don't recall Rovelli saying no unif. of grav. and matter EVER. What he says in the latest review is take one step at a time. I think the ultimate aim is unification, and the philosophy is pragmatic and incremental.

Let's first see how to formulate a backgroundless quantum field theory.
The first such, the first field should be geometry (= gravity).
When you know how to write down a backgroundless quantum geometry (=backgroundless quantum grav. field) then define matter on it.
Then you will see how to unify.

Rovelli didn't say you never unify. He has opposed the Great Leap Forward impulse of making a big jump to a dreamed-of final theory.

You and I see the same facts and you are admirably alert and perceptive, but we sometimes differ as to interpretation. I see LQG as addressing all 6 of Barrett's desiderata, and having an ultimate goal of unification, and being on track for that goal (at least for the present.)

I see the Zurich conference organizing committee as a place where Rovelli, Barrett, Nicolai can meet and discover how to see eye to eye on this project.

Maybe since you brought up Barrett's page 10 "wish list" we should list all 6 of his "wishes" and see how well the current formulation of LQG addresses them.
 
  • #119
Picking up on a couple of things:
marcus said:
...
I see the Zurich conference organizing committee as a place where Rovelli, Barrett, Nicolai can meet and discover how to see eye to eye on this project.

Maybe since you brought up Barrett's page 10 "wish list" we should list all 6 of his "wishes" and see how well the current formulation of LQG addresses them.

The June Zurich conference, the February Lisbon workshop, and the March Zakopane school are, I think, the three defining QG events for 2011. We need to look at their programs in relation to one another.

Zurich "Quantum Theory and Gravitation"
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:start
(organizers Barrett, Grosse, Nicolai, Picken, Rovelli)
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:speakers

Zakopane "Quantum Geometry/Gravity School"
QG here means Quantum Geometry and Quantum Gravity, in the sense that the ESF supports it.
(organizers include Barrett, Lewandowski, Rovelli)
http://www.fuw.edu.pl/~kostecki/school3/
https://www.physicsforums.com/showpost.php?p=3117688&postcount=14

Lisbon "Higher Gauge, TQFT, Quantum Gravity" school and workshop
https://sites.google.com/site/hgtqgr/home
(organizers include Roger Picken and Jeffrey Morton)
https://sites.google.com/site/hgtqgr/speakers
(speakers include Freidel, Baratin, Dittrich...)

Since the ESF QG agency is supporting all three of these we could think of Barrett's recent paper (cited by Atyy) as suggesting a common direction, giving a hint of a keynote. He probably tries to think coherently about the whole picture. Let's look at what he calls his "wish list".

==quote Barrett http://arxiv.org/abs/1101.6078 ==
The wish-list of properties for a state sum model is
  • It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
  • The state sum can be interpreted as a sum over geometries
  • Each geometry is discrete on the Planck scale
  • The coupling to matter fields can be defined
  • Matter modes are cut off at the Planck scale
  • The action can include a cosmological constant
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is...
==endquote==
It is a clear, cogent program, except that it may be overly restrictive to assume a 4-manifold. Why have a manifold at all, since that suggests a continuous "classical trajectory" of spatial geometry?
I think the (possibly unconsidered) assumption of a 4-manifold favors a kind of preconception of what a state-sum model, or a TQFT, ought to look like.
 
  • #120
Atyy you quoted this of Barrett's, right where he gives his 6-point "wish list". Do you think he is right about "do not respect" or might he have overlooked something?

atyy said:
The discussion (p10) of http://arxiv.org/abs/1101.6078 makes very interesting comments about the current models:

"Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. ... a sum over geometries, a feature which is carried over to the four-dimensional gravity models [BC, EPRL, FK], which however do not respect diffeomorphism invariance. ..."

Barrett has a particular idea of a state-sum model that I think conforms roughly to an Atiyah TQFT paradigm. He accordingly expects to see something at least reminiscent of a manifold, with the moral equivalent of diffeomorphisms. He sets out these 6 desiderata:

==quote Barrett http://arxiv.org/abs/1101.6078 ==
The wish-list of properties for a state sum model is
  • It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
  • The state sum can be interpreted as a sum over geometries
  • Each geometry is discrete on the Planck scale
  • The coupling to matter fields can be defined
  • Matter modes are cut off at the Planck scale
  • The action can include a cosmological constant
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is...
==endquote==
Since by "diffeomorphism" what he means is a 1-1 onto piecewise linear map of PL manifolds, his "RESPECT diffeo" criterion seems to force models to work on something rather restrictive, a PL manifold, a given 4d triangulation if you will. What about approaches that work on some other structure containing approximately the same information, and respecting whatever of diff-invariance carries over to that structure?

I think the new formulation of LQG actually meets the first criterion, because it respects all that is left of diffeo-invariance once one throws away the smooth manifold, and because it can optionally be couched in terms of a generalized TQFT on a manifold with defects. This was one of the points made in http://arxiv.org/abs/1012.4707.

Have a look at page 14, right after the paragraph that says
==quote 1012.4707 Section "Loop gravity as a generalized TQFT" ==
Therefore loop gravity is essentially a TQFT in the sense of Atiyah, where the cobordism between 3 and 4d manifold is replaced by the cobordism between graphs and foams. What is the sense of this replacement?
==endquote==
Some background on TQFT http://math.ucr.edu/home/baez/week58.html
Barrett's 1995 paper on realizing 4d QG as a generalized TQFT http://arxiv.org/abs/gr-qc/9506070
 
  • #121
marcus said:
  • It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
  • ...
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is...
This is problematic already at the classical level, since we know that in 4 dimensions the homeomorphic, differentiable and piecewise-linear classifications of manifolds need not coincide (Donaldson et al.). So either one abandons the manifold altogether (which means that it may emerge only in a certain classical limit) or one takes the manifold seriously, which means that one must answer the questions regarding differentiable structures.
 
  • #122
tom.stoer said:
This is problematic already at the classical level, since we know that in 4 dimensions the homeomorphic, differentiable and piecewise-linear classifications of manifolds need not coincide (Donaldson et al.). So either one abandons the manifold altogether (which means that it may emerge only in a certain classical limit) or one takes the manifold seriously, which means that one must answer the questions regarding differentiable structures.

Barrett is a central player in this business (see post #119) and it sounds to me like he was prepared to drop the smooth structure assumption already in 1995.
(Some background on TQFT http://math.ucr.edu/home/baez/week58.html and
Barrett's 1995 paper on realizing 4d QG as generalized TQFT http://arxiv.org/abs/gr-qc/9506070 )
As you surely know, QG people tend to think of the smooth manifold as a macroscopic approximation not corresponding to micro reality. One wonders what geometry could be like at very small scale, but one doesn't expect it to be a 4D smooth manifold!

So a PL manifold with defects is a possible model. Personally I think it makes sense to throw out the manifold completely and look at how our information is structured. Minimalist.
But in this paper Barrett hangs on to the PL manifold! He wants a TQFT and he has the notion that some kind of manifold is needed to base that on.

Here is what Rovelli says:
==quote http://arxiv.org/abs/1012.4707 page 14==

Section H.Loop gravity as a generalized TQFT
...
...
Therefore loop gravity is essentially a TQFT in the sense of Atiyah, where the cobordism between 3 and 4d manifold is replaced by the cobordism between graphs and foams. What is the sense of this replacement?

TQFT defined on manifolds are in general theories that have no local degrees of freedom, such as BF or Chern-Simon theory, where the connection is locally flat. Its only degrees of freedom are global ones, captured by the holonomy of the connection wrapping around non-contractible loops in the manifold. In general relativity, we do not want a flat connection: curvature is gravity. But recall that the theory admits truncations à la Regge where curvature is concentrated in d−2 dimensional submanifolds. If we excise these d − 2 submanifolds from the Regge manifold, we obtain manifolds with d − 2 dimensional defects. The spin connection on these manifolds is locally flat, but it is still sufficient to describe the geometry, via its non-trivial holonomies wrapping around the defects [51]. In other words, general relativity is approximated arbitrarily well by a connection theory of a flat connection on a manifold with (Regge like) defects. Now, the relevant topology of a 3d manifold with 1d defects is precisely characterized by a graph, and the relevant topology of a 4d manifold with 2d defects is precisely characterized by a two-complex. In the first case, the graph is the 1-skeleton of the cellular complex dual to the Regge cellular decomposition. It is easy to see that this graph and the Regge manifold with defects have the same fundamental group. In the second case, the two-complex is the 2-skeleton of the cellular complex dual to the 4d Regge cellular decomposition. In this case, the faces of the two-complex wrap around the 2d Regge defects. Therefore equipping Atiyah’s manifolds with d − 2 defects amounts precisely to allowing local curvature, and hence obtaining genuinely local (but still generally covariant) bulk degrees of freedom.
==endquote==

In other words, you can throw out the continuum and work with a minimalist combinatorial structure--the graph, the two-complex (foam)--and if you ever need to, for any reason, you can get manifolds back.
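The quoted construction can be illustrated in the simplest case, a flat 2D cone: the connection is flat everywhere away from the defect, yet parallel transport around a loop that winds the defect returns a rotation by the deficit angle, so the holonomy "sees" all the curvature sitting on the bone. A toy sketch (my own illustration):

```python
import math

def rotation(theta):
    # 2D rotation matrix, returned as a pair of rows
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def holonomy(deficit_angle, winding_number):
    # parallel transport on a flat cone: the connection is flat away from
    # the defect, yet a loop winding around it picks up a rotation by
    # (winding_number * deficit_angle) -- the curvature "lives on the bone"
    return rotation(winding_number * deficit_angle)

deficit = math.pi / 3
trivial_loop = holonomy(deficit, 0)   # contractible loop: identity
one_circuit = holonomy(deficit, 1)    # winds the defect once
```

Only the winding number around the defect matters, which is the sense in which the graph (or two-complex) carrying the fundamental group captures everything the geometry knows.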
 
  • #123
I guess there is a non-trivial point to make here: you can use differential geometry to show that the spinfoam approach is valid. (It may not be in accord with Nature; experiment and observation will determine that. But it is mathematically sound.)

The basic idea is "the curvature lives on the bones". Bones being math jargon for D-2 dimensional creases/cuts/punctures able to carry all the geometrical information. A smooth manifold can be approximated arbitrarily closely by a piecewise flat one with the curvature concentrated on the D-2 dimensional divisions.

Thinking about 3D geometry the "bones" are one-dimensional line segments, corresponding more or less with our everyday idea of skeletal bones. But in 2D they are zero-dimensional. And in 4D the bones are 2D---like the faces in a 2-complex, or foam.

There is something to understand here and it helps to first picture triangulating a 2D surface with flat triangles. The curvature condenses to "conical singularity points" where, if you tried to flatten the surface, you would find either too little or too much material. If you imagine a 2D surface triangulated with identical equilateral triangles, it would be a point where more than 6 or fewer than 6 triangles were joined. (This is how curvature arises in CDT.)
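As a small sketch (my illustration, not from the thread): with n equilateral triangles meeting at a vertex, each contributing an interior angle of π/3, the angle deficit at that vertex is 2π − nπ/3. It vanishes exactly at n = 6; fewer triangles give positive (sphere-like) curvature, more give negative (saddle-like) curvature:

```python
import math

def deficit_angle(n_triangles, interior_angle=math.pi / 3):
    """Angle deficit at a vertex where n flat triangles meet.

    Positive deficit = positive (cone/sphere-like) curvature,
    negative deficit = negative (saddle-like) curvature,
    zero deficit = locally flat.
    """
    return 2 * math.pi - n_triangles * interior_angle

for n in (5, 6, 7):
    print(n, deficit_angle(n))
```

The same bookkeeping works in any dimension: replace the triangles' interior angles with the dihedral angles of the simplices hinging on a (d−2)-dimensional bone.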

The situation in 3D is somewhat harder to imagine, but you still can. There the analogous picture is with tetrahedra. The curvature is concentrated on 1D "bones" where too many or too few tetrahedra come together.

The mathematical tool used to feel out curvature is the "holonomy"---namely recording what happens when you go around a bone. In the 2D case you go around a point to detect if there is positive or negative curvature there. In the 3D case you travel along more or less any loop that goes around a 1D bone and do the same thing.
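To make the holonomy concrete (again my sketch, not from the thread): in the 2D picture, carrying a vector once around a cone point composes one flat rotation per wedge passed through. Six equilateral wedges of π/3 close up to 2π and the net rotation is trivial; any other count leaves a detectable rotation, which is exactly the curvature the holonomy records:

```python
import math

def rot(theta):
    """2x2 rotation matrix, represented as a tuple of rows."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def matmul(a, b):
    """Multiply two 2x2 matrices."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def holonomy_angle(n_wedges, wedge=math.pi / 3):
    """Net rotation angle picked up by a vector carried once
    around a cone point built from n flat wedges of the given angle."""
    m = ((1.0, 0.0), (0.0, 1.0))  # start from the identity
    for _ in range(n_wedges):
        m = matmul(rot(wedge), m)
    return math.atan2(m[1][0], m[0][0])  # net angle in (-pi, pi]

print(holonomy_angle(6))  # flat point: trivial holonomy
print(holonomy_angle(5))  # conical point: nontrivial rotation
```

The 3D and 4D cases are the same computation with the loop threaded around a 1D or 2D bone, and with the rotations replaced by the appropriate group elements of the connection.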

Now if you look back at the previous post, where I quoted that "page 14" passage, and think of the 3D case, you can understand the construction.

Take a 3D manifold and triangulate. The piecewise flat approximation. Now you have a web of 1D bones and all the geometry is concentrated there. Now that is not the spin network.
The spin network is in a sense "dual" to that web of bones. It is a collection of holonomy paths that explore around all the bones in an efficient manner. The spin network should be a minimal structure with enough links so that around any bone you can find a way through the network to circumnavigate that bone. And the links should be labeled with labels that record what you found out by circling every bone.

The spin network is a nexus of exploration pathways that extracts all the info from the bones. That is the 3D case.

In the 4D case it is just analogous. Triangulate (now with pentachorons instead of tets) and the bones are 2D, and the geometry lives on the bones, and the foam is the "dual" two-complex that explores, detects, records. It is hard to picture but it is the 4D analog of what the spin network does in 3D.

I am trying to help make sense of that "page 14" passage in the previous post.

This is what it means when, in post #122 https://www.physicsforums.com/showthread.php?p=3124407#post3124407 it says:
general relativity is approximated arbitrarily well by a connection theory of a flat connection on a manifold with (Regge like) defects.

What we are basically talking about, the central issue, is how spinfoam LQG can work as a generalized TQFT. And incidentally meet Barrett's "wish list" for a state sum model.
Which (it now looks increasingly likely) we can put matter on and maybe get the standard matter model.
 
Last edited:
  • #124
tom.stoer said:
This is problematic already at the classical level, as we know that in 4-dim. the homeomorphic, differentiable and piecewise linear structures and classifications of homeomorphic manifolds need not coincide (Donaldson et al.). So either one abandons the manifold altogether (which means that it may emerge in a certain classical limit only) or one takes the manifold seriously, which means that one must answer the questions regarding differentiable structures.

Tom, in light of the above I don't see what is problematic (for any theory of QG I know about.)

The idea that spacetime could be a smooth manifold has never, AFAIK, been taken seriously in the history of QG going back at least to JA Wheeler in the 1970s.

The trajectory of a particle is not even supposed to be a smooth (differentiable) curve when looked at microscopically, much less the micro geometry of space.
 
  • #125
marcus said:
Thanks for pointing out his mention of diffeo invariance. Do you think he could be mistaken on that point? I think LQG has all the diff-invariance one can expect to have after one gets rid of the smooth manifold. (And no one, including Barrett, thinks that smooth continuum exists all the way in---Barrett refers to manifold model as only an approximation.)

After reading the final chapter of Hellmann's thesis, I think what Barrett has in mind is that the EPRL and FK models are triangulation dependent.

I'm not sure, but I believe Rovelli mentions this as being dependent on a particular 2 complex. To remove this dependence, he proposes Eq 26, which we discussed.

I think Hellmann suggests that the triangulation dependence may be ok, if their renormalization via Pachner moves gives an ok theory (in a different sense from GFT).
 
  • #126
atyy said:
After reading the final chapter of Hellmann's thesis, I think what Barrett has in mind is that the EPRL and FK models are triangulation dependent.

I'm not sure, but I believe Rovelli mentions this as being dependent on a particular 2 complex. To remove this dependence, he proposes Eq 26, which we discussed.

I think Hellmann suggests that the triangulation dependence may be ok, if their renormalization via Pachner moves gives an ok theory (in a different sense from GFT).

That's a really interesting comment! I'm not sure about the renormalization via Pachner moves--I don't understand that and will have to read Hellmann's thesis last chapter to try and grasp what he is talking about.

But I agree with the other things you said. The present formulation does depend on a particular two-complex. Any finite set of two-complexes can be subsumed within a larger one, so one is not absolutely tied down. But the large-volume limit question remains to be tackled, as we discussed re Eq 26.
===============

BTW I saw the latest bibliography entry and looked up TOCY. It is defined on page 342 of Rovelli's book--Turaev-Ooguri-Crane-Yetter. Struck me as a remarkable idea, to combine spinfoam with Kaluza-Klein. The reference the authors give is to a paper by Ooguri, who presents the model but does not call it TOCY.
 
Last edited:
  • #127
Several people have offered reasons (or hints) that LQG does NOT have the right (re)formulation so far. Atyy has pointed to equations (26) and (27) in a recent review paper, where conditions for convergence have not been shown. He is unquestionably right, although one can differ about how significant this is. Thanks to all who have offered reasons pro or con. I will look back and see what other points surfaced.

The most cogent and extensive arguments, aside from Atyy's, were offered in this post by Tom Stoer, which I quote in entirety.
tom.stoer said:
I don't think that LQG has been redefined.

Rovelli states that it is time to make the next step from the construction of the theory to the derivation of results. Nevertheless the construction is still not complete as long as certain pieces are missing. Therefore e.g. Thiemann's work regarding the Hamiltonian approach (which is not yet completed and for which the relation to spin foams is still not entirely understood) must still back up other programs

There are still open issues to be solved:
- construction, regularization and uniqueness of the Hamiltonian H
- meaning of "anomaly-free constraint algebra" in the canonical approach
- relation between H and SF (not only kinematical)
- coarse-graining of spin networks, renormalization group approach
- nature and value of the Immirzi parameter
- nature and value of the cosmological constant
- nature of matter and gauge fields (on top, emergent, ...); yes, gauge fields!
And last but not least: If a reformulation is required (which would indicate that the canonical formalism is a dead end), then one must understand why it is a dead end! We don't know yet.

My impression is that Rovelli's new formulation does not address all these issues. His aim is more to develop calculational tools to derive physical results in certain sectors of the theory.

Let's look at QCD: there are several formulations of QCD (PI, canonical, lattice, ...), every approach with its own specific benefits and drawbacks. But nobody would ever claim that QCD has been reformulated (which sounds as if certain approaches would be outdated). All approaches are still valid and are heavily used to understand the QCD vacuum, confinement, hadron spectroscopy, QGP, ... There is not one single formulation of QCD.

So my conclusion is that a new formulation of LQG has been constructed, but not that LQG has been reformulated.

I think all of this is worth reviewing and balancing against the plusses. To do that properly would take work (he put considerable thought into the list). If anybody wants to help out it would be very welcome! I can at best just nibble away piecemeal.
 
  • #128
From looking at the list, I'd say that a lot of what is seen as a possible trouble with the new formulation has to do with its being different from the old one.

The old approach (as most often presented) used a smooth 3D manifold, in which spin networks were embedded, and took a canonical or Hamiltonian approach to the dynamics.

The new approach does not need a smooth manifold---there is no continuum. And it does not need a Hamiltonian. Transition amplitudes between states of geometry are calculated via spinfoam. So that leaves unanswered questions about the prior approach.

It might happen that the older canonical LQG will be completed and that it will even turn out to be mathematically equivalent! It is hard to predict---impossible to predict.
The person most active in developing canonical (Hamiltonian) LQG is, I believe, Thomas Thiemann at Uni Erlangen. Jerzy Lewandowski at Warsaw also has an active interest in it (but not exclusively, he also works on spinfoam LQG). We'll see what these folks and their students come up with.

As Tom points out, there is no reason a theory cannot have several equivalent versions.
 
  • #129
marcus said:
The old approach (as most often presented) used a smooth 3D manifold, in which spin networks were embedded, ...
Only for its derivation (better: motivation)

He must so to speak throw away the ladder, after he has climbed up on it
Wittgenstein


marcus said:
The new approach does not need a smooth manifold
Neither does the old one after its completion.

marcus said:
And it does not need a Hamiltonian.
Why does one prefer the new formalism? B/c it is superior to the old one - or because the problem of the old one could not be solved?

marcus said:
As Tom points out, there is no reason a theory cannot have several equivalent versions.
I have not seen a single Qxxx theory that does not have different approaches.
 
  • #130
All good points! I agree completely (also with the suspicion that a reason to adopt the new LQG is that the problem of determining the Hamiltonian proved somewhat intractable, though it may still be solvable).

I would put the present situation this way: a new combined research field of QG is being forged. It takes something of Connes NC geometry, something of LQG, something of string, something of fields on curved or NC spacetime, something of Regge triangulations, something of "higher gauge" categorics, something of cosmology---all those 6 or 8 topics mentioned by the organizers of the Zurich conference.

I would say the Zurich conference is of historic significance, and because Barrett is a leading organizer (with Nicolai, Grosse, Rovelli, Picken..) part of Barrett's job is to give a short list of goals (defining direction and measure of progress). He has to. And we have to pay at least partial attention.
==quote Barrett http://arxiv.org/abs/1101.6078 ==
The wish-list of properties for a state sum model is
  • It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
  • The state sum can be interpreted as a sum over geometries
  • Each geometry is discrete on the Planck scale
  • The coupling to matter fields can be defined
  • Matter modes are cut off at the Planck scale
  • The action can include a cosmological constant
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is...
==endquote==

You have already commented on how problematic the first wish (diffeomorphism invariance) is. I think that will just have to be worked out by relaxing the structure, at first maybe to PL (piecewise linear) and perhaps even more later.

Looking at LQG research in this historic context, I would be interested to know what you see---I see it spurring a strong drive to accommodate matter, possibly trying several different ways at first.

http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:start
 
Last edited:
  • #131
marcus said:
A new combined research field of QG is being forged. It takes something of Connes NC geometry, something of LQG, something of string, something of fields on curved or NC spacetime, something of Regge triangulations, something of "higher gauge" categorics, ...
Too complicated. All successful theories are based on rather simple structures. I agree that it may be necessary to go through all that stuff - just to find out what and why one has to throw away.
 
  • #132
tom.stoer said:
Too complicated. All successful theories are based on rather simple structures. I agree that it may be necessary to go through all that stuff - just to find out what and why one has to throw away.

Again, I fully agree. I was not suggesting that the SOLUTION would involve elements of all those disciplines.

What I said or meant to say was that a greater QG research field is being forged. A larger combined community of researchers able to appreciate and benefit from each others' ideas. That's what conferences do, I think.

Hotels in Zurich are expensive.
 
  • #134
atyy said:
While we're throwing everything and the kitchen sink, let's not forget http://arxiv.org/abs/0907.2994

Heh heh, so you would like one of them to be presenting a paper at the conference too!
Tensor network decompositions in the presence of a global symmetry
Sukhwinder Singh, Robert N. C. Pfeifer, Guifre Vidal

Personally I'm not making suggestions to the organizers, but what you say could certainly happen. We don't know the final program or the final list of speakers.

I tend to just trust the pros. When you forge a new field of research all it has to be is good enough and representative enough of what you have in mind, plus simple and clear enough to communicate to the broader scientific community.

If it is right enough, then other stuff that belongs in it will gradually be attracted and gather and accrete to it.

Actually they didn't put in the kitchen sink yet :biggrin: the halfdozen topics they put upfront are, I thought, selective. I can see the focus or the organic connections.
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:start

but we could look down the speaker list and see if, say, Guifre Vidal is on there.
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:speakers
It's only 30 names and it's alphabetized, so it is easy to check. No.

Well maybe next time. If this year's is Quantum Theory and Gravitation 2011 then maybe there will be a Quantum Theory and Gravitation 201x. Seems reasonable enough.
 
Last edited:
  • #135
marcus said:
Heh heh, so you would like one of them to be presenting a paper at the conference too!
Tensor network decompositions in the presence of a global symmetry
Sukhwinder Singh, Robert N. C. Pfeifer, Guifre Vidal

Personally I'm not making suggestions to the organizers, but what you say could certainly happen. We don't know the final program or the final list of speakers.

I tend to just trust the pros. When you forge a new field of research all it has to be is good enough and representative enough of what you have in mind, plus simple and clear enough to communicate to the broader scientific community.

If it is right enough, then other stuff that belongs in it will gradually be attracted and gather and accrete to it.

Actually they didn't put in the kitchen sink yet :biggrin: the halfdozen topics they put upfront are, I thought, selective. I can see the focus or the organic connections.

but we could look down the speaker list and see if, say, Guifre Vidal is on there.

Oh, he's just moved to an even more significant place than the speaker list :biggrin:
 
  • #136
atyy said:
Oh, he's just moved to an even more significant place than the speaker list :biggrin:
Well you could say a more significant place than the Zurich speaker list is Australia. And he certainly has moved to Australia. Looks like a bright promising young guy, BTW.

I'm beginning to suspect that consciously or unconsciously the organizers of the 2011 "Quantum Theory and Gravitation" conference are making a kind of statement by holding it at the ETH (Eidgenössische Technische Hochschule, the Swiss Federal Institute of Technology) in Zurich. ETH Zurich was Einstein's alma mater.
He was at the beginning of quantum theory with his 1905 photon paper, and at the beginning of the 1915 geometrical theory of gravity. The two themes of the conference.
It dawned on me that the organizers (Barrett, Nicolai, Rovelli, Grosse, Picken) are forging the QG research field in a place with thrilling reminders of the past.

And it is a past where the major revolutions in physics have emerged in Europe. Maybe we shouldn't mention that, it might offend some US-physics chauvins
(my etymological source says a chauvin is a balding diehard, chauve is French for bald, and we all have our share.)

But anyway, US-European issues aside, it just dawned on me that Göttingen could be next. Also a place thrilling with reminders, of Hilbert, and Heisenberg, and Gauss, and Riemann-of-the-manifolds. If you hold a major historic conference at ETH Zurich how can you not hold a followup at Uni Göttingen?

Just a two-penny dream.
 
Last edited:
  • #137
marcus said:
Well you could say a more significant place than the Zurich speaker list is Australia. And he certainly has moved to Australia. Looks like a bright promising young guy, BTW.

http://www.perimeterinstitute.ca/News/In_The_Media/Guifre_Vidal_to_Join_Perimeter_Institute_as_Senior_Faculty/
 
  • #138
atyy said:
http://www.perimeterinstitute.ca/News/In_The_Media/Guifre_Vidal_to_Join_Perimeter_Institute_as_Senior_Faculty/

From Spain to Queensland to Perimeter. Great! Information theory+condensed matter also great.
Clearly a rising star. Since his first language must be Spanish, let us say Borges' prayer for the success of this young person:

Sólo una cosa no hay: es el olvido.
Dios que salva el metal, salva la escoria,
y cifra en su profética memoria
las lunas que serán y las que han sido.

Ya todo está. Los miles de reflejos
que entre los dos crepúsculos del día
tu rostro fue dejando en los espejos
y los que irá dejando todavía.

Y todo es una parte del diverso
cristal de esa memoria: el universo.
...
...

And everything is part of that diverse
crystalline memory, the universe.
 
  • #139
But actually Atyy, Perimeter may have lost its edge, at least to the extent that one does not see many PI names in the 2011 Zakopane school or the speakers list for the 2011 "Quantum Theory and Gravitation" conference.

It has moved in the direction of established ideas, conventional reputation, and some celebrity hunting. Still a good place, but not as outstanding as say 4 or 5 years ago. Just my impression, but I've seen similar comments from others lately.

So the "to an even more significant place" comment, though witty, may actually not be exact.

I just checked the "QT&G" speakers list and out of 30 speakers the only PI guy was Laurent Freidel.
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:speakers
If I remember right he joined PI faculty back in 2006 when Perimeter really was leading edge. Still small. Freidel was only their 9th faculty appointment. Here is the 2006 announcement:
http://www.perimeterinstitute.ca/News/In_The_Media/Laurent_Freidel_becomes_Faculty/

Out of over 100 participants at Zakopane, one Perimeter guy, Tim Koslowski:
http://cift.fuw.edu.pl/users/jpa/php/listofparticipants.php
and no PI person on the Zakopane list of speakers.
 
Last edited by a moderator:
  • #140
Pedagogically speaking, the most useful and accurate introduction to LQG is probably now Livine's January "fifty-sixtyone" monograph.

http://arxiv.org/abs/1101.5061

It is amazingly good. The perspective is balanced and complete (although he declares it shaped by his own personal mix of conservative and "top-down" taste).

I would suggest printing out the TOC plus pages 1-62 and pages 79-88
I think the PDF file calls these pages 1-64 and 81-90.
The PDF adds two to the page number, or some such thing.

The thing about Livine's style in this piece is that he takes it easy. He doesn't rush. He fills in detail (that a different expositor might assume we already know). He says explicitly where he is skipping something, and gives a reference.

I particularly liked seeing where he takes a paragraph or so to explain the transitional importance of Sergei Alexandrov's "CLQG".
Livine coauthored with SA back around 2002-2003 and based his 2002 PhD thesis on some ideas he developed which bridge between SU(2) labels and SL(2,C) labels, and that has turned out to make quite a difference. Stuff like Livine's "projected spin networks". I remember reading parts of Livine's PhD thesis back around 2004. He was working out ideas for bridging between the spin networks of the canonical approach and the spinfoams of the path integral approach, and that meant relating SU(2) reps with SL(2,C) reps. And that kind of stuff has come back strongly in the past two or three years, like 2008-2010.

Alexandrov's CLQG may have been passed by---I can't say about that, maybe it was not quite on the main historical track. But it was seminal all the same. Livine in his discussion gives it its due recognition.

This is in section 2.1, the first 28 or so pages, where he is giving the history (including canonical approach) that led up to the present formulation.

This piece is actually a pleasure to read. Carefully informative but also in a certain sense "laid back" (slang for relaxed and untroubled).

If anyone wants an introduction, they could do worse than try this one.
 