Signs LQG has the right redefinition (or wrong?)

  • Thread starter: marcus
  • Tags: Lqg
  • #91
atyy said:
Are you saying this limit exists in a finite universe?

Abstract math does not work in some given universe. The limit is an interesting abstract question.
Pragmatically, sure. Pragmatically it is a non-problem. In that case.
 
  • #92
So we fix the boundary, as is done in the summing=refining paper. Your argument is that for a fixed boundary the summing is finite, and since refining is summing, refining is finite too. I don't see that. I think it does mean that summing is a sum over discrete terms, but not necessarily over a finite number of terms: "To remove the dependence on C, two options can be envisaged: infinitely refining C, and summing over C. Since the set of foams is discrete, the latter option is easy to define in principle, at least if one disregards convergence issues." http://arxiv.org/abs/1010.5437 p2
 
  • #93
Atyy, we have company this afternoon and evening. I won't be able to answer. Your question is making sense to me and I will need a quiet moment to think about it before replying.
 
  • #94
Enjoy your company. My answer: this is where GFT renormalization must come in.
 
  • #96
Thanks for the pointers to relevant research. I will take a look later today. From the standpoint of abstract math there is no reason to assume the U is finite and it seems ugly to have to appeal to that assumption as a crutch. The question of whether a certain sequence converges is intrinsically interesting!

My observation is practical and non-math, in a sense. IF the universe is finite spatial volume (which we don't know) then it only makes physical sense to consider spin networks with up to N nodes for some large finite N.

So the whole business of taking limits with more and more nodes is moot (from a physical perspective.)

A somewhat similar observation may apply in the case where we have accelerating expansion (as in a deSitter U or an approximately deS) because then there is a cosmological event horizon. One is in a de facto finite situation. I say MAY apply. I haven't seen that worked out. I feel more confident simply considering the finite U case.

And I'm of course glad if some of the young researchers like the guy you mentioned, Perini, are working on the abstract convergence problem of the "X" sort you mentioned, where you don't assume a finite universe. It will be great if they get a result! And they may, as you suspect, bring GFT method to bear on it.
 
  • #97
OK, it's fine if we fix a spatial boundary at this stage of the game. What I don't understand then is that I thought LQG has no preferred foliation. And if in LQC there is the forever bouncing universe, then it must be unbounded in time. So what if we took the foliation that way, wouldn't we get a different answer. Or does that mean that there is a preferred foliation? Or are there only a finite number of bounces? (actually I don't believe in the bounce for spinfoams - I think Rovelli is hoping for an outcome like CDT - after performing the full sum - not just the first term - he recovers a finite classical universe - to be fair - CDT has not even discretized down to the Planck scale yet)
 
  • #98
atyy said:
OK, it's fine if we fix a spatial boundary at this stage of the game. What I don't understand then is that I thought LQG has no preferred foliation. And if in LQC there is the forever bouncing universe, then it must be unbounded in time. So what if we took the foliation that way, wouldn't we get a different answer. Or does that mean that there is a preferred foliation? Or are there only a finite number of bounces? (actually I don't believe in the bounce for spinfoams - I think Rovelli is hoping for an outcome like CDT - after performing the full sum - not just the first term - he recovers a finite classical universe - to be fair - CDT has not even discretized down to the Planck scale yet)

The bounce resolution of the BB singularity is a surprising RESULT that first appeared around 2001 under simplifying assumptions. Since then it has proven rather robust in the sense that they keep improving the theory, and changing the assumptions, and removing restrictions, and running the model over and over, and they keep getting a bounce.

They don't get "forever bouncing". That is not robust. You can for example choose parameters where you just get one bounce (where the BB was). You can't say too much about the prior contracting phase. The theory is not "omniscient"; it is just a gradual incremental extension that resolves the singularity in one possible way it could be resolved.

It doesn't say if you get just one bounce, or a finite number, or an infinite number (that depends on choices and cases). It just resolves the one singularity we know about---in a possibly testable way (some phenomenologists think).

There is more to talk about, in what you say. But I am going to get coffee and straighten up the house a little. Yesterday was fun, in fact, thanks for your good wishes!

============================
Incomplete partial reply to your next post #90. Equation (26), the topic of our discussion, has a two-complex with a boundary graph. But the graph is not labeled with area and volume labels; it is not a spin network. So there is no limit on growth in the picture: one could keep adding nodes forever. So it is not the same as modeling a finite-volume universe. Or so it seems to me---as you well know I'm just an interested observer of the QG research scene, no expert! I'll get back to this later this morning. This is interesting.
 
  • #99
Also, why isn't a finite universe the same as assuming a spinfoam boundary?
 
  • #100
Atyy, I like your way of putting the three sorts of possible divergence.

atyy said:
...
1) UV - not present
2) IR - present but not a problem
3) X (my nomenclature) - probably present, and probably a problem.

As I've said, I don't think of your X as a practical problem at all, just an interesting abstract math one that you get when you consider a possibly infinite universe. But your pointer to it has gotten me to read more thoroughly in that Rovelli Smerlak October paper which deals with type X concerns.

As you described it the X question comes up around equation (26) of 1010.1939.
It is helpfully clarified by the Rovelli Smerlak paper, so I'll give the link
http://arxiv.org/abs/1010.5437

Notice that (26) does not have a spin-network in it, or a spinfoam. So one cannot implement the idea of a finite universe in the context of (26). There is nothing to keep one from adding cells to the complex forever.
It is more in the abstract math department. An interesting but not urgent question, as I see it.

What your question just now makes me wonder is how one would implement the idea of surrounding a cell complex C with a boundary that you can't stretch---surrounding it with a fixed labeled spin network, so that refinement is forced to terminate eventually?

The researchers do not seem to have considered that. Maybe it is a useless problem from their perspective. Perhaps I am missing something and my question is based on misunderstanding. I am trying to think about that while I do the evening chores. Hope to be able to say more later.
 
  • #101
To remind everybody, including myself, what the main focus of the thread is, since we have a new page I will bring forward the edited topic summary from the preceding page.
==quote==
As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The subtitle of Dan Oriti's QG anthology was "toward a new understanding of space, time and matter". That is one way of saying what the QG researchers' goal is: a new understanding of space and time---and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among alternative QGs, LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classic deSitter universe.
  • LQG defined this way turns out to be a generalized topological quantum field theory (see TQFT axioms introduced by Atiyah).
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.
...
...

==endquote==

 
  • #102
Why do you not read the boundary Γ specified in Eq (26) of http://arxiv.org/abs/1010.1939 as a spin network (or a spin network at two different times)? On the bottom of p4, Rovelli says "When Γ is disconnected, for instance if it is formed by two connected components, expression (20) defines transition amplitudes between the connected components. This transition amplitude can be interpreted as a quantum mechanical sum over histories. Slicing a two-complex, we obtain a history of spin networks, in steps where the graph changes at the vertices."
 
  • #103
I don't read the boundary Γ as a spin-network because it is simply a graph. No intertwiners at the nodes or spin labels on the links. These are what give scale to a spin-network (as vol and area).

A mere graph is just an adjacency relationship, without any idea of scale.

So in (26) the boundary does not constrain the size. It can stretch indefinitely---by billions of lightyears if necessary.
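To illustrate the point about labels carrying scale (my gloss, not a quote from the paper): in canonical LQG the standard area spectrum attaches a definite area to a surface S punctured by the links of a spin network,

```latex
A_S \;=\; 8\pi\gamma\,\ell_P^2 \sum_{\ell \in S} \sqrt{j_\ell\,(j_\ell+1)}\,,
```

where the sum runs over the links ℓ crossing S, the j's are the spin labels, and γ is the Barbero-Immirzi parameter. Strip off the j's and you have a bare graph: nothing in the formula is left to fix an area, which is exactly why the unlabeled boundary graph in (26) puts no bound on the size.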
 
  • #104
marcus said:
I don't read the boundary Γ as a spin-network because it is simply a graph. No intertwiners at the nodes or spin labels on the links. These are what give scale to a spin-network (as vol and area).

A mere graph is just adjacency relationship without any idea of scale.

So in (26) the boundary does not constrain the size. It can stretch indefinitely---by billions of lightyears if necessary.

Eq (26) is the same as (27) according to summing=refining. (27) is in the spin network basis, if you compare to (20), (21). Both (26) and (27) are defined with the same boundary graph.
 
  • #105
atyy said:
Eq (26) is the same as (27) according to summing=refining. (27) is in the spin network basis, if you compare to (20), (21). Both (26) and (27) are defined with the same boundary graph.

we mustn't confuse 26 and 27!
It is more complicated to get from one to the other than you may think. "s=r" is not a naive equality to be taken literally. You have to do a lot, change what you are working with, define Z*, put the whole thing on a different footing, and introduce multiplicity factors, in order to get from one to the other. I am still trying to figure out how they get from 26 to 27.

anyway the convergence divergence issue you brought up was (26)
It has no spinfoams or spin networks in it. It has no control on the size of the universe.
Its convergence is an interesting problem without immediate practical physical significance.
============

Would you like to discuss (27) now? as mathematically on a separate footing?

Notice that what plugs into the LHS and RHS of (27)---the arguments---is something new. It is not the old L-tuple of group elements h1...hL.
It is tuples of half-integers (j1...jL) and intertwiners (i1...iN)!

Those are different mathematical animals from plain old SU(2) elements h1...hL.
And the process of summing is different from the limit.

It will take me a little while to change gears, but I could shift over and look at 27 if you'd like.
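Schematically, the link between the two kinds of arguments is just the standard Peter-Weyl expansion (my notation, not copied from the paper): a boundary state on SU(2)^L decomposes into the spin-network basis,

```latex
\psi(h_1,\dots,h_L) \;=\; \sum_{\{j_\ell\}} \sum_{\{i_n\}} c_{\,j_\ell,\, i_n}\;\psi_{j_\ell,\, i_n}(h_1,\dots,h_L)\,,
```

where the basis states ψ_{j,i} are built by contracting Wigner matrices D^{(j_ℓ)}(h_ℓ) on the links with the intertwiners i_n at the nodes. So the group integrals behind (26) get traded for the discrete sums over half-integers and intertwiners that appear in (27).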
 
  • #106
Yes, I did notice the difference. When mentioning the divergence I always meant (26) and (27) because of their relationship through summing=refining. But yes, it is true that the equivalence is not obvious, and in fact only holds exactly for some models. In other models, there is another factor. Anyway, I'd be perfectly happy if you treat (27) too. In the summing=refining paper, they mention that (27) also has convergence issues, even without referring to (26).

I don't see how the convergence is a minor issue. If it does not even converge in principle, then the theory is meaningless. There's no point taking the first term of a divergent series (well, it could be an asymptotic series, in which case you can take the first terms of a divergent series). But then that would seriously damage LQG's claim to provide a non-perturbative definition of quantum gravity.
 
  • #107
Just to be clear, do we both realize that we are talking about a type of IR divergence that

1. would not arise if the U is finite and
2. they have ideas of how to address anyway (but since formulation is new, haven't gotten around to working out)

or do you see things in a darker gloomier light? :biggrin:
 
  • #108
marcus said:
Just to be clear, do we both realize that we are talking about a type of IR divergence that

1. would not arise if the U is finite and
2. they have ideas of how to address anyway (but since formulation is new, haven't gotten around to working out)

or do you see things in a darker gloomier light? :biggrin:

Even if the boundary is finite, it isn't clear to me that the number of two-complexes associated with a given finite boundary is finite. I do agree the sum is discrete, so it depends on the convergence of a probably infinite discrete sum, i.e. in Eq (27) of http://arxiv.org/abs/1010.1939 , it's not clear to me that the largest j and n possible are finite.

There is an analogous problem in GFT, which both Freidel and Oriti noted in their old reviews. Freidel suggested terminating the expansion at tree level, arguing that the tree level expansion was basis independent (or something like that), while Oriti suggested GFT renormalization, which both of them worked on later. http://arxiv.org/abs/0905.3772 There's of course also Rivasseau and colleagues working on this, as you know.

The other major problem (I believe it is a problem, looking at things from AdS/CFT) is the interpretation of the formalism. I doubt the geometry of the formalism is so simply related to spacetime geometry. In AdS/CFT, many geometrical objects do not have the meaning of spacetime geometry. It's interesting to see that Barrett is exploring an approach like this. I have no idea if it's a red herring, but papers in which spin networks and AdS/CFT show up together are http://arxiv.org/abs/0905.3627 and http://arxiv.org/abs/0907.2994 .

BTW, another paper that is helpful in reading "summing=refining" http://arxiv.org/abs/1010.5437 is this explicating the relationship between the holomorphic and spin network representations http://arxiv.org/abs/1004.4550 .
 
  • #109
The discussion (p10) of http://arxiv.org/abs/1101.6078 makes very interesting comments about the current models:

"Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. ... This invariance is seen in the Crane-Yetter model and also in the 3d gravity models, the Ponzano-Regge model and the Turaev-Viro model, the latter having a cosmological constant. The 3d gravity models can be interpreted as a sum over geometries, a feature which is carried over to the four-dimensional gravity models [BC, EPRL, FK], which however do not respect diffeomorphism invariance. ...

The most obvious omission from this list is the ability to implement the Einstein-Hilbert action. In fact, experience with state sum models in four dimensions so far is that there are models with diffeomorphism-invariance but no Einstein-Hilbert action, and there are models implementing the Einstein-Hilbert action but having (at best) only approximate diffeomorphism-invariance."
 
  • #110
I see that Barrett changed the title of his paper just a day or so after first posting it! The original title of 1101.6078, which I printed as soon as it appeared, was "Induced Standard Model and Unification".
Now we have version 2 of the paper titled "State Sum..."

I'll try to get the sense of any substantive changes I notice. Thanks for pointing out his mention of diffeo invariance. Do you think he could be mistaken on that point? I think LQG has all the diff-invariance one can expect to have after one gets rid of the smooth manifold. (And no one, including Barrett, thinks that smooth continuum exists all the way in---Barrett refers to manifold model as only an approximation.)

atyy said:
The discussion (p10) of http://arxiv.org/abs/1101.6078 makes very interesting comments about the current models:

"Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. ... This invariance is seen in the Crane-Yetter model and also in the 3d gravity models, the Ponzano-Regge model and the Turaev-Viro model, the latter having a cosmological constant. The 3d gravity models can be interpreted as a sum over geometries, a feature which is carried over to the four-dimensional gravity models [BC, EPRL, FK], which however do not respect diffeomorphism invariance. ...

The most obvious omission from this list is the ability to implement the Einstein-Hilbert action. In fact, experience with state sum models in four dimensions so far is that there are models with diffeomorphism-invariance but no Einstein-Hilbert action, and there are models implementing the Einstein-Hilbert action but having (at best) only approximate diffeomorphism-invariance."

I see he not only changed the title but also expanded the abstract summary:

http://arxiv.org/abs/1101.6078
State sum models, induced gravity and the spectral action
John W. Barrett
(Submitted on 31 Jan 2011 (v1), last revised 1 Feb 2011 (this version, v2))
"A proposal that the bosonic action of gravity and the standard model is induced from the fermionic action is investigated. It is suggested that this might occur naturally in state sum models."

Both changes are definite improvements (IMHO) making the message clearer and more complete.
========================
A note to myself, so I won't forget re post 97 of Atyy's: Wick rotation, deS space in both Eucl. and Lor. versions, deS bounce. CDT doesn't yet put in matter. The scale of CDT computer sims was determined to be order Planck. No time to elaborate, and may be off-topic anyway.

Atyy you have provided some valuable signs that the current formulation is NOT satisfactory, and they have to be weighed against signs that it is.
 
  • #111
atyy said:
...(actually I don't believe in the bounce for spinfoams - I think Rovelli is hoping for an outcome like CDT - after performing the full sum - not just the first term - he recovers a finite classical universe - to be fair - CDT has not even discretized down to the Planck scale yet)

You might be interested in this, because of interest in cdt. They managed to estimate the size of their little universes they were creating in the computer. The natural lattice scale, basically an edge of a simplex, turns out to be about one half of one Planck length.

See for example the 2009 review paper
http://arxiv.org/abs/0906.3947
page 26 right after equation 42.

As I recall the result goes back to around 2007, I remember when it first came out. The method used to deduce the size is ingenious, but I can't recall exactly how it works, would have to go back and refresh a bit.

==============
I guess morally you could say that LOLL GETS A BOUNCE with CDT. Because she gets the classic deSitter----classic deS has a natural bounce, just one.
But remember that CDT uses Wick rotation, what they do in the computer is Wick rotated to Euclidean style. The rotated Euclidean version of deS is actually S4.

They discuss this various places so if anyone is curious I could look up a reference, why getting a hypersphere path integral with Monte Carlo really means getting the hourglass shape standard deSitter, if you would Wick rotate.
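For the curious, the rotation itself is a standard computation (not specific to the CDT papers): Lorentzian de Sitter in global slicing, continued via t → -iτ, gives

```latex
ds^2 = -dt^2 + H^{-2}\cosh^2(Ht)\,d\Omega_3^2
\quad\xrightarrow{\;t\,\to\,-i\tau\;}\quad
ds_E^2 = d\tau^2 + H^{-2}\cos^2(H\tau)\,d\Omega_3^2 \,,
```

and the right-hand side is the round metric on S4 with radius 1/H (with Hτ running over [-π/2, π/2]). The cosh "hourglass" profile of the Lorentzian spacetime becomes the cos profile of the hypersphere, which is why a hypersphere in the Euclidean simulation is read as de Sitter.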

CDT sims typically do not include matter. And that is like the pure deSitter universe as well. Only has cosmo constant. Pure deSitter bounce is gentle and shallow by comparison with when you have matter and the contracting phase experiences gravitational collapse, a crunch.

But overall, I guess the CDT results are another reason to believe in bounce cosmology. If you believe anything without first seeing observational evidence. I keep that kind of thing in Limbo, believing neither yes nor no.
 
  • #112
marcus said:
You might be interested in this, because of interest in cdt. They managed to estimate the size of their little universes they were creating in the computer. The natural lattice scale, basically an edge of a simplex, turns out to be about one half of one Planck length.

See for example the 2009 review paper
http://arxiv.org/abs/0906.3947
page 36 right after equation 42.

Doesn't it say that the Planck length is about half the lattice spacing?
 
  • #113
marcus said:
You might be interested in this, because of interest in cdt. They managed to estimate the size of their little universes they were creating in the computer. The natural lattice scale, basically an edge of a simplex, turns out to be about one half of one Planck length.

See for example the 2009 review paper
http://arxiv.org/abs/0906.3947
page 26 right after equation 42.

As I recall the result goes back to around 2007, I remember when it first came out. The method used to deduce the size is ingenious, but I can't recall exactly how it works, would have to go back and refresh a bit.

I corrected the page, it is 26, not 36.

==============
I guess morally you could say that LOLL GETS A BOUNCE with CDT. Because she gets the classic deSitter----classic deS has a natural bounce, just one.
But remember that CDT uses Wick rotation, what they do in the computer is Wick rotated to Euclidean style. The rotated Euclidean version of deS is actually S4.

They discuss this various places so if anyone is curious I could look up a reference, why getting a hypersphere path integral with Monte Carlo really means getting the hourglass shape standard deSitter, if you would Wick rotate...

atyy said:
Doesn't it say that the Planck length is about half the lattice spacing?

you are probably right. I tend to trust you on details. (if not always about interpretations).
I'll check. As I recall the number was something like 0.48 one way or the other. I could have misread.

YES. You read it correctly, when they run these little quantum universes in the computer, they come into existence evolve and go out of existence and they always behave as if the size of the building blocks is about 2 Planck lengths.

With more computer power you can run simulations with more building blocks, but it doesn't make things finer. It just lets the universe grow bigger. The theory does not specify a minimum scale---they don't put in one by hand. It's as if "nature" (the computer sim) had one. It's a bit curious. I haven't seen it explained.

John Baez had a brief explanation of Wick rotation and why CDT uses it (the Metropolis montecarlo algorithm needs actual probabilities, not amplitudes). Might be helpful:
http://math.ucr.edu/home/baez/week206.html
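To make Baez's point concrete, here is a minimal toy sketch (my own illustration, not the CDT code): the Metropolis accept/reject step needs a real, positive weight exp(-S), which is exactly what Wick rotation supplies; a complex amplitude exp(iS) could not drive it. The one-dimensional "action" below is just a harmonic well chosen for illustration.

```python
import math
import random

def metropolis(action, x0, n_steps, step=0.5, seed=1):
    """Sample configurations with weight exp(-S(x)) via the Metropolis rule.

    This only works because the Euclidean weight exp(-S) is a real,
    positive probability; the Lorentzian exp(iS) is a complex amplitude
    and gives no accept/reject probability.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)   # symmetric proposal
        d_s = action(x_new) - action(x)
        # accept with probability min(1, exp(-dS))
        if d_s <= 0 or rng.random() < math.exp(-d_s):
            x = x_new
        samples.append(x)
    return samples

# Toy "action": S(x) = x^2/2, so exp(-S) is a unit Gaussian.
samples = metropolis(lambda x: 0.5 * x * x, x0=0.0, n_steps=50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain's sample mean and variance should approach the Gaussian's 0 and 1, which is the sense in which the Monte Carlo "measures" the Euclidean theory.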
 
  • #114
marcus said:
With more computer power you can run simulations with more building blocks, but it doesn't make things finer. It just let's the universe grow bigger. The theory does not specify a minimum scale---they don't put in one by hand. It's as if "nature" (the computer sim) had one. It's a bit curious. I haven't seen it explained.

Although it's not obvious, the computer simulations do put in a minimum scale by hand, and they hope to make this scale smaller in future simulations, since CDT is supposed to model a theory with a continuum limit (Benedetti does this analytically in 2+1D in http://arxiv.org/abs/0704.3214 ). They talk about how to make the lattice spacing smaller than the Planck scale in the review you mentioned.
 
  • #115
atyy said:
They talk about how to make the lattice spacing smaller than the Planck scale in the review you mentioned.
Indeed they speculate about how to modify the model to get in closer, around the bottom of page 28 and top of page 30 in that review paper. They say "work is ongoing". I haven't seen anything about that so far. It is an interesting review, a 2009 writeup of talks given in 2008. I don't know of anything more recent that is comparably complete.

To recap, and wrap up the divergence discussion, we have been talking about signs that LQG has the right redefinition, or that it doesn't. Unresolved divergence issues would be one sign that it doesn't have the right formulation yet. (Unless the issues eventually get resolved.)

We can't presume to make a final verdict, of course, only weigh the various clues and make an educated guess based on how things are going. I mentioned some "good" signs earlier---signs that the research community is increasingly judging the theory's prospects to be favorable. But against that one can balance the large-volume divergence issues.

Rovelli's most recent review paper serves as a kind of status report on this and several other critical questions. Here is what he says on page 19

==quote http://arxiv.org/abs/1012.4707 page 19 section A "open problems" ==
...
Divergences.
The theory has no ultraviolet divergences. This can be shown in various ways, for instance rewriting (1) in the spin-network basis and noticing that the area gap makes all sums finite in the direction of the ultraviolet. However, divergences might be lurking elsewhere, and they probably are. There might indeed be infrared divergences, that come from large j. The geometrical interpretation of these divergences is known. They corresponds to the “spikes” in Regge calculus: imagine taking a triangulation of a two-surface, and moving a single vertex of the triangulation at large distance from the rest of the surface. Then there will be a cycle of triangles which are very lengthened, and have arbitrary large area. This is a spike.

A number of strategies can be considered to control these infrared divergences. One is to regularize them by replacing groups with quantum groups. This step has a physical ground since this modification of the vertex amplitude corresponds to adding a cosmological constant to the dynamics of the theory. The theory with the quantum group is finite [21, 22].

The second possible strategy is to see if infrared divergences can be renormalized away, namely absorbed into a redefinition of the vertex amplitude. A research line is active in this direction [117, 118], which exploits the group-field-theory formulation of the theory.20

Finally, the third possibility is that infrared divergences could be relatively harmless on local observables, as they are in standard field theory.
==endquote==
 
  • #116
Atyy you called attention to one of the six wishes on John Barrett's "wish list" for a unifying state sum model. His first wish, you pointed out, was not for "diffeomorphism invariance" but for "invariance under PL homeomorphisms." That takes us out of the category of smooth manifolds. You see him backing out of manifolds, but taking with him whatever is the appropriate descendant of diff-invariance.

It is not recognized in that particular paper, but LQG does the analogous thing and retains the appropriate residual form of diff-invariance. Rovelli's most recent papers make a point of the connection with PL (piecewise linear) manifolds and also of the combinatorial version of factoring out diffeomorphism gauge.

The two are closer than may appear to you at first sight. In any case you point us in an interesting direction. We should really list ALL SIX of Barrett's goals for a state sum unification. All are potentially interesting. They are listed on page 10.
atyy said:
The discussion (p10) of http://arxiv.org/abs/1101.6078 makes very interesting comments about the current models:

"Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. ...
... in four dimensions so far is that there are models with diffeomorphism-invariance but no Einstein-Hilbert action, and there are models implementing the Einstein-Hilbert action but having (at best) only approximate diffeomorphism-invariance."

I'll get the page 10 "wish list" to provide context.

==quote Barrett "State sum models, induced gravity, and the spectral action"==
These features have all been seen in various models and it is not unreasonable to expect there to exist state sum models with all of them at once. The wish-list of properties for a state sum model is
• It defines a diffeomorphism-invariant quantum field theory on each 4- manifold
• The state sum can be interpreted as a sum over geometries
• Each geometry is discrete on the Planck scale
• The coupling to matter fields can be defined
• Matter modes are cut off at the Planck scale
• The action can include a cosmological constant
Diffeomorphism invariance here actually means invariance under piecewise- linear homeomorphisms, but this is essentially equivalent. The piecewise- linear homeomorphisms are maps which are linear if the triangulations are subdivided sufficiently and play the same role as diffeomorphisms in a theory ...

...The coupling of the 3d gravity models to matter is studied in [BO, FL], and extended to 4d models in [BF]. A model with a fermionic functional integral have been studied in [FB, FD], though as yet there is no model which respects diffeomorphism invariance. This is clearly an important area for future study.
===endquote===

Notice at the end he cites four LQG papers by Laurent Freidel (FL, BF, FB, FD).

And he has already gotten out of the smooth category and into piecewise-linear, why not go all the way to the 2-skeleton?

All LQG does is take the process one step further. A PL manifold is already in some sense combinatorial, just with a bunch more excess baggage. When you triangulate, the divisions between the simplexes form a foam. And all the interesting stuff happens at the joints, that is, on the foam. That is where curvature occurs!

So LQG does the logical thing and focuses on the 2-complex, the foam, and labels it.

It still retains the mathematical essence of the classic diff-invariance. The point about diff-invariance in GR was to factor it out. The essential object (a "geometry") was an equivalence class. When you reach that level there are no more diffeomorphisms. They are merely the gauge equivalences between different representatives of the class.

LQG reflects this. You can see it still being dealt with when they divide out the multiplicity factor (the foam automorphisms) in the state sum. The foam has almost all the diffeo gauge redundancy squeezed out, but there is still some margin of double-counting because of symmetries in the foam, so they have to deal with that.

You also see Loll's group dealing with the same thing. I remember them dividing out by the multiplicity of a triangulation (its automorphisms) in their CDT state sum. Except for that, a triangulation represents a unique geometry: there is no more diffeo equivalence to factor out.
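Schematically (and only as a sketch of the bookkeeping, not of any particular model's amplitudes), both state sums divide by the automorphism group of the combinatorial object:

```latex
% CDT: sum over triangulations T, each weighted by 1/|Aut(T)|
Z_{\mathrm{CDT}} \;=\; \sum_{T} \frac{1}{|\mathrm{Aut}(T)|}\; e^{-S_{\mathrm{Regge}}[T]}
% Spinfoam: sum over 2-complexes C and labels (spins j_f, intertwiners iota_e),
% with the analogous symmetry factor
Z \;=\; \sum_{C} \frac{1}{|\mathrm{Aut}(C)|} \sum_{j_f,\,\iota_e}
        \prod_{f} A_f(j_f) \prod_{e} A_e \prod_{v} A_v
% Dividing by |Aut| removes the residual double-counting that survives once
% the diffeomorphism redundancy has been squeezed out of the foam.
```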

I don't want to take time now to look up references, but if you want, and ask about it, I think I can get links and page-refs about this. Depends if anyone is curious.
 
Last edited:
  • #117
Actually, I think diff invariance is a minor issue; the bigger issue is the interpretation of the formalism. Rovelli has consistently said no unification of gravity and matter. I suspect there has to be unification (that's a key message from strings), and it is interesting to see Barrett exploring unification ideas, i.e. that matter is essential for gravity. As you know, I believe Rovelli's philosophy leads to Asymptotic Safety, but his formalism leads elsewhere.
 
  • #118
atyy said:
...Rovelli has consistently said no unification of gravity and matter.

I don't recall Rovelli EVER saying no unification of gravity and matter. What he says in the latest review is: take one step at a time. I think the ultimate aim is unification, and the philosophy is pragmatic and incremental.

Let's first see how to formulate a backgroundless quantum field theory.
The first such, the first field should be geometry (= gravity).
When you know how to write down a backgroundless quantum geometry (=backgroundless quantum grav. field) then define matter on it.
Then you will see how to unify.

Rovelli didn't say you never unify. He has opposed the Great Leap Forward impulse of making a big jump to a dreamed-of final theory.

You and I see the same facts and you are admirably alert and perceptive, but we sometimes differ as to interpretation. I see LQG as addressing all 6 of Barrett's desiderata, and having an ultimate goal of unification, and being on track for that goal (at least for the present.)

I see the Zurich conference organizing committee as a place where Rovelli, Barrett, Nicolai can meet and discover how to see eye to eye on this project.

Maybe since you brought up Barrett's page 10 "wish list" we should list all 6 of his "wishes" and see how well the current formulation of LQG addresses them.
 
Last edited:
  • #119
Picking up on a couple of things:
marcus said:
...
I see the Zurich conference organizing committee as a place where Rovelli, Barrett, Nicolai can meet and discover how to see eye to eye on this project.

Maybe since you brought up Barrett's page 10 "wish list" we should list all 6 of his "wishes" and see how well the current formulation of LQG addresses them.

The June Zurich conference, February Lisbon workshop, and March Zakopane school are, I think, the three defining QG events of 2011. We need to look at their programs in relation to one another.

Zurich "Quantum Theory and Gravitation"
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:start
(organizers Barrett, Grosse, Nicolai, Picken, Rovelli)
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:speakers

Zakopane "Quantum Geometry/Gravity School"
Here QG stands for Quantum Geometry and Quantum Gravity, the name of the ESF programme supporting it.
(organizers include Barrett, Lewandowski, Rovelli)
http://www.fuw.edu.pl/~kostecki/school3/
https://www.physicsforums.com/showpost.php?p=3117688&postcount=14

Lisbon "Higher Gauge, TQFT, Quantum Gravity" school and workshop
https://sites.google.com/site/hgtqgr/home
(organizers include Roger Picken and Jeffrey Morton)
https://sites.google.com/site/hgtqgr/speakers
(speakers include Freidel, Baratin, Dittrich...)

Since the ESF QG agency is supporting all three of these we could think of Barrett's recent paper (cited by Atyy) as suggesting a common direction, giving a hint of a keynote. He probably tries to think coherently about the whole picture. Let's look at what he calls his "wish list".

==quote Barrett http://arxiv.org/abs/1101.6078 ==
The wish-list of properties for a state sum model is
  • It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
  • The state sum can be interpreted as a sum over geometries
  • Each geometry is discrete on the Planck scale
  • The coupling to matter fields can be defined
  • Matter modes are cut off at the Planck scale
  • The action can include a cosmological constant
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is...
==endquote==
It is a clear, cogent program, except that it may be overly restrictive to assume a 4-manifold. Why have a manifold at all, since that suggests a continuous "classical trajectory" of spatial geometry?
I think the (possibly unexamined) assumption of a 4-manifold favors a kind of preconception of what a state-sum model, or a TQFT, ought to look like.
 
Last edited:
  • #120
Atyy, you quoted this passage of Barrett's, right where he gives his 6-point "wish list". Do you think he is right about "do not respect", or might he have overlooked something?

atyy said:
The discussion (p10) of http://arxiv.org/abs/1101.6078 makes very interesting comments about the current models:

"Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is essentially equivalent. ... a sum over geometries, a feature which is carried over to the four-dimensional gravity models [BC, EPRL, FK], which however do not respect diffeomorphism invariance. ..."

Barrett has a particular idea of a state-sum model that I think conforms roughly to an Atiyah TQFT paradigm. He accordingly expects to see something at least reminiscent of a manifold, with the moral equivalent of diffeomorphisms. He sets out these 6 desiderata:

==quote Barrett http://arxiv.org/abs/1101.6078 ==
The wish-list of properties for a state sum model is
  • It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
  • The state sum can be interpreted as a sum over geometries
  • Each geometry is discrete on the Planck scale
  • The coupling to matter fields can be defined
  • Matter modes are cut off at the Planck scale
  • The action can include a cosmological constant
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is...
==endquote==
Since by "diffeomorphism" what he means is a 1-1 onto piecewise linear map of PL manifolds, his "RESPECT diffeo" criterion seems to force models to work on something rather restrictive, a PL manifold, a given 4d triangulation if you will. What about approaches that work on some other structure containing approximately the same information, and respecting whatever of diff-invariance carries over to that structure?

I think the new formulation of LQG actually meets the diffeomorphism-invariance criterion, because it respects all that is left of diffeo-invariance once one throws away the smooth manifold, and because it can optionally be couched in terms of a generalized TQFT on a manifold with defects. This was one of the points made in http://arxiv.org/abs/1012.4707.

Have a look at page 14, right after the paragraph that says
==quote 1012.4707 Section "Loop gravity as a generalized TQFT" ==
Therefore loop gravity is essentially a TQFT in the sense of Atiyah, where the cobordism between 3 and 4d manifold is replaced by the cobordism between graphs and foams. What is the sense of this replacement?
==endquote==
Some background on TQFT http://math.ucr.edu/home/baez/week58.html
Barrett's 1995 paper on realizing 4d QG as a generalized TQFT http://arxiv.org/abs/gr-qc/9506070
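For reference, the Atiyah-style axioms being generalized in that passage can be summarized as follows (a standard textbook-level sketch, nothing specific to 1012.4707 beyond the last remark):

```latex
% A d-dimensional TQFT in Atiyah's sense is a symmetric monoidal functor
Z : \mathrm{Cob}_d \longrightarrow \mathrm{Vect}_{\mathbb{C}}
% assigning a vector space to each closed (d-1)-manifold Sigma and a linear
% map to each cobordism M : Sigma_1 -> Sigma_2, such that
Z(\Sigma_1 \sqcup \Sigma_2) \;=\; Z(\Sigma_1) \otimes Z(\Sigma_2),
\qquad Z(\varnothing) \;=\; \mathbb{C},
% and gluing of cobordisms corresponds to composition of the linear maps.
% In the generalization discussed in 1012.4707, the role of (d-1)-manifolds
% is played by graphs, and that of cobordisms by foams.
```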
 
Last edited:
