Signs LQG has the right redefinition (or wrong?)

  • Thread starter: marcus
  • Tags: Lqg
  • #121
marcus said:
  • It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
  • ...
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is...
This is problematic already at the classical level: we know that in 4 dimensions the homeomorphic, differentiable, and piecewise-linear structures (and the corresponding classifications of manifolds) need not coincide (Donaldson et al.). So either one abandons the manifold altogether (which means it may emerge only in a certain classical limit), or one takes the manifold seriously, which means that one must answer the questions regarding differentiable structures.
 
  • #122
tom.stoer said:
This is problematic already at the classical level: we know that in 4 dimensions the homeomorphic, differentiable, and piecewise-linear structures (and the corresponding classifications of manifolds) need not coincide (Donaldson et al.). So either one abandons the manifold altogether (which means it may emerge only in a certain classical limit), or one takes the manifold seriously, which means that one must answer the questions regarding differentiable structures.

Barrett is a central player in this business (see post #119) and it sounds to me like he was prepared to drop the smooth structure assumption already in 1995.
(Some background on TQFT http://math.ucr.edu/home/baez/week58.html and
Barrett's 1995 paper on realizing 4d QG as generalized TQFT http://arxiv.org/abs/gr-qc/9506070 )
As you surely know, QG people tend to think of the smooth manifold as a macroscopic approximation that does not correspond to micro reality. One wonders what geometry could be like at very small scales, but one doesn't expect it to be a 4D smooth manifold!

So a PL manifold with defects is a possible model. Personally, I think it makes sense to throw out the manifold completely and look at how our information is structured. Minimalist.
But in this paper Barrett hangs on to the PL manifold! He wants a TQFT and he has the notion that some kind of manifold is needed to base that on.

Here is what Rovelli says:
==quote http://arxiv.org/abs/1012.4707 page 14==

Section H. Loop gravity as a generalized TQFT
...
...
Therefore loop gravity is essentially a TQFT in the sense of Atiyah, where the cobordism between 3d and 4d manifolds is replaced by the cobordism between graphs and foams. What is the sense of this replacement?

TQFTs defined on manifolds are in general theories that have no local degrees of freedom, such as BF or Chern-Simons theory, where the connection is locally flat. Their only degrees of freedom are global ones, captured by the holonomy of the connection wrapping around non-contractible loops in the manifold. In general relativity, we do not want a flat connection: curvature is gravity. But recall that the theory admits truncations à la Regge where curvature is concentrated in d−2 dimensional submanifolds. If we excise these d−2 submanifolds from the Regge manifold, we obtain manifolds with d−2 dimensional defects. The spin connection on these manifolds is locally flat, but it is still sufficient to describe the geometry, via its non-trivial holonomies wrapping around the defects [51]. In other words, general relativity is approximated arbitrarily well by a connection theory of a flat connection on a manifold with (Regge like) defects. Now, the relevant topology of a 3d manifold with 1d defects is precisely characterized by a graph, and the relevant topology of a 4d manifold with 2d defects is precisely characterized by a two-complex. In the first case, the graph is the 1-skeleton of the cellular complex dual to the Regge cellular decomposition. It is easy to see that this graph and the Regge manifold with defects have the same fundamental group. In the second case, the two-complex is the 2-skeleton of the cellular complex dual to the 4d Regge cellular decomposition. In this case, the faces of the two-complex wrap around the 2d Regge defects. Therefore equipping Atiyah’s manifolds with d−2 defects amounts precisely to allowing local curvature, and hence obtaining genuinely local (but still generally covariant) bulk degrees of freedom.
==endquote==

In other words you can throw out the continuum, and work with a minimalist combinatorial structure--the graph, the two-complex (foam)--and if you ever need to for any reason you can get manifolds back.
 
  • #123
I guess there is a non-trivial point to make here: you can use differential geometry to show that the spinfoam approach is valid. (It may not be in accord with Nature. Experiment and observation will determine that. It is mathematically sound.)

The basic idea is "the curvature lives on the bones". "Bones" is math jargon for the D-2 dimensional creases/cuts/punctures that carry all the geometrical information. A smooth manifold can be approximated arbitrarily closely by a piecewise flat one with the curvature concentrated on the D-2 dimensional divisions.

Thinking about 3D geometry, the "bones" are one-dimensional line segments, corresponding more or less to our everyday idea of skeletal bones. But in 2D they are zero-dimensional, and in 4D the bones are 2D---like the faces in a 2-complex, or foam.

There is something to understand here, and it helps to first picture triangulating a 2D surface with flat triangles. The curvature condenses into "conical singularity points": places where, if you tried to flatten the surface, you would find either too little or too much material. If you imagine a 2D surface triangulated with identical equilateral triangles, such a point would be one where more than 6 or fewer than 6 triangles were joined. (This is how curvature arises in CDT.)
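Just to make that concrete, here is a tiny numerical sketch (my own toy illustration with made-up function names, not taken from any of the papers): the deficit angle at such a point is just 2 pi minus the sum of the wedge angles meeting there.

```python
import math

def deficit_angle(num_triangles, wedge_angle=math.pi / 3):
    """Deficit angle at a vertex where `num_triangles` identical wedges
    of angle `wedge_angle` meet (equilateral triangles by default)."""
    return 2 * math.pi - num_triangles * wedge_angle

for n in (5, 6, 7):
    print(n, "triangles ->", round(deficit_angle(n), 3), "rad")
# 6 triangles -> 0 (flat); 5 -> +pi/3 (cone tip, "too little material");
# 7 -> -pi/3 (saddle, "too much material").
```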

The situation in 3D is somewhat harder to imagine, but you still can. There the analogous picture is built with tetrahedra. The curvature is concentrated on 1D "bones" where too many or too few tetrahedra come together.
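You can check this numerically too (again just a toy sketch of my own, not from the papers): in Regge calculus the deficit angle at a bone is 2 pi minus the sum of the dihedral angles of the simplices hinging on it, and since the regular tetrahedron's dihedral angle arccos(1/3) ≈ 70.5 degrees does not divide 360 degrees, identical regular tetrahedra can never close up flat around an edge.

```python
import math

def regge_deficit(dihedral_angles):
    """Deficit angle at a bone: 2*pi minus the sum of the dihedral
    angles (in radians) of the flat simplices hinging on it."""
    return 2 * math.pi - sum(dihedral_angles)

alpha = math.acos(1.0 / 3.0)   # dihedral angle of a regular tetrahedron
for n in (4, 5, 6):
    print(n, "tetrahedra around an edge -> deficit",
          round(regge_deficit([alpha] * n), 3))
# 5 tetrahedra leave a small positive deficit (~0.128 rad), 6 a negative
# one, so the curvature at the bone is never exactly zero in this setup.
```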

The mathematical tool used to feel out curvature is the "holonomy"---namely, recording what happens when you go around a bone. In the 2D case you go around a point to detect whether there is positive or negative curvature there. In the 3D case you travel along more or less any loop that goes around a 1D bone and do the same thing.
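In the simplest 2D toy version (my own sketch, with invented wedge angles and function names) the holonomy is just the product of the rotations you pick up crossing each flat wedge as you walk once around the conical point; the loop comes back rotated by the total cone angle, i.e. it differs from the identity by the deficit angle, up to sign and mod 2 pi.

```python
import numpy as np

def rot(theta):
    """2D rotation matrix through angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def loop_holonomy_angle(wedge_angles):
    """Compose the rotations picked up crossing each flat wedge while
    walking once around a conical point; return the net rotation angle."""
    h = np.eye(2)
    for a in wedge_angles:
        h = rot(a) @ h
    return float(np.arctan2(h[1, 0], h[0, 0]))

# Five pi/3 wedges: total cone angle 5*pi/3, so the loop returns rotated
# by -pi/3 (mod 2*pi), revealing the pi/3 deficit at the point.
print(loop_holonomy_angle([np.pi / 3] * 5))   # ~ -1.047
```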

Now if you look back at the previous post, where I quoted that "page 14" passage, and think of the 3D case, you can understand the construction.

Take a 3D manifold and triangulate it. That is the piecewise flat approximation. Now you have a web of 1D bones and all the geometry is concentrated there. But that web is not yet the spin network.
The spin network is in a sense "dual" to that web of bones. It is a collection of holonomy paths that explore around all the bones in an efficient manner. The spin network should be a minimal structure with enough links so that around any bone you can find a way through the network to circumnavigate that bone. And the links should carry labels recording what you found out by circling each bone.

The spin network is a nexus of exploration pathways that extracts all the info from the bones. That is the 3D case.
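If it helps, here is the purely combinatorial part of that "dual" idea as a toy sketch (my own illustration, not taken from any of the papers): one node per tetrahedron, one link for every shared triangular face. The loops you can make in this graph are exactly the candidate paths for circumnavigating the 1D bones, i.e. the edges of the triangulation.

```python
from itertools import combinations
from collections import defaultdict

def dual_graph(tetrahedra):
    """Links of the graph dual to a 3D triangulation.

    `tetrahedra` is a list of 4-tuples of vertex labels.  Nodes of the
    dual graph are tetrahedron indices; a link joins two tetrahedra
    whenever they share a triangular face.
    """
    face_to_tets = defaultdict(list)
    for i, tet in enumerate(tetrahedra):
        for face in combinations(sorted(tet), 3):
            face_to_tets[face].append(i)
    # Interior faces are shared by exactly two tetrahedra -> dual links.
    return [tuple(ts) for ts in face_to_tets.values() if len(ts) == 2]

# Two tetrahedra glued along the triangle (1, 2, 3):
print(dual_graph([(0, 1, 2, 3), (1, 2, 3, 4)]))   # -> [(0, 1)]
```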

In the 4D case it is just analogous. Triangulate (now with pentachorons instead of tets) and the bones are 2D, and the geometry lives on the bones, and the foam is the "dual" two-complex that explores, detects, records. It is hard to picture but it is the 4D analog of what the spin network does in 3D.

I am trying to help make sense of that "page 14" passage in the previous post.

This is what it means when, in post #122 https://www.physicsforums.com/showthread.php?p=3124407#post3124407 it says:
general relativity is approximated arbitrarily well by a connection theory of a flat connection on a manifold with (Regge like) defects.

What we are basically talking about, the central issue, is how spinfoam LQG can work as a generalized TQFT, and incidentally meet Barrett's "wish list" for a state sum model:
one which (it now looks increasingly likely) we can put matter on, and from which we might get the standard model of matter.
 
  • #124
tom.stoer said:
This is problematic already at the classical level: we know that in 4 dimensions the homeomorphic, differentiable, and piecewise-linear structures (and the corresponding classifications of manifolds) need not coincide (Donaldson et al.). So either one abandons the manifold altogether (which means it may emerge only in a certain classical limit), or one takes the manifold seriously, which means that one must answer the questions regarding differentiable structures.

Tom, in light of the above I don't see what is problematic (for any theory of QG I know about.)

The idea that spacetime is, at the micro level, literally a smooth manifold has never, AFAIK, been taken seriously in the history of QG, going back at least to J. A. Wheeler in the 1970s.

The trajectory of a particle is not even supposed to be a smooth (differentiable) curve when looked at microscopically, much less the micro geometry of space.
 
  • #125
marcus said:
Thanks for pointing out his mention of diffeo invariance. Do you think he could be mistaken on that point? I think LQG has all the diff-invariance one can expect to have after one gets rid of the smooth manifold. (And no one, including Barrett, thinks that the smooth continuum exists all the way in---Barrett refers to the manifold model as only an approximation.)

After reading the final chapter of Hellmann's thesis, I think what Barrett has in mind is that the EPRL and FK models are triangulation dependent.

I'm not sure, but I believe Rovelli mentions this as being dependent on a particular 2-complex. To remove this dependence, he proposes Eq. 26, which we discussed.

I think Hellmann suggests that the triangulation dependence may be ok, if their renormalization via Pachner moves gives an ok theory (in a different sense from GFT).
 
  • #126
atyy said:
After reading the final chapter of Hellmann's thesis, I think what Barrett has in mind is that the EPRL and FK models are triangulation dependent.

I'm not sure, but I believe Rovelli mentions this as being dependent on a particular 2-complex. To remove this dependence, he proposes Eq. 26, which we discussed.

I think Hellmann suggests that the triangulation dependence may be ok, if their renormalization via Pachner moves gives an ok theory (in a different sense from GFT).

That's a really interesting comment! I'm not sure about the renormalization via Pachner moves--I don't understand that and will have to read the last chapter of Hellmann's thesis to try to grasp what he is talking about.

But I agree with the other things you said. The present formulation does depend on a particular two-complex. Any finite set of two-complexes can be subsumed within a larger one, so one is not absolutely tied down. But the large-volume limit question remains to be tackled, as we discussed re Eq. 26.
===============

BTW I saw the latest bibliography entry and looked up TOCY. It is defined on page 342 of Rovelli's book--Turaev-Ooguri-Crane-Yetter. It struck me as a remarkable idea, to combine spinfoam with Kaluza-Klein. The reference the authors give is to a paper by Ooguri; he presents the model but does not call it TOCY.
 
  • #127
Several people have offered reasons (or hints) that LQG does NOT have the right (re)formulation so far. Atyy has pointed to equations (26) and (27) in a recent review paper, where conditions for convergence have not been shown. He is unquestionably right, although one can differ about how significant this is. Thanks to all who have offered reasons pro or con. I will look back and see what other points surfaced.

The most cogent and extensive arguments, aside from Atyy's, were offered in this post by Tom Stoer, which I quote in entirety.
tom.stoer said:
I don't think that LQG has been redefined.

Rovelli states that it is time to make the next step from the construction of the theory to the derivation of results. Nevertheless the construction is still not complete as long as certain pieces are missing. Therefore e.g. Thiemann's work regarding the Hamiltonian approach (which is not yet completed and for which the relation to spin foams is still not entirely understood) must still back up other programs.

There are still open issues to be solved:
- construction, regularization and uniqueness of the Hamiltonian H
- meaning of "anomaly-free constraint algebra" in the canonical approach
- relation between H and SF (not only kinematical)
- coarse-graining of spin networks, renormalization group approach
- nature and value of the Immirzi parameter
- nature and value of the cosmological constant
- nature of matter and gauge fields (on top, emergent, ...); yes, gauge fields!
And last but not least: If a reformulation is required (which would indicate that the canonical formalism is a dead end), then one must understand why it is a dead end! We don't know yet.

My impression is that Rovelli's new formulation does not address all these issues. His aim is more to develop calculational tools to derive physical results in certain sectors of the theory.

Let's look at QCD: there are several formulations of QCD (PI, canonical, lattice, ...), each approach with its own specific benefits and drawbacks. But nobody would ever claim that QCD has been reformulated (which would sound as if certain approaches were outdated). All approaches are still valid and are heavily used to understand the QCD vacuum, confinement, hadron spectroscopy, the QGP, ... There is not one single formulation of QCD.

So my conclusion is that a new formulation of LQG has been constructed, but not that LQG has been reformulated.

I think all of this is worth reviewing and balancing against the plusses. To do that properly would take work (he put considerable thought into the list). If anybody wants to help out it would be very welcome! I can at best just nibble away piecemeal.
 
  • #128
From looking at the list, I'd say that a lot of what is seen as a possible trouble with the new formulation has to do with its being different from the old one.

The old approach (as most often presented) used a smooth 3D manifold, in which spinnetworks were embedded, and took a canonical or Hamiltonian approach to the dynamics.

The new approach does not need a smooth manifold---there is no continuum. And it does not need a Hamiltonian. Transition amplitudes between states of geometry are calculated via spinfoam. So that leaves unanswered questions about the prior approach.

It might happen that the older canonical LQG will be completed and that it will even turn out to be mathematically equivalent! It is hard to predict---impossible to predict.
The person most active in developing canonical (Hamiltonian) LQG is, I believe, Thomas Thiemann at Uni Erlangen. Jerzy Lewandowski at Warsaw also has an active interest in it (but not exclusively; he also works on spinfoam LQG). We'll see what these folks and their students come up with.

As Tom points out, there is no reason a theory cannot have several equivalent versions.
 
  • #129
marcus said:
The old approach (as most often presented) used a smooth 3D manifold, in which spinnetworks were embedded, ...
Only for its derivation (better: motivation)

He must so to speak throw away the ladder, after he has climbed up on it
Wittgenstein


marcus said:
The new approach does not need a smooth manifold
Neither does the old one, after its completion.

marcus said:
And it does not need a Hamiltonian.
Why does one prefer the new formalism? Because it is superior to the old one, or because the problems of the old one could not be solved?

marcus said:
As Tom points out, there is no reason a theory cannot have several equivalent versions.
I have not seen a single Qxxx theory that does not have different approaches.
 
  • #130
All good points! I agree completely (also with the suspicion that a reason to adopt the new LQG is that the problem of determining the Hamiltonian proved somewhat intractable, though they could still end up solving it).

I would put the present situation this way: a new combined research field of QG is being forged. It takes something of Connes' NC geometry, something of LQG, something of string, something of fields on curved or NC spacetime, something of Regge triangulations, something of "higher gauge" category theory, something of cosmology---all of the six or eight topics mentioned by the organizers of the Zurich conference.

I would say the Zurich conference is of historic importance, and that because Barrett is a leading organizer (with Nicolai, Grosse, Rovelli, Picken, ...) part of Barrett's job is to give a short list of goals (defining direction and a measure of progress). He has to. And we have to pay at least partial attention.
==quote Barrett http://arxiv.org/abs/1101.6078 ==
The wish-list of properties for a state sum model is
  • It defines a diffeomorphism-invariant quantum field theory on each 4-manifold
  • The state sum can be interpreted as a sum over geometries
  • Each geometry is discrete on the Planck scale
  • The coupling to matter fields can be defined
  • Matter modes are cut off at the Planck scale
  • The action can include a cosmological constant
Diffeomorphism invariance here actually means invariance under piecewise-linear homeomorphisms, but this is...
==endquote==

You have already commented on how problematic the first wish (the diffeomorphism-invariant QFT on each 4-manifold) is. I think that will just have to be worked out by relaxing the structure, at first maybe to PL (piecewise linear) and perhaps relaxing it even further later.

Looking at LQG research in this historic context, I would be interested to know what you see---I see it spurring a strong drive to accommodate matter, possibly trying several different ways at first.

http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:start
 
  • #131
marcus said:
A new combined research field of QG is being forged. It takes something of Connes' NC geometry, something of LQG, something of string, something of fields on curved or NC spacetime, something of Regge triangulations, something of "higher gauge" category theory, ...
Too complicated. All successful theories are based on rather simple structures. I agree that it may be necessary to go through all that stuff - just to find out what one has to throw away, and why.
 
  • #132
tom.stoer said:
Too complicated. All successful theories are based on rather simple structures. I agree that it may be necessary to go through all that stuff - just to find out what one has to throw away, and why.

Again, I fully agree. I was not suggesting that the SOLUTION would involve elements of all those disciplines.

What I said, or meant to say, was that a greater QG research field is being forged: a larger combined community of researchers able to appreciate and benefit from each other's ideas. That's what conferences do, I think.

Hotels in Zurich are expensive.
 
  • #133
While we're throwing in everything and the kitchen sink, let's not forget http://arxiv.org/abs/0907.2994
 
  • #134
atyy said:
While we're throwing in everything and the kitchen sink, let's not forget http://arxiv.org/abs/0907.2994

Heh heh, so you would like one of them to be presenting a paper at the conference too!
Tensor network decompositions in the presence of a global symmetry
Sukhwinder Singh, Robert N. C. Pfeifer, Guifre Vidal

Personally I'm not making suggestions to the organizers, but what you say could certainly happen. We don't know the final program or the final list of speakers.

I tend to just trust the pros. When you forge a new field of research, all it has to be is good enough and representative enough of what you have in mind, plus simple and clear enough to communicate to the broader scientific community.

If it is right enough, then other stuff that belongs in it will gradually be attracted, gather, and accrete to it.

Actually they didn't put in the kitchen sink yet :biggrin: The half-dozen topics they put up front are, I thought, selective. I can see the focus and the organic connections.
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:start

But we could look down the speaker list and see if, say, Guifre Vidal is on there.
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:speakers
It's only 30 names and it's alphabetized, so it is easy to check. No.

Well maybe next time. If this year's is Quantum Theory and Gravitation 2011 then maybe there will be a Quantum Theory and Gravitation 201x. Seems reasonable enough.
 
  • #135
marcus said:
Heh heh, so you would like one of them to be presenting a paper at the conference too!
Tensor network decompositions in the presence of a global symmetry
Sukhwinder Singh, Robert N. C. Pfeifer, Guifre Vidal

Personally I'm not making suggestions to the organizers, but what you say could certainly happen. We don't know the final program or the final list of speakers.

I tend to just trust the pros. When you forge a new field of research, all it has to be is good enough and representative enough of what you have in mind, plus simple and clear enough to communicate to the broader scientific community.

If it is right enough, then other stuff that belongs in it will gradually be attracted, gather, and accrete to it.

Actually they didn't put in the kitchen sink yet :biggrin: The half-dozen topics they put up front are, I thought, selective. I can see the focus and the organic connections.

But we could look down the speaker list and see if, say, Guifre Vidal is on there.

Oh, he's just moved to an even more significant place than the speaker list :biggrin:
 
  • #136
atyy said:
Oh, he's just moved to an even more significant place than the speaker list :biggrin:
Well you could say a more significant place than the Zurich speaker list is Australia. And he certainly has moved to Australia. Looks like a bright promising young guy, BTW.

I'm beginning to suspect that, consciously or unconsciously, the organizers of the 2011 "Quantum Theory and Gravitation" conference are making a kind of statement by holding it at the ETH (Eidgenössische Technische Hochschule, the Swiss Federal Institute of Technology) in Zurich. ETH Zurich was Einstein's alma mater.
He was at the beginning of quantum theory with his 1905 photon paper, and at the beginning of the 1915 geometrical theory of gravity: the two themes of the conference.
It dawned on me that the organizers (Barrett, Nicolai, Rovelli, Grosse, Picken) are forging the QG research field in a place with thrilling reminders of the past.

And it is a past where the major revolutions in physics have emerged in Europe. Maybe we shouldn't mention that, it might offend some US-physics chauvins
(my etymological source says a chauvin is a balding diehard, chauve is French for bald, and we all have our share.)

But anyway, US-European issues aside, it just dawned on me that Göttingen could be next. Also a place thrilling with reminders, of Hilbert, and Heisenberg, and Gauss, and Riemann-of-the-manifolds. If you hold a major historic conference at ETH Zurich how can you not hold a followup at Uni Göttingen?

Just a two-penny dream.
 
  • #137
marcus said:
Well you could say a more significant place than the Zurich speaker list is Australia. And he certainly has moved to Australia. Looks like a bright promising young guy, BTW.

http://www.perimeterinstitute.ca/News/In_The_Media/Guifre_Vidal_to_Join_Perimeter_Institute_as_Senior_Faculty/
 
  • #138
atyy said:
http://www.perimeterinstitute.ca/News/In_The_Media/Guifre_Vidal_to_Join_Perimeter_Institute_as_Senior_Faculty/

From Spain to Queensland to Perimeter. Great! Information theory+condensed matter also great.
Clearly a rising star. Since his first language must be Spanish, let us say Borges' prayer for the success of this young person:

Sólo una cosa no hay: es el olvido.
Dios, que salva el metal, salva la escoria,
y cifra en su profética memoria
las lunas que serán y las que han sido.

Ya todo está. Los miles de reflejos
que entre los dos crepúsculos del día
tu rostro fue dejando en los espejos
y los que irá dejando todavía.

Y todo es una parte del diverso
cristal de esa memoria: el universo.
...
...

And everything is part of that diverse
crystalline memory, the universe.
 
  • #139
But actually Atyy, Perimeter may have lost its edge, at least to the extent that one does not see many PI names in the 2011 Zakopane school or the speakers list for the 2011 "Quantum Theory and Gravitation" conference.

It has moved in the direction of established ideas, conventional reputation, and some celebrity hunting. Still a good place, but not as outstanding as say 4 or 5 years ago. Just my impression, but I've seen similar comments from others lately.

So the "to an even more significant place" comment, though witty, may actually not be exact.

I just checked the "QT&G" speakers list and out of 30 speakers the only PI guy was Laurent Freidel.
http://www.conferences.itp.phys.ethz.ch/doku.php?id=qg11:speakers
If I remember right he joined the PI faculty back in 2006, when Perimeter really was leading edge and still small. Freidel was only their 9th faculty appointment. Here is the 2006 announcement:
http://www.perimeterinstitute.ca/News/In_The_Media/Laurent_Freidel_becomes_Faculty/

Out of over 100 participants at Zakopane, one Perimeter guy, Tim Koslowski:
http://cift.fuw.edu.pl/users/jpa/php/listofparticipants.php
and no PI person on the Zakopane list of speakers.
 
  • #140
Pedagogically speaking, the most useful and accurate introduction to LQG is probably now Livine's January "fifty-sixtyone" monograph.

http://arxiv.org/abs/1101.5061

It is amazingly good. The perspective is balanced and complete (although he declares it shaped by his own personal mix of conservative and "top-down" taste).

I would suggest printing out the TOC plus pages 1-62 and pages 79-88.
I think the PDF file calls these pages 1-64 and 81-90; the PDF adds two to the page numbers, or some such thing.

The thing about Livine's style in this piece is that he takes it easy. He doesn't rush. He fills in detail (that a different expositor might assume we already know). He says explicitly where he is skipping something, and gives a reference.

I particularly liked seeing where he takes a paragraph or so to explain the transitional importance of Sergei Alexandrov's "CLQG".
Livine coauthored with Alexandrov back around 2002-2003 and based his 2002 PhD thesis on some ideas he developed which bridge between SU(2) labels and SL(2,C) labels, and that has turned out to make quite a difference--stuff like Livine's "projected spin networks". I remember reading parts of Livine's PhD thesis back around 2004. He was working out ideas for bridging between the spin networks of the canonical approach and the spinfoams of the path integral approach, and that meant relating SU(2) reps to SL(2,C) reps. And that kind of stuff has come back strongly in the past two or three years, like 2008-2010.

Alexandrov's CLQG may have been passed by---I can't say about that, maybe it was not quite on the main historical track. But it was seminal all the same. Livine in his discussion gives it its due recognition.

This is in section 2.1, the first 28 or so pages, where he is giving the history (including canonical approach) that led up to the present formulation.

This piece is actually a pleasure to read. Carefully informative but also in a certain sense "laid back" (slang for relaxed and untroubled).

If anyone wants an introduction, they could do worse than try this one.
 
  • #141
marcus said:
But actually Atyy, Perimeter may have lost its edge, at least to the extent that one does not see many PI names in the 2011 Zakopane school or the speakers list for the 2011 "Quantum Theory and Gravitation" conference.

It has moved in the direction of established ideas, conventional reputation, and some celebrity hunting. Still a good place, but not as outstanding as say 4 or 5 years ago. Just my impression, but I've seen similar comments from others lately.

Well, it's true that Livine's moved from Perimeter ;)
 
