Signs LQG has the right redefinition (or wrong?)

  • Thread starter: marcus
  • Tags: LQG
  • #51
marcus said:
It would be a good brain-exercise, I think, to imagine how ordinary 3D space can be "tiled" or triangulated by regular tetrahedra. You can set down a layer of pyramids pointing up, but then how do you fill in? Let's say you have to use regular tets (analogous to equilateral triangles) for everything.

And when you have 3D space filled with tets, what is the dual to that triangulation? This gets us off topic. If you want to pursue it maybe start a thread about dual cell-complexes or something? I'm not an expert but there may be someone good on that.


Regular tetrahedra cannot fill space. Tetrahedra combined with octahedra can fill space. See the isotropic vector matrix or octet-truss.

...and I think the dual is packed rhombic dodecahedra
 
  • #52
marcus said:
Oh good! You are on your own. I googled "dual cell complex" and found this:
http://www.aerostudents.com/files/constitutiveModelling/cellComplexes.pdf

Don't know how reliable or helpful it may be.

The dual skeleton is defined quite nicely on p. 31 in this paper: http://arxiv.org/abs/1101.5061

which you identified in the bibliography thread.
 
  • #53
sheaf said:
The dual skeleton is defined quite nicely on p. 31 in this paper: http://arxiv.org/abs/1101.5061

which you identified in the bibliography thread.

Thanks! I checked page 31 of Etera Livine's spinfoams paper and it does give a nice understandable presentation. That paper is like a little introductory textbook!
I will quote a sample passage from page 31:

==quote Livine 1101.5061 ==

Starting with the simpler case of a three-dimensional space-time, a space-time triangulation consists in tetrahedra glued together along their triangles. The dual 2-skeleton is defined as follows. The spinfoam vertices σ are dual to each tetrahedron. Those vertices are all 4-valent with the four attached edges being dual to the four triangles of the tetrahedron. Each edge e then relates two spinfoam vertices, representing the triangle which glues the two corresponding tetrahedra. Finally, the spinfoam faces f are reconstructed as dual to the triangulation’s edges. Indeed, considering an edge of the triangulation, we go all around the edge and look at the closed sequences of spinfoam vertices and edges which represent respectively all the tetrahedra and triangles that share that given edge. This line bounds the spinfoam face, or plaquette, dual to that edge. Finally, each spinfoam edge e has three plaquettes around it, representing the three triangulation edges of its dual triangle. To summarize the situation:

3d triangulation ↔ spinfoam 2-complex
___________________________________
tetrahedron T ↔ 4-valent vertex σ
triangle t ↔ edge e
edge ↔ plaquette f

The setting is very similar for the four-dimensional case. The triangulated space-time is made from 4-simplices glued together at tetrahedra. Each 4-simplex is a combinatorial structure made of 5 boundary tetrahedra, glued to each other through 10 triangles. Once again, we define the spinfoam 2-complex as the dual 2-skeleton:
...
==endquote==
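That dictionary (tetrahedron ↔ vertex, triangle ↔ edge, edge ↔ plaquette) is easy to make concrete. Here is a minimal Python sketch, my own illustration rather than anything from the paper, for a hypothetical complex of two tetrahedra glued along one triangle:

[code]
from itertools import combinations
from collections import defaultdict

# Hypothetical input: two tetrahedra glued along the triangle (0, 1, 2).
tets = [frozenset({0, 1, 2, 3}), frozenset({0, 1, 2, 4})]

# Spinfoam vertices: one per tetrahedron.
vertex = {t: f"sigma_{i}" for i, t in enumerate(tets)}

# Spinfoam edges: one per triangle; a shared (internal) triangle links two vertices.
tri_to_verts = defaultdict(list)
for t in tets:
    for tri in combinations(sorted(t), 3):
        tri_to_verts[tri].append(vertex[t])

# Plaquettes: one per triangulation edge, bounded by the tets sharing that edge.
edge_to_verts = defaultdict(set)
for t in tets:
    for e in combinations(sorted(t), 2):
        edge_to_verts[e].add(vertex[t])

print("edges dual to triangles:", dict(tri_to_verts))
print("plaquettes dual to edges:", {e: sorted(v) for e, v in edge_to_verts.items()})
[/code]

Running it, the shared triangle (0, 1, 2) shows up as the one spinfoam edge connecting sigma_0 and sigma_1, exactly as in the table above.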
 
  • #54
Helios said:
Regular tetrahedra cannot fill space...

I think that is right, Helios. The dihedral angle of a regular tet is arccos(1/3), about 70.5 degrees, which does not divide evenly into 360, so regular tets cannot close up flush around a shared edge.

Suppose I allow two kinds of tet. Can it be done? Please tell us if you know.


[This may not be absolutely on topic, because all we need to accomplish what Etera is talking about is some sort of tetrahedral triangulation of space, which I'm pretty sure exists (if we relax the regularity condition slightly). But it's not a bad exercise for the imagination to think about it. Helios might be a good teacher here.]
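Here is a quick numerical check of the dihedral angle mentioned above (a Python sketch; the arccos(1/3) formula for a regular tetrahedron is standard):

[code]
import math

# Dihedral angle of a regular tetrahedron: arccos(1/3) ~ 70.53 degrees.
dihedral = math.degrees(math.acos(1.0 / 3.0))
print(f"dihedral angle = {dihedral:.4f} degrees")

# Tets meeting flush around a shared edge would need 360/dihedral to be an
# integer; it is about 5.10, so regular tets alone cannot close up.
print(f"tets around an edge = {360.0 / dihedral:.4f}")
[/code]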
 
  • #55
Helios said:
Regular tetrahedra cannot fill space.

But irregular tetrahedra can!
 
  • #56
MTd2 said:
But irregular tetrahedra can!

Indeed, only slightly irregular. The construction I was vaguely remembering is the one in the 2001 paper by Ambjorn, Jurkiewicz, and Loll; I'll give the reference below. They are doing 2+1 gravity, so spacetime is 3D. The basic idea is simple layering. They have two types of tets, red and blue. Both look almost regular but slightly distorted. The red have an equilateral base but the wrong height (slightly taller or shorter than they should be). They set them out in a red layer covering a surface (a plane, say) with little triangle-base pyramids.
Now where each pyramid meets its neighbor there is a kind of V-shaped canyon.
(I could be misremembering this, but you will, I hope, see how to correct me.)

The blue tets are also nearly regular but slightly stretched in some direction. They have a dihedral angle so that they precisely fit into that V-shape canyon. You hold the tet with one edge horizontal like the keel of a little boat. It fits right in. The top will be a horizontal edge rotated at right angles.

So now you have the upside-down picture: a blue layer with upside-down pyramid holes. So you put in red tets with their flat equilateral bases facing upwards. Now you have level ground again, made of their bases, and you can start another layer.

I could be wrong. I am just recalling from that paper by Renate Loll et al. I haven't checked back to see. Please correct me if I'm wrong about how they do it. Let me get the reference. This is the best introduction to CDT I know. It is easy, concrete, and does not gloss over anything. If anyone knows a better introduction, please say.

http://arxiv.org/abs/hep-th/0105267
Dynamically Triangulating Lorentzian Quantum Gravity
J. Ambjorn (NBI, Copenhagen), J. Jurkiewicz (U. Krakow), R. Loll (AEI, Golm)
41 pages, 14 figures
(Submitted on 27 May 2001)
"Fruitful ideas on how to quantize gravity are few and far between. In this paper, we give a complete description of a recently introduced non-perturbative gravitational path integral whose continuum limit has already been investigated extensively in d less than 4, with promising results. It is based on a simplicial regularization of Lorentzian space-times and, most importantly, possesses a well-defined, non-perturbative Wick rotation. We present a detailed analysis of the geometric and mathematical properties of the discretized model in d=3,4. This includes a derivation of Lorentzian simplicial manifold constraints, the gravitational actions and their Wick rotation. We define a transfer matrix for the system and show that it leads to a well-defined self-adjoint Hamiltonian. In view of numerical simulations, we also suggest sets of Lorentzian Monte Carlo moves. We demonstrate that certain pathological phases found previously in Euclidean models of dynamical triangulations cannot be realized in the Lorentzian case."
 
  • #57
I welcome disagreement and corrections, but I want to keep hitting the main topic. I think there are signs that LQG has made the right redefinition and has reached an exciting stage of development. Please disagree, either in general or on details. I will give some details.

First, notice that CDT, Asymptotic Safety, and Causal Sets appear persistently numerical (not analytic)---they run on massive computer experiments instead of equations. This is a wonderful way to discover things, a great heuristic tool, but it does not prove theorems. At least so far, many of the other approaches seem insufficiently analytical and lack the symbolic equations that are traditional in physics.

As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter." That is one way of stating the QG researchers' goal: a new understanding of space and time, and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among the alternative QGs, LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classical de Sitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.

So I will copy my last substantive post about that and try to move forward from there.

marcus said:
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G × G × ... × G.
Physicists think that they can write down x_i and have it mean either x_i or else the N-tuple (x_1, x_2, ..., x_N),
depending on context. This is all right to a certain extent but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one way. One failure mode is excessive mathematical complexity: to be successful a theory should (ideally) be mathematically simple,
as well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in a fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h),
where C is a two-complex with L boundary or "surface" edges, and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The notation h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l" which runs from 1 to L, and has no symbol for h.

The central quantity in the theory is the complex number Z_C(h) and one can think of that number as saying

Z_roadmap(boundary conditions)
 
  • #58
To recapitulate, there are signs the 2010 reformulation might be right---or to put it another way, good reasons for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.

There is a relatively simple, direct way to grasp the theory: understand equation (4) on page 1 of that paper. That equation defines the central quantity of the theory: a complex number Z_C(h). It is a geometry evolution amplitude---the amplitude (related to probability) that the geometry will evolve from initial to final conditions specified by boundary labels denoted h, along a roadmap specified by the two-complex ("foam") denoted C.

Z_roadmap(boundary conditions)

There is no extra baggage, no manifold, no embeddings. Understanding comes down to understanding that equation (4).

I've made one change in notation from what you see in equation (4), namely I introduced
a symbol h to stand for (h_1, h_2, ..., h_L), the generic element of SU(2)^L. L is the number of boundary links in the network surrounding the foam. So h is an ordered collection of group elements helping to determine geometric boundary conditions.

One thing on the agenda, if we want to understand (4), is to see why the integrals are over the specified number of copies of the groups---why there are that many labels to integrate out, instead of some other number. So for example you see on the first integral the exponent 2(E-L) - V. We integrate over that many copies of the group. Let's see why it is that number. E and V are the numbers of edges and vertices in the foam C. So E-L is the number of internal edges.
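As a sanity check on that counting, here is a toy Python sketch (the numbers E, L, V are made up, not taken from any particular foam):

[code]
# Toy counting for the first integral in equation (4).
E, L, V = 10, 4, 3         # edges, boundary edges, vertices of a hypothetical foam

internal = E - L           # internal edges of the foam
copies = 2 * internal - V  # Rule 1: two integrals per internal edge;
                           # Rule 4: one redundant integration dropped per vertex
print(f"integrate over {copies} copies of SL(2,C)")   # here: 9 copies
[/code]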
 
  • #59
tom.stoer said:
The only (minor!) issue is the derivation of the semiclassical limit etc.

Why is this only a minor issue?

How about the classical limit?
 
  • #60
I think that the derivation of a certain limit is a minor issue compared to the problem that a construction of a consistent, anomaly-free theory (derived as quantization of a classical theory) is not available.
 
  • #61
@Tom
The post #35 which Atyy just now quoted was one of the most cogent (convincing) ones on the thread. It is balanced and nuanced, so I want to quote it whole, as context. I think I understand how, when you look at it in the entire context, you can say that verifying some limit is a project of minor stature compared with postulating a QFT which is not "derived" from a classical theory by traditional "tried-and-true" methods.
tom.stoer said:
... I don't want to criticize anybody (Rovelli et al.) for not developing a theory for the cc. I simply want to say that this paper does not answer this fundamental question and does not explain how the cc could fit into an RG framework (as is expected for other couplings).

---------------------

We have to distinguish two different approaches (I bet Rovelli sees this more clearly than I do).
- deriving LQG based on the EH or Holst action, Ashtekar variables, loops, ... extending it via q-deformation etc.
- defining LQG using simple algebraic rules, constructing its semiclassical limit and deriving further physical predictions

The first approach was developed for decades, but still fails to provide all required insights, like (especially) H. The second approach is not bad, as it must be clear that any quantization of a classical theory is intrinsically incomplete; it can never resolve quantization issues, operator ordering etc. Having this in mind it is not worse to "simply write down a quantum theory". The problem with that approach was never the correct semiclassical limit (this is a minor issue) but the problem of writing down a quantum theory w/o referring to classical expressions!

Look at QCD (again :-) Nobody is able to "guess" the QCD Hamiltonian; every attempt to do this would break numerous symmetries. So one tries (tried) to "derive" it. Of course there are difficulties like infinities, but one has rather good control regarding symmetries. Nobody is able to write down the QCD PI w/o referring to the classical action (of course it's undefined, infinite, has ambiguities ..., but it does not fail from the very beginning). Btw.: this hasn't changed over decades, but nobody cares as the theory seems to make the correct predictions.

Now look at LQG. The time for derivations may be over. So instead of derived LQG (which by my argument explained above is not possible to 100%) one may simply postulate LQG. The funny thing is that, in contradistinction to QCD, we seem to be able to write down a class of fully consistent theories of quantum gravity w/o derivation, w/o referring to classical expressions, w/o breaking of certain symmetries etc. The only (minor!) issue is the derivation of the semiclassical limit etc.

From a formal perspective this is a huge step forward. If this formal approach is correct, my concerns regarding the cc are a minor issue only.

Postulating is the word you used. It may indeed be time to postulate a quantum understanding of space and time, rather than continue struggling to derive. After all I suppose one could say that Quantum Theory itself was originally "invented" by strongly intuitive people like Bohr and Heisenberg with the help of their more mathematically adept friends. It had to be invented de novo before one could say what it means to "quantize" some classical thing.

Or it may not yet be time to take this fateful step of postulating a new spacetime and a new no-fixed-manifold field theory.

So there is the idea of the stature of the problem. A new idea of spacetime somehow has more stature than merely checking a limit. If the limit is wrong one can often go back and fix what was giving the trouble. We already saw that in LQG in 2007. So it could be no big deal compared with postulating the right format in the first place. I can see the sense of your saying "minor".

 
  • #62
tom.stoer said:
I think that the derivation of a certain limit is a minor issue compared to the problem that a construction of a consistent, anomaly-free theory (derived as quantization of a classical theory) is not available.

Yes, there is no need, in fact no reason, to go from classical theory to quantum theory. But aren't the semiclassical and classical limits very important? We seek all quantum theories consistent with the known experimental data. This is the same sort of concern as requiring that string theory be shown to contain the standard model of particle physics. We ask if there is more than one such theory, so that future experiments and observations can distinguish between them.
 
  • #63
I agree that deriving this limit is important, but if there is a class of theories they may differ only in the quantum regime (e.g. by operator ordering or anomalies which may vanish in the classical limit), and therefore this limit doesn't tell us much about the quantum theory itself.
 
  • #64
Continuing bit by bit with the project I mentioned earlier of understanding equation (4):
marcus said:
...
One thing on the agenda, if we want to understand (4), is to see why the integrals are over the specified number of copies of the groups---why there are that many labels to integrate out, instead of some other number. So for example you see on the first integral the exponent 2(E-L) - V. We integrate over that many copies of the group. Let's see why it is that number. E and V are the numbers of edges and vertices in the foam C. So E-L is the number of internal edges.

I try to use only regular symbols and avoid going to TeX, so I cannot duplicate the fancy script Vee the paper uses for the total valence of all the faces of the two-complex C.
That is, you count the number of edges that each face f has, and add it all up.
Naturally there will be overcounting because a given edge can belong to several faces.
So this number is bigger than E, the number of edges.

I see no specially good symbol so I will make a bastard use of the backwards ∃
to stand for the total edges of all the faces, added up.

Now in equation (4) you see that the second integral is over a Cartesian product of ∃-L copies of the group SU(2). Namely, it is a Haar-measure integral over SU(2)^(∃-L).

How to think about this? We look at the total sides ∃ of all the faces, throw away the boundary edges, and keep only the internal edges in our count. Now this goes back to equation (2), which assigns "a group integration to each couple consisting of a face and an internal edge." So that is beginning to make sense. BTW, anyone who wants to help talk through the sums and integrals of equation (4) is heartily welcome!
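To make the count concrete, here is a toy Python sketch (the face and edge data are hypothetical, my own illustration):

[code]
# Total valence: count the edges of each face and add it all up.
faces = {
    "f1": ["e1", "e2", "e3"],
    "f2": ["e2", "e3", "e4"],  # e2, e3 shared with f1, so they get counted twice
}
boundary = {"e1", "e4"}        # hypothetical boundary edges, so L = 2

total_valence = sum(len(edges) for edges in faces.values())  # the backwards-E
internal_valence = total_valence - len(boundary)             # each boundary edge
                                                             # borders one face
print(f"total valence = {total_valence}, internal valence = {internal_valence}")
[/code]

The internal valence (here 4) is exactly the number of (face, internal edge) couples, which is the number of h labels being integrated over.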
 
  • #65
Just as QED does not replace classical electromagnetism but simply goes deeper---we still use the Maxwell equations!---so the job of LQG is not to replace the differentiable manifold (that Riemann gave us around 1850) but to go deeper. That's obvious, but occasionally reminding ourselves of it may still be appropriate. The manifold is where differential equations live---we will never give it up.

But this equation (4) of http://arxiv.org/abs/1010.1939 is (or could be) the handle on geometry deeper than the manifold. So I want to "parse" it a little. "Parse" is what one learns to do with sentences, in school. It means to divide up into parts.

You see that equation (4) is preceded by four Feynman rules.
I'm going to explain more explicitly, but one brief observation is that in (4) the second integration and the second product over edges together implement Rule 2.

The other portions of (4) implement Rule 3.

Let's see if we can conveniently type some parts of equation (4) without resorting to LaTex.
Typing at an internet discussion board, as opposed to writing on a blackboard, is an abiding bottleneck.

∫_(SU(2)^(∃-L)) dh_ef

Remember that e and f are just numbers tagging the edges and faces of the foam.
e = 1,2,...,E
f = 1,2,...,F
and the backwards ∃ is the "total valence" of all the faces: the number of edges of each face, added up. The paper uses a different symbol for that, which I cannot type. So anyway ∃-L is the total internal valence of all the faces: what you get if you add up the number of non-boundary edges that each face has. Recall that L is the number of boundary edges (those bordering only one face, the unshared edges).

So let's see how the integral looks. It is a part of equation (4) that helps to implement Rule 2.
================

Well, it looks OK. The integral is over the group manifold
SU(2)^(∃-L)
consisting of ∃-L copies of the compact group SU(2). It seems to read OK. If anyone thinks it doesn't, please say.

Then what goes into that integral, to implement geometric Feynman Rule 2, is a product over all the edges e bordering a given face f.
I'll try typing that too.
 
  • #66
To keep on track, since we have a new page, I will copy the "business part" of my last substantive post.
==quote==

As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter." That is one way of stating the QG researchers' goal: a new understanding of space and time, and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among the alternative QGs, LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classical de Sitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.
...
...
[To expand on the point that the 1010.1939 form] "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in a fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h),
where C is a two-complex with L boundary or "surface" edges, and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The symbol h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l" which runs from 1 to L, and has no symbol for h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as giving a quantum probability, a transition amplitude:

Z_roadmap(boundary conditions)

==endquote==

I added some clarification and emphasis to the last sentence.
 
  • #67
OK, so part of equation (4) is an integral of a product of group characters, which addresses Rule 2 of the list of Feynman rules.

∫_(SU(2)^(∃-L)) dh_ef ∏_(e ∈ ∂f) χ^(j_f)(h_ef)

where the idea is you fix a face in the two-complex, call it f, and you look at all the edges e bordering that face, and you look at their labels h_ef. These labels are abstract group elements belonging to SU(2). But what you want to integrate is a number. So you cook the group element h_ef down to a number χ^(j_f)(h_ef), multiply the numbers corresponding to every edge of the face to get a product number for the face, and then start adding those numbers up. That's it, that's the integral (the particular piece of the integral we are looking at).

But what's the superscript j_f on the chi? Well, a nice set of representations of the group SU(2) is labeled by half-integers j, and if you look back at equation (4) you see that there is a sum running through the possible j, for each face f. So there is a sum over the possible choices j_f. And the character chi is just the dumbed-down version of the j_f-rep: the trace of the rep matrix.

It is basically just a contraption to squeeze the juice out of the apples. You pull the lever and squeeze out the juice and add it up (the adding up part is the integral.)

There is another part of equation (4) that responds to geometric Feynman Rule 3. I will get to that, hopefully later this afternoon.

I really like how they get this number Z, this quantum probability number Z_C(h).
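For anyone who wants to see the juice-squeezing numerically: here is a small Python sketch of the SU(2) character (the closed form χ_j(θ) = sin((2j+1)θ/2)/sin(θ/2) for a rotation by angle θ is standard; the check against the j = 1/2 matrix is my own illustration):

[code]
import numpy as np

# SU(2) character chi_j for a rotation by angle theta.
def chi(j, theta):
    return np.sin((2 * j + 1) * theta / 2) / np.sin(theta / 2)

theta = 1.3
# The fundamental (j = 1/2) rep of this rotation is diag(e^{i t/2}, e^{-i t/2});
# its trace should equal chi(1/2, theta) = 2 cos(theta/2).
g = np.diag([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)])
print(chi(0.5, theta), np.trace(g).real)   # both ~ 1.592
[/code]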

 
  • #68
I accidentally lost most of this post (#68) while editing and adding to it. What follows is just a fragment, hard to understand without the vanished context.
=======fragment========

Going back to ∫_(SL(2,C)^(2(E-L)-V)) dg_ev, I see that the explanation of the exponent 2(E-L)-V is to look at Rule 1 and Rule 4 together.

Rule 1 says for every internal edge you expect two integrals dg_ev,
where the v stands for either the source or the target vertex of that edge.

Well, there are L boundary edges, and the total number of edges in the foam is E. So there are E-L internal edges. So Rule 1 would have you expect 2(E-L) integrations dg_ev over SL(2,C).

Simple enough, but then Rule 4 says that at each vertex one integration is redundant and is omitted.
So V being the number of vertices, that means V integrations are dropped. And we are left with
2(E-L) - V.

Intuitively, what all those SL(2,C) integrations are doing is working out all the possible gauge transformations that could happen to a given SU(2) label h_ef on an edge e of a face f.

Now we need to look at Rule 3 and see how it is implemented in equation (4)

Rule 3 says to assign to each face f in the foam a certain sum ∑_(j_f).
The sum is over all possible half-integers j; since we are focusing on a particular face f, we tag that run of half-integers j_f.
And that sum is simply a sum of group character numbers (each multiplied by the integer 2j+1, which is the dimension of the vector space of the j-th rep). Here's the sum:
∑_(j_f) (2j_f+1) χ^(γ(j_f+1), j_f)(g)

Now the only thing I didn't specify is what group element that generic "g" stands for, that is plugged into the character χ.


∑_(j_f) (2j_f+1) χ^(γ(j_f+1), j_f)(∏_(e ∈ ∂f) (g_(es_e) h_ef g_(et_e)^(-1))^(ε_lf))



=====end fragment===

Since the notation when lost is hard to recover, I am going to leave this as it is and not try to edit it.
I will start a new post.

Found another fragment of the original post #68!
==quote==
Let's move on and see how equation (4) implements geometric Feynman Rule 3.
Now we are going to be integrating over multiple copies of a somewhat larger group, SL(2,C)

∫_(SL(2,C)^(2(E-L)-V)) dg_ev


As before we take a rep, and since we are working with a half-integer j_f, this time it's going to be tagged by a pair of numbers (γ(j_f+1), j_f), and we plug in a group element, which gives a matrix. And then as before we take the TRACE of that matrix, which does the desired thing and gives us a complex number.

Here it is:
χ^(γ(j_f+1), j_f)(g)

That's what happens when we plug any old generic g from SL(2,C) into the rep. Now we have to say which "g" we want to plug in. It is going to be a PRODUCT of "g"s that we pick up going around the chosen face. And also, meanwhile going around, integrating out every possible SL(2,C) gauge transformation on the edge labels. Quite an elaborate circle dance!

Before, when we were implementing Rule 2, it was simpler. We just plugged a single group element h_ef into the rep, and that h_ef was what we happened to be integrating over.

For starters we can look at the wording of Rule 3 and see that it associates A SUM TO EACH FACE.
So there down in equation (4) is the sum symbol, and the sum clearly involves all the edges that go around the face. So that's one obvious reason it's more complicated.

==endquote==

As I said above, I am going to leave this as it is and start a new post.

 
  • #69
For anybody coming in new to this thread, at the moment I am chewing over the first page of what I think is the best current presentation of LQG, which is an October 2010 paper
http://arxiv.org/abs/1010.1939

Accidentally trashed much of my earlier post (#68) so will try to reconstruct using whatever remains.

In post #67 I was talking about how equation (4) implements Feynman Rule 2.

Now let's look at Rule 3 and see how it is carried out.

There's one tricky point about Rule 3---it involves elements g of a larger group, SL(2,C).
This has a richer set of representations, so the characters are not simply labeled by half-integers.

As before, what is inside the integral will be a product of group character numbers of the form χ(g), where this time g is in SL(2,C). The difference is that SL(2,C) reps are not classified by a single half-integer j, but by a pair of numbers (p, j), where j is a half-integer but p doesn't have to be: it can be a real number, like for instance the Immirzi number γ = 0.274... multiplied by (j+1). Clearly a positive real number, not a half-integer.

χ^(γ(j_f+1), j_f)(g)

Rule 3 says to assign to each face f in the foam a certain sum ∑_(j_f).
The sum is over all possible half-integers j; since we are focusing on a particular face f, we tag that run of half-integers j_f.
And that sum is simply a sum of group character numbers (each multiplied by the integer 2j+1, which is the dimension of the vector space of the j-th rep). Here's the sum:
∑_(j_f) (2j_f+1) χ^(γ(j_f+1), j_f)(g)

Now the only thing I didn't specify is what group element that generic "g" stands for, that is plugged into the character χ. Well it stands for a kind of circle-dance where you take a product of edge labels going around the face.

∏_(e ∈ ∂f) (g_(es_e) h_ef g_(et_e)^(-1))^(ε_lf)

And when you do that there is the question of orientation. Each edge has its own orientation, given by its source and target vertex assignment. And each face has an orientation, a preferred cyclic ordering of its edges. Since edges are shared by two or more faces, you can't count on the orientations of edges being consistent. So what the epsilon exponent does is fix that: it is either 1 or -1, whatever is needed to make the orientations agree.
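Here is a small Python sketch of that orientation bookkeeping (a toy implementation of my own; the real definition lives on the two-complex, but the ±1 logic is the same):

[code]
# epsilon = +1 if the edge's own (source -> target) orientation agrees with
# the face's cyclic ordering of vertices, -1 if it runs against it.
def epsilon(edge, face_cycle):
    n = len(face_cycle)
    for i in range(n):
        pair = (face_cycle[i], face_cycle[(i + 1) % n])
        if pair == edge:
            return +1
        if pair == (edge[1], edge[0]):
            return -1
    raise ValueError("edge not on this face")

print(epsilon(("a", "b"), ["a", "b", "c"]))   # +1: agrees with the face
print(epsilon(("c", "b"), ["a", "b", "c"]))   # -1: runs against the face
[/code]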

===========================
Now looking at the first integral of equation (4),
namely ∫_(SL(2,C)^(2(E-L)-V)) dg_ev,
we can explain the exponent 2(E-L)-V by referring back to Rule 1 and Rule 4 together.

Rule 1 says for every internal edge you expect two integrals dg_ev,
where the v stands for either the source or the target vertex of that particular edge e, so g_ev stands for either
g_(es_e) or g_(et_e).

Well, there are L boundary edges, and the total number of edges in the foam is E. So there are E-L internal edges. So Rule 1 would have you expect 2(E-L) integrations dg_ev over SL(2,C).

Rule 4 then adds the provision that at each vertex one integration is redundant and is omitted.
So V being the number of vertices, that means V integrations are dropped. And we are left with
2(E-L) - V.

Intuitively, what those SL(2,C) integrations are doing is working out all the possible gauge transformations that could happen to a given SU(2) label h_ef on an edge e of a face f.
 
  • #70
I see I made a typo error on the page above. It should be ε_ef, not ε_lf.

That's enough parsing of equation (4). It is the central equation of the LQG formulation we're talking about in this thread. Consider it discussed, at least for the time being. The topic question is whether it is the right redefinition of the theory or not. I think it is, and gave some reasons.
marcus said:
As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter." That is one way of stating the QG researchers' goal: a new understanding of space and time, and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among the alternative QGs, LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classical de Sitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939...

So can you think of any reasons to offer why the new formulation is NOT the right way to go? If you gave some arguments against this formulation which then got covered over by my struggling with the main equation, please help by bringing those arguments/signs forward here so we can take a fresh look at them.
 
  • #71
Another sign: LQG defined this way turns out to be a generalized topological quantum field theory (TQFT).

==quote page 2 section III "TQFT on manifolds with defects" ==
...
If C is a two-complex bounded by the (possibly disconnected) graph Γ, then (4) defines a state in H_Γ which satisfies the TQFT composition axioms [27]. Thus the model formulated above defines a generalized TQFT in the sense of Atiyah.
==endquote==

 
  • #72
Continuing to hit the key points of http://arxiv.org/abs/1010.1939
The Hilbert space H_Γ of LQG is essentially the square-integrable complex-valued functions on the L-fold Cartesian product SU(2)^L.
Now, a generic L-tuple of SU(2) elements is what I was writing h. And equation (4) defines a function Z_C of h.

The spin networks form a basis for the space of quantum states H_Γ. To have a sufficient understanding of the subject matter, I should be able to write any spin network also as a function of h. See equation (15) on page 3 of the paper. I'll try typing what a spin network
{Γ, j_l, i_n : l = 1,...,L and n = 1,...,N}
looks like as a complex-valued function of h.

Here it is (following equation 15)

⟨⊗_l d_(j_l) D^(j_l)(h_l) | ⊗_n i_n⟩_Γ

"where Djl (hl) is the Wigner matrix in the spin-j representation and ⟨·|·⟩Γ indicates the pattern of index contraction between the indices of the matrix elements and those of the intertwiners given by the structure of the graph. A G-intertwiner, where G is a Lie group, is an element of a (fixed) basis of the G-invariant subspace of the tensor product ⊗lHjl of irreducible G-representations —here those associated to the links l bounded by n. Since the Area is the SU2 Casimir, the spin jl is easily recognized as the Area quantum number and in is the Volume quantum number."

 
  • #73
I've listed ten* indications that the current LQG formulation is the right one. No one seems able to provide countervailing evidence.

I also get the impression that the LQG research community has swung over to the new version, or if not entirely, is at least not putting up much resistance (e.g. look at the makeup of the QG school that starts one month from now at Zakopane).

https://www.physicsforums.com/showthread.php?p=3110549#post3110549

*see posts #70 and #71

=============================
Hi Atyy, thanks for your opinion!

The indication of a de Sitter universe is just that, an indication. Physicists are always doing calculations to first-order approximation and then gradually improving the accuracy. It's great they got de Sitter at first order. The day is young on that one. :biggrin:

I don't see how you can say "probably" divergent. Are you such a great expert that you can put probability measures on the future of research? The arguments in the literature are that the theory is NOT UV divergent. As Tom has said, the prospect of IR divergence doesn't worry him much. It's a common ailment that other theories have learned to live with.

It's not a high priority to address the IR divergence issue, I think. But ways to fix that have been proposed as well. Someone will get around to studying that eventually.

=====================

Meanwhile, Atyy, doesn't it seem as if the string community is casting around for 4D/nonstring alternatives?

Horava's 4D skew gravity
Verlinde's kinky polymer vision of entropic gravity
Nima's quantum polytopes (his Pirsa talk was about scattering but he hinted at work on gravity in progress)

It wouldn't surprise me if Nima comes up with something on quantum polytope geometry/gravity that is 4D, non-supersymmetric, and looks like a cousin of the Rovelli and Rivasseau reformulations of LQG and GFT, where quantum polytopes have been coming up frequently as well!
==================

Careful, your information is out of date. There has been an abrupt increase of interest, research activity, and number of researchers just in the past 3 years. Also, the formulation has changed radically. You may not know what is going on because you are interested in your own ideas and wish to dismiss the real world of QG research.
==================

Atyy, that's interesting! What is the "X" divergence (your name for it). I need a page and paragraph reference so I can see what you are quoting of Rovelli in context. Eyes get tired scanning over page after page looking for quotes. Point me to it and I will be glad to look!
 
  • #74
It is based on probably divergent series, and the indication of a de Sitter universe removes the higher order terms by ignoring them.
 
  • #75
marcus said:
I've listed ten* indications that the current LQG formulation is the right one. No one seems able to provide countervailing evidence.
I think it is more accurate to say that nobody really cares anymore after 25 years.
 
  • #76
I say probably divergent because Rovelli says so.

There are 3 sorts of divergences in Rovelli's classification.

1) UV - not present
2) IR - present but not a problem
3) X (my nomenclature) - probably present, and probably a problem.
 
  • #77
atyy said:
I say probably divergent because Rovelli says so.

3) X (my nomenclature) - probably present, and probably a problem.

I asked for a page reference in my initial response https://www.physicsforums.com/showpost.php?p=3111122&postcount=73 to this post, and you have not offered one.
I assume this is because you cannot find anywhere that Rovelli says "probably present and probably a problem" about some kind of divergence.

So far, if we cannot get a handle on it and discuss it, this "X" is just a mystifying "Atyyism" :smile:
Please give some concrete substance to your comment!
 
  • #78
marcus said:
I asked for a page reference in my initial response https://www.physicsforums.com/showpost.php?p=3111122&postcount=73 to this post, and you have not offered one.
I assume this is because you cannot find anywhere that Rovelli says "probably present and probably a problem" about some kind of divergence.

So far, if we cannot get a handle on it and discuss it, this "X" is just a mystifying "Atyyism" :smile:
Please give some concrete substance to your comment!


Please quote the page request explicitly.
 
  • #79
marcus said:
I've listed ten* indications that the current LQG formulation is the right one. No one seems able to provide countervailing evidence.

I also get the impression that the LQG research community has swung over to the new version, or if not entirely yet is not putting up much resistance. (e.g. look at the makeup of the QG school that starts one month from now at Zakopane.)

https://www.physicsforums.com/showthread.php?p=3110549#post3110549

*see posts #70 and #71

atyy said:
I say probably divergent because Rovelli says so.

There are 3 sorts of divergences in Rovelli's classification.

1) UV - not present
2) IR - present but not a problem
3) X (my nomenclature) - probably present, and probably a problem.

marcus said:
Atyy, that's interesting! What is the "X" divergence (your name for it). I need a page and paragraph reference so I can see what you are quoting of Rovelli in context. Eyes get tired scanning over page after page looking for quotes. Point me to it and I will be glad to look!

marcus said:
I asked for a page reference in my initial response https://www.physicsforums.com/showpost.php?p=3111122&postcount=73 to this post, and you have not offered one.
I assume this is because you cannot find anywhere that Rovelli says "probably present and probably a problem" about some kind of divergence.

So far, if we cannot get a handle on it and discuss it, this "X" is just a mystifying "Atyyism" :smile:
Please give some concrete substance to your comment!

atyy said:
Please quote the page request explicitly.

OK, done. I can't tell whether you are just playing games or whether you are really confused about a type of very large-scale (cosmological) divergence that R. mentioned.

If I knew exactly what you meant by "X" divergence, maybe I could help clarify.
 
  • #80
The request appears to be after my post mentioning X, not before.
 
  • #81
atyy said:
The request appears to be after my post mentioning X, not before.

I've asked you for page refs several times. It's an ongoing problem. Not giving a pointer can (in some people) be associated with inaccurate paraphrase, or quotes out of context that seem to mean something else. You must surely be aware of this. In this case I did ask for a specific pointer AFTER your comment about the "X" divergence.

Let's not quibble over trivia. I'm interested to know what you think this X is that Rovelli says is "probably divergent and probably a problem". Or, if he actually did not say that, then what is this X that YOU think is probable and probably a problem?

I'm interested to know! It could be a type of divergence which might arise if you include the whole universe (with no cosmological event horizon) in the analysis. So if the universe is infinite you get bigger and bigger spin networks, growing in size without limit. That would be interesting to discuss, and to think of how it might be handled. But since you don't say what you mean by "X", I am unable to be sure what you think is a problem! :smile:
 
  • #82
marcus said:
I've asked you for page refs several times. It's an ongoing problem. Not giving pointer can (in some people) be associated with inaccurate paraphrase or quotes out of context that seem to mean something else. You must surely be aware of this. In this case I did ask for specific pointer AFTER your comment about "X" divergence.

Good. And it appeared in a post preceding my mention of X. That's ok. But in that case, if I don't provide the page reference, it's because I haven't seen it, not because it doesn't exist.

http://arxiv.org/abs/1010.1939 p6

UV "There are no ultraviolet divergences, be cause there are no trans-Planckian degrees of freedom.

IR "However, there are potential large-volume divergences, coming from the sum over j"

X "The second source of divergences is given by the limit (26)."
 
  • #83
To keep on track, since we have a new page, I will copy the "business part" of my last substantive post.
==quote==
As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter." That is one way of stating the QG researchers' goal: a new understanding of space and time, and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among the alternative QGs, LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classical de Sitter universe.
  • LQG defined this way turns out to be a generalized topological quantum field theory (see TQFT axioms introduced by Atiyah).
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.
...
...
[To expand on the point that the 1010.1939 form] "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in a fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h),
where C is a two-complex with L boundary or "surface" edges, and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. ...

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as giving a quantum probability, a transition amplitude:

Z_roadmap(boundary conditions)

==endquote==
==quote http://arxiv.org/abs/1010.1939 page 2 section III "TQFT on manifolds with defects" ==
...
If C is a two-complex bounded by the (possibly disconnected) graph Γ, then (4) defines a state in H_Γ which satisfies the TQFT composition axioms [27]. Thus the model formulated above defines a generalized TQFT in the sense of Atiyah.
==endquote==

 
  • #84
atyy said:
...
X "The second source of divergences is given by the limit (26)."

That problem goes away if the universe you are modeling has a finite size.
Would you like to have that explained?
 
  • #85
marcus said:
That problem goes away if the universe you are modeling has a finite size.
Would you like to have that explained?

Sure.

Rovelli says that for the IR divergence, but not for X.

IR "This is consistent with the fact that q-deformed amplitudes are suppressed for large spins, correspondingly to the fact that the presence of a cosmological constant sets a maximal distance and effectively puts the system in a box"."

X "Less is known in this regard, but it is tempting to conjecture that this sum could be regularized by the quantum deformation as well."
 
  • #86
atyy said:
marcus said: That problem goes away if the universe you are modeling has a finite size. Would you like to have that explained?

Sure.

We don't have to speculate about "quantum deformation". Sure, R. mentioned it, and it is interesting to think how it might affect the picture. But (26) is already not a problem if the U simply has finite size.

That is because LQG effectively has a UV cutoff. There is a limit to how fine the resolution can be, how small you can measure. The "cell size" does not shrink below some scale.

(26) is about considering larger and larger foams, ordered by inclusion. A finite U implies that process must terminate. So the limit exists. That's all I was saying.
 
  • #87
marcus said:
We don't have to speculate about "quantum deformation". Sure, R. mentioned it, and it is interesting to think how it might affect the picture. But (26) is already not a problem if the U simply has finite size.

That is because LQG effectively has a UV cutoff. There is a limit to how fine the resolution can be, how small you can measure. The "cell size" does not shrink below some scale.

(26) is about considering larger and larger foams, ordered by inclusion. A finite U implies that process must terminate. So the limit exists. That's all I was saying.



Then how can "summing = refining"?

http://arxiv.org/abs/1010.5437
 
  • #88
atyy said:
Then how can "summing = refining"?

http://arxiv.org/abs/1010.5437

Please say explicitly what you think the problem with that is.

You may be confused by the words. "Refining" here does not have a metric scale connotation. All it can mean is to add more cells to the complex.

You have to look directly at the math. What the objects are and how the limits are defined.
You can't just go impressionistically/vaguely by the words. I don't know what your source of confusion is; I can only guess---unless you spell out what you are thinking.

But I know that there is no inconsistency between the two types of limit, as defined.
On the one hand summing over cell-complexes and on the other hand taking a cell complex and adding more and more cells to it.

Really it's fine! :smile:
 
  • #89
I'm taking issue with your interpretation that summing = size of the universe.

So a bigger and bigger universe means more and more refining?

The basic result in the summing=refining paper is: "We have observed that under certain general conditions, if this limit exists, it can equally be expressed as the sum over foams, by simply restricting the amplitudes to those with nontrivial spins."

Are you saying this limit exists in a finite universe?
 
  • #90
atyy said:
I'm taking issue with your interpretation that summing = size of the universe.

So a bigger and bigger universe means more and more refining?

Forget the words, Atyy; look at the actual math, which is the meaning of the "s=r" paper.

In what I said, the U has a finite size. So don't be talking about a bigger and bigger U.
The U has some size: say roughly a hypersphere with radius of curvature 100 Gly (a NASA WMAP lower-bound estimate from around 2007, as I recall).

Say you start with a dipole spin network like this ([]) labeled to agree with that 100 Gly
(you've surely seen that dipole graph before in R papers, better drawn)
and you start refining. That means adding nodes and links

for the next twenty gazillion years, adding complexity to the graph DOES in fact correspond to the intuitive idea of refining.

But then the process has to terminate, because you get down to where every node has the minimum volume and every link has the minimum area.

You run into the finite resolution barrier. Smaller is meaningless.

Better to actually look at what the math says than take issue with the words.

Could you be being a wee bit suspicious? and thinking everybody is trying to fool you because you don't understand something? :smile: Take it easy. That X is a non-problem, pragmatically speaking.
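To put a rough number on that barrier: using the smallest eigenvalue in the LQG area spectrum, a sphere of the size I've been mentioning carries a huge but finite number of area quanta. A back-of-envelope Python sketch (illustrative only; I use the γ ≈ 0.274 value quoted earlier in the thread and the standard minimal-area formula 4√3 πγ l_P²):

[code]
import math

l_P = 1.616e-35      # Planck length in metres
gamma = 0.274        # Immirzi parameter value quoted earlier in this thread
R = 100 * 9.461e24   # 100 Gly expressed in metres

area_gap = 4 * math.sqrt(3) * math.pi * gamma * l_P**2  # smallest area quantum
sphere_area = 4 * math.pi * R**2
print(f"max area quanta ~ {sphere_area / area_gap:.1e}")  # ~7e123: huge but finite
[/code]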
 
  • #91
atyy said:
Are you saying this limit exists in a finite universe?

Abstract math does not work in some given universe. The limit is an interesting abstract question.
Pragmatically, sure. Pragmatically it is a non-problem. In that case.
 
  • #92
So we fix the boundary, as is done in the summing=refining paper. Your argument is that for a fixed boundary the summing is finite; since refining is summing, refining is finite. I don't see that. I think it does mean that summing is a sum over discrete terms, but not necessarily over a finite number of terms: "To remove the dependence on C, two options can be envisaged: infinitely refining C, and summing over C. Since the set of foams is discrete, the latter option is easy to define in principle, at least if one disregards convergence issues." http://arxiv.org/abs/1010.5437, p. 2
 
  • #93
Atyy, we have company this afternoon and evening. I won't be able to answer. Your question is making sense to me and I will need a quiet moment to think about it before replying.
 
  • #94
Enjoy your company. My answer: this is where GFT renormalization must come in.
 
  • #96
Thanks for the pointers to relevant research. I will take a look later today. From the standpoint of abstract math there is no reason to assume the U is finite, and it seems ugly to have to appeal to that assumption as a crutch. The question of whether a certain sequence converges is intrinsically interesting!

My observation is practical and non-mathematical, in a sense. IF the universe has finite spatial volume (which we don't know), then it only makes physical sense to consider spin networks with up to N nodes, for some large finite N.

So the whole business of taking limits with more and more nodes is moot (from a physical perspective.)

A somewhat similar observation may apply in the case where we have accelerating expansion (as in a de Sitter U or an approximately de Sitter one), because then there is a cosmological event horizon. One is in a de facto finite situation. I say MAY apply; I haven't seen that worked out. I feel more confident simply considering the finite-U case.

And I'm of course glad if some of the young researchers like the guy you mentioned, Perini, are working on the abstract convergence problem of the "X" sort you mentioned, where you don't assume a finite universe. It will be great if they get a result! And they may, as you suspect, bring GFT method to bear on it.
 
  • #97
OK, it's fine if we fix a spatial boundary at this stage of the game. What I don't understand then is that I thought LQG has no preferred foliation. And if in LQC there is the forever bouncing universe, then it must be unbounded in time. So what if we took the foliation that way, wouldn't we get a different answer. Or does that mean that there is a preferred foliation? Or are there only a finite number of bounces? (actually I don't believe in the bounce for spinfoams - I think Rovelli is hoping for an outcome like CDT - after performing the full sum - not just the first term - he recovers a finite classical universe - to be fair - CDT has not even discretized down to the Planck scale yet)
 
  • #98
atyy said:
OK, it's fine if we fix a spatial boundary at this stage of the game. What I don't understand then is that I thought LQG has no preferred foliation. And if in LQC there is the forever bouncing universe, then it must be unbounded in time. So what if we took the foliation that way, wouldn't we get a different answer. Or does that mean that there is a preferred foliation? Or are there only a finite number of bounces? (actually I don't believe in the bounce for spinfoams - I think Rovelli is hoping for an outcome like CDT - after performing the full sum - not just the first term - he recovers a finite classical universe - to be fair - CDT has not even discretized down to the Planck scale yet)

The bounce resolution of the BB singularity is a surprising RESULT that first appeared around 2001 under simplifying assumptions. Since then it has proven rather robust in the sense that they keep improving the theory, and changing the assumptions, and removing restrictions, and running the model over and over, and they keep getting a bounce.

They don't get "forever bouncing". That is not robust. You can, for example, choose parameters where you just get one bounce (where the BB was). You can't say too much about the prior contracting phase. The theory is not "omniscient"; it is just a gradual incremental extension that resolves the singularity in one possible way it could be resolved.

It doesn't say if you get just one bounce, or a finite number, or an infinite number (that depends on choices and cases). It just resolves the one singularity we know about. In a possibly testable way (some phenomenologists think).
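For concreteness, here is a toy Python check of the kind of bounce LQC produces (a sketch in rescaled units: the effective Friedmann equation H² = ρ(1 − ρ/ρ_c) for a massless scalar, and the solution a(t) = (1 + 9t²)^(1/6) used below, are a commonly quoted simplified form, shown purely as an illustration):

[code]
import numpy as np

# Effective LQC Friedmann equation in units where 8*pi*G/3 = rho_c = 1,
# with a massless scalar field, so rho = a^-6.
t = np.linspace(-2, 2, 9)
a = (1 + 9 * t**2) ** (1 / 6)    # candidate bouncing solution
rho = a ** -6.0
H = 3 * t / (1 + 9 * t**2)       # adot/a computed from the solution above

print(np.allclose(H**2, rho * (1 - rho)))  # True: the equation is satisfied
print(f"a at t = 0: {a[4]:.3f}")           # 1.000 -- a bounce, not a singularity
[/code]

The (1 − ρ/ρ_c) correction factor is what shuts the collapse off: H goes to zero exactly when ρ reaches the critical density, and the contracting branch joins smoothly onto an expanding one.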

There is more to talk about, in what you say. But I am going to get coffee and straighten up the house a little. Yesterday was fun, in fact, thanks for your good wishes!

============================
Incomplete partial reply to your next post. Equation (26), the topic of our discussion, has a two-complex with a boundary graph. But the graph is not labeled with area and volume labels; it is not a spin network, so there is no limit on growth in the picture. One could keep adding nodes forever. So it is not the same as modeling a finite-volume universe. Or so it seems to me---as you well know, I'm just an interested observer of the QG research scene, no expert! I'll get back to this later this morning. This is interesting.
 
  • #99
Also, why isn't a finite universe the same as assuming a spinfoam boundary?
 
  • #100
Atyy, I like your way of putting the three sorts of possible divergence.

atyy said:
...
1) UV - not present
2) IR - present but not a problem
3) X (my nomenclature) - probably present, and probably a problem.

As I've said, I don't think of your X as a practical problem at all, just an interesting abstract math one that you get when you consider a possibly infinite universe. But your pointer to it has gotten me to read more thoroughly in that Rovelli Smerlak October paper which deals with type X concerns.

As you described it the X question comes up around equation (26) of 1010.1939.
It is helpfully clarified by the Rovelli Smerlak paper, so I'll give the link
http://arxiv.org/abs/1010.5437

Notice that (26) does not have a spin network in it, or a spinfoam. So one cannot implement the idea of a finite universe in the context of (26). There is nothing to keep one from adding cells to the complex forever.
It is more in the abstract math department. An interesting but not urgent question, as I see it.

What your question just now makes me wonder is: how would one implement the idea of surrounding a cell-complex C with a boundary that you can't stretch? Surrounding it with a fixed labeled spin network, so that refinement is forced to terminate eventually?

The researchers do not seem to have considered that. Maybe it is a useless problem from their perspective. Perhaps I am missing something and my question is based on misunderstanding. I am trying to think about that while I do the evening chores. Hope to be able to say more later.
 