# Spin networks in Loop Quantum Gravity

marcus
LQG is based on a smooth compact manifold M.
The configuration space is A, the space of connections on M:
by choice here we use SO(3) connections----the smooth 1-forms on M valued in the Lie algebra of rotations.

The connections represent all the possible configurations of gravity, or curvature, on the manifold-----in other words the possible geometries. There is no fixed prior choice of metric.

The quantum state space of the theory is a linear space L consisting of complex-valued functions on A. We are following the notation in the Rovelli-Upadhya LQG primer.
A class of "cylindrical functions" is defined, spanning L, and using these functions an inner product is defined, so that we have a Hilbert space.

Labeled networks enter here as a way of arriving at a basis for the Hilbert space. The set of "cylindrical functions" is highly redundant. They are very simple to define but there is a lot of overlap and the set is not linearly independent. To get a linearly independent spanning set of functions we have to be more methodical and selective. So to begin this thread I am going to describe a simplified version of labeled networks.

Without loss of generality, the networks can be taken to be trivalent---three legs meeting at each node. A node where more than 3 legs meet can always be broken down into a kind of "traffic circle" of tee-joint, or trivalent, nodes.

Gold Member
Dearly Missed
Rovelli-Smolin's original 1995 paper "Spin Networks and Quantum Gravity" narrows things down to trivalent graphs on page 12, saying "As defined by Penrose, a spin network is a trivalent graph, &Gamma;, in which the links l are labeled with positive integers....'the color'....such that the sum of the colors of three links adjacent to a node is even and none of them is larger than the sum of the other two..."
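The two conditions in that quoted definition are mechanical to check. A minimal sketch in Python (the function name is my own, not from the paper):

```python
def admissible(p, q, r):
    """Penrose's two conditions on the colours at a trivalent node:
    the sum p+q+r is even, and no colour is larger than the sum of
    the other two."""
    return (p + q + r) % 2 == 0 and 2 * max(p, q, r) <= p + q + r
```

For instance, colours (1, 1, 2) pass, while (1, 1, 1) fails on parity and (1, 1, 4) fails the triangle-like inequality.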

This sounds arcane at first and takes getting used to, but after a while it gets to seem reasonable enough----gets intuitive in fact. For LQG, the authors use Penrose's trivalent graphs but simply have them embedded in the manifold, so that the legs of the graph are paths in M and any connection A can do parallel transport along such a path--resulting in a rotation.

The thing to keep in mind is that the network is a machine for deriving a number from a connection. We have to be able to define functions on the space of connections. These functions will constitute a basis and allow us to define operators like the area operator. The physical meaning of the graph is less important than its use in an efficient strategy to define a basis of the function space. After one has a basis one can take linear combinations and describe whatever states one pleases.

So we are free to pick absolutely any representation of the rotation group we please. This is a key point, and it offers a hope of simplifying the description.

The numbers or "colors" labeling the legs of the graph correspond to a set of irreducible representations of SO(3) chosen in advance----for no other purpose than to get numbers.
The irred. reps are unique up to isomorphism anyway, so it does not matter how they are defined.

Rovelli's abstract treatment allows us complete latitude in the choice; it merely says "choose some". So let us do just that and say for each non-negative even integer m, Vm is the complex vector space of homogeneous polynomials of degree m in two variables. Each such m corresponds to an irred. rep of SO(3) and every irred. rep of SO(3) is isomorphic to one on this list.

The puzzle now, for me and anyone who wishes to help me understand labeled networks in LQG better, is to see how to evaluate such a state on connections. We take a connection A, run its parallel transport on each leg of the graph (there is some orientation, so one knows what start and finish are), and get a rotation. Then if the leg is labeled m, one applies the representation and comes up with a linear operator: Vm ---> Vm
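To make the "apply the representation" step concrete, here is a sketch of the matrix of a 2x2 group element acting on Vm in the monomial basis x^m, x^(m-1)y, ..., y^m. The substitution convention (x, y) -> (ax+by, cx+dy) and the function name are my own choices, and which 2x2 matrix one feeds in (e.g. an SU(2) element covering a rotation) is left open:

```python
from math import comb

def rep_matrix(g, m):
    """Matrix, in the monomial basis of V_m, of the substitution
    x -> a x + b y,  y -> c x + d y  acting on homogeneous
    polynomials of degree m in (x, y), where g = [[a, b], [c, d]].
    The image of x^(m-k) y^k is expanded by the binomial theorem."""
    (a, b), (c, d) = g
    n = m + 1                       # dim V_m = m + 1
    M = [[0] * n for _ in range(n)]
    for k in range(n):              # basis monomial x^(m-k) y^k
        for i in range(m - k + 1):
            for j in range(k + 1):
                coeff = (comb(m - k, i) * a**(m - k - i) * b**i *
                         comb(k, j) * c**(k - j) * d**j)
                M[i + j][k] += coeff  # lands on x^(m-(i+j)) y^(i+j)
    return M
```

For m = 1 this recovers (the transpose of) g itself, and the identity matrix acts as the identity on V_m for every m.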

Others may discover that the business of evaluating the labeled network on a connection proceeds more conveniently if they take a different but equivalent realization of Vm, or use a purely abstract (m+1)-dimensional vector space over the complex numbers.

I want to try to see what goes on if Vm is the homogeneous polynomials of degree m, since that is one common way of realizing the irreducible representations of the group. If it doesn't work out I will back up and try an approach more like what Hurkyl was using earlier.

Hurkyl
If I understand the definitions correctly, I can rewrite a lot of that into elementary terms, but I don't know how well it corresponds to the actual physical notions they're supposed to represent.

Suppose we have drawn a graph in a three dimensional space. We create a basis element as follows:

For each link in the graph, choose some n-dimensional real vector space R^n and identify SO(3) with some subgroup of GL(n, R). Also, choose a basis for the invariant subspace of R^n, and at each node incident with the link place one of those basis vectors. (They may be the same.)

To evaluate a basis element at a given connection, the procedure is as follows.

For each link of the graph, we have two corresponding n-vectors, one at each node incident with the link. Parallel transport the vector at the first node over to the second node and take their dot product. (This will be the same whichever we call first and second) Then, multiply all of the dot products.

Define a spin network to be a linear combination of these basis elements. Evaluate the spin network at a particular connection by defining evaluation to be a linear operation.

(There is a slight glitch in the above; they aren't quite basis elements because I've allowed reselection of the bases of the invariant subspaces. If you restrict this ability, then the basis elements should form an actual basis. I'm not sure if this matters at all because evaluation will still be well-defined)
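The evaluation procedure above can be sketched numerically. Everything here is hypothetical scaffolding: the connection enters only through the matrix it assigns to each link, and the dot-product symmetry mentioned above holds when those matrices are orthogonal (rotations):

```python
def evaluate_basis_element(links):
    """Evaluate one basis element at a connection.  Each link is a
    triple (H, v, w): H is the matrix that parallel transport along
    the link produces, v the vector placed at the first node, w the
    vector at the second.  Transport v, dot with w, then multiply
    the dot products over all links."""
    total = 1.0
    for H, v, w in links:
        Hv = [sum(H[i][j] * v[j] for j in range(len(v)))
              for i in range(len(H))]
        total *= sum(x * y for x, y in zip(Hv, w))
    return total
```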

marcus
Thanks for the light shed; my last post is obfuscated. If you had not replied in a way that cuts through some of the underbrush, I believe I would have deleted it.

It seems that Penrose labeled networks can be used without any higher-order group representation ceremony----one just labels the legs with positive integers ("colours" in Penrose's terminology).

And when it comes time to turn a network into loops, one arranges to move along each leg however many times the label says. At the nodes there is a procedure for "routing" which is a bit like a freeway interchange.

And in the end by some miracle like Euler's Bridges of Koenigsberg or the divine intervention of Knot Theory, one manages to loop around and around and pass along each leg the number of times specified by its color.

The only thing we need anything remotely like group representations for is simply to have a chosen basis so that the SO(3) elements that emerge as we traverse each leg can appear as matrices.

That is so we can multiply the whole string of them (as in your example) and take the trace.

If I remember correctly, loops where you run parallel transport with the connection and then take the trace are sometimes called "Wilson" loops. Forkloads of loops---in humongous linear combination---are called "multiloops".

They span the state space, though there are too many to be linearly independent. They seem to be a very sensible way of reading a connection---a reasonable kind of function to define on the configuration space.
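A Wilson loop is easy to sketch numerically if one pretends, purely for illustration, that the connection hands us a finite list of holonomy matrices along the pieces of the loop:

```python
def wilson_loop(holonomies):
    """'Transport around the loop and take the trace': multiply the
    holonomy matrices met while going once around, then trace."""
    n = len(holonomies[0])
    prod = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for H in holonomies:
        prod = [[sum(prod[i][k] * H[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    return sum(prod[i][i] for i in range(n))
```

For example, four quarter-turns in the plane multiply to the identity, so that loop evaluates to the trace of the 2x2 identity.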

Penrose, it seems, didn't always think of his networks as "spin" networks---if he divided the "colour" label by 2, then he called it a "spin". But that is just a convention of labeling with integers or sometimes half-integers. I've been reading gr-qc/9602023 (by Roberto De Pietri coauthoring with Rovelli) and they get the spectrum of the area operator and develop the theory and calculate quite a bunch without any group representation business or higher-order tensors and such.

They use "Reidemeister moves" out of knot theory and refer to inventions of Vaughan Jones. They draw somewhat better pictures than average. I came across the paper because I saw other papers referring to it when they wanted something nitty-gritty, like actually proving linear independence of the labeled network basis or actually calculating eigenvalues.

So here is DePietri/Rovelli calculating with diagrams in a purely combinatorial way and getting answers----and it corresponds
on another level to a parade of tensors festooned with irreducible representations.

It looks like with DePietri/Rovelli approach you just have to multiply a string of rotation matrices and take the trace.

Anyway, this is my impression so far from gr-qc/9602023,
and your finding simpler ways to do it seems very much in line with this.

Hurkyl
Incidentally, I think I have the significance of irreducibility wrong in my simplification; there are two things the tensor product of representations can mean, and I think I picked the wrong one.

marcus
Originally posted by Hurkyl
Incidentally, I think I have the significance of irreducibility wrong in my simplification; there are two things the tensor product of representations can mean, and I think I picked the wrong one.

If so it's like a path integral where one tries all the ways of getting from A to B.

I am beginning to see more clearly that there are two very distinct developments in LQG (which deep thinkers may have discovered to be equivalent in some sense, as so often happens).

One is combinatorial and based on simple loops. It doesn't require group representation and tensor calculus to define the orthonormal basis of the states because it works openly at a combinatorial level.

There is (you realize better than I, I imagine) a lot of combinatorics *clothed* in tensor calculus. So the other approach has the necessary dirtywork done for it automatically by sophisticated tensorial instruments---or hopefully done automatically.

DePietri/Rovelli takes the combinatorial approach and, as if to advertise this, they begin by quoting Roger Penrose:

"My own view is that ultimately physical laws should find their most natural expression in terms of essentially combinatorial principles, that is to say, in terms of finite processes such as counting.....Thus....some form of discrete or combinatorial space time should emerge."

And they make the point that Penrose did not label his "spin networks" with group representations but with simple NUMBERS which he sometimes called colours and sometimes, after dividing them by two, called spins. They point out that Penrose actually vacillated between these terminologies----the confusion began with him.

A coloured network, if you divide all the labels by two, becomes a "spin" network. And that name caught on with physicists. But when you actually start routing loops thru the network and sorting the freeway lanes out at the nodes, it is the colour that is the operative number. That is how many lanes there are!

The group-representation connotation of calling the labels (divided by two) "spins" is not essential to the graph.

The main thing is to see how to turn the coloured trivalent graph into a collection of loops.

A loop is an immediate primitive intuitive function on the config space of connections-----transport around the loop and do trace!

So combinations of loops or multiloops were the original basis for LQG, as the name itself illustrates.

The Penrose colored networks are a *combinatorial* means of eliminating redundancy and getting an orthonormal basis of multiloops.

DePietri/Rovelli offer a Mathematica program in their paper---a program they wrote to do loop calculations; it is on page 24 of the paper. Apparently they put the program online. They are into combinatorial calculations. Also they construct the basis to actually be (not just orthogonal but) orthonormal.

Interesting difference in style between Rovelli by himself (LivingReviews) and Rovelli with other people. Get a different picture of the theory from reading him mixed with Upadhya and him mixed with DePietri.

marcus
Hurkyl, I have been drawing pictures of trivalent "freeway" intersections and it is really true!

As long as the numbers of lanes p, q, r add up to an even number, and no one of them exceeds the sum of the other two, one can easily do the routing through the node.

I find this fact very comforting.

I imagine it was immediately obvious to you, but I had to draw pictures to assure myself of it.
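The lane-routing fact can also be checked symbolically. With colours p, q, r, the number of strands turning between each pair of legs comes out as half of (q+r-p), and so on; the formula and the names below are my reconstruction, not taken from the papers:

```python
def route_trivalent(p, q, r):
    """Routing at a trivalent node with lane counts p, q, r.
    Returns the strand counts joining each pair of legs, or None if
    the node is inadmissible (odd total, or one colour exceeding the
    sum of the other two)."""
    if (p + q + r) % 2 != 0:
        return None
    a = (q + r - p) // 2   # strands turning between legs q and r
    b = (p + r - q) // 2   # strands turning between legs p and r
    c = (p + q - r) // 2   # strands turning between legs p and q
    if min(a, b, c) < 0:
        return None
    # sanity check: leg p carries b + c = p lanes, and similarly for q, r
    return a, b, c
```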

I like these "ribbon" diagrams or "ribbon and circle" maps that let you draw the routing. And each one, b'gorra, is tantamount to a quantum state of gravity since it defines a function on the connections.

have to go but will get back to this later today

jeff
Originally posted by Hurkyl
Incidentally, I think I have the significance of irreducibility wrong in my simplification; there are two things the tensor product of representations can mean, and I think I picked the wrong one.

By an IR (irreducible representation), we mean a rep that acts irreducibly on the space of states, that is, there is no proper subspace of states which transform only into linear combinations of each other. We say that there is no proper invariant subspace of states. Equivalently, the matrices of the rep can't be decomposed into a nontrivial block-diagonal form. Here, the IR's of the gauge group are classified by spin.

Hurkyl
My problem was that I was going by the wrong meaning of the tensor product of representations. (which led to me having the ramifications of irreducibility incorrect, at least in my intuition)

With V and W both representations of G, I was mentally applying elements of G only to one half of the tensor product V&otimes;W, as if V and W were representations of different groups G and H (which just happened to be isomorphic), rather than the definition intended for the situation at hand, where G is supposed to be applied to each factor of the product.

Anyways, the thing that has been bugging me most is that we're working with n-dimensional spaces for arbitrary n... but the whole result is supposed to be diffeomorphism invariant, is it not? How would, say, a 4 dimensional vector transform under an arbitrary smooth coordinate transformation of a 3 dimensional manifold?!?!

marcus
Our worrying about the irreducible representations of SO(3) or SU(2) may be just a tempest in a teapot that goes back to Penrose having called his colored networks by the confusing name of "spin" networks. Some of the time he used integer labels called colours, and other times he divided them by two and called them "spins".

DePietri/Rovelli mention this vacillation on Penrose's part. I find that paper exceptionally clear about the combinatorial aspect of things. Here is my take on it---I hope not too far from yours as well!

Somewhere we all got this fixed idea in our heads that the legs of the graph have to be labeled with group representations, but it isn't really so. It wouldn't mean anything physically and there is no need for it.

The theory is based on loops-----continuous piecewise smooth maps of S^1 into M (the circle into the manifold).
Any loop defines a quantum state---a function defined on the connections---by running transport, getting a matrix, and saying trace.

The label on a network leg is basically a number of freeway lanes. DePietri/Rovelli describe a way of untangling the loops out of a network. Once it comes down to loops, you just evaluate them as loops.

arXiv:gr-qc/9602023

I am not ready to consider diffeomorphism equivalent classes of labeled networks, or even of loops. I know we will come to that but now I just want to get as clear as I can about these state functions defined on the space of connections.

DePietri/Rovelli definition of "spin network" has no group rep stuff in it. It simply says:

"A spin network S is given by a graph &Gamma;S in M, and by a compatible coloring {pS} of the associated oriented trivalent virtual graph...."

the coloring is just number labels on the legs so that at each node the sum of the three numbers is even and no one of them exceeds the sum of the other two.

These two "compatible coloring" conditions let you peel it apart and untangle it to get loops---strands of Mozzarella. Sometimes Rovelli has called these multistrand legs "ropes" because of the multiple parallel strands.

So then a "spin network" is just a pair (&Gamma;S, {pS}) consisting of a graph and a coloring of the associated trivalent graph.

they go through some choices and details at this point, around page 10, and describe the state function defined by the network.
There is a Vaughan Jones operator from knot theory but it is easy to visualize---an easy special case of something more complicated.

I now think I made a false step by trying to summarize the development in Rovelli/Upadhya---because it is unnecessarily abstract. I want to get closer to the original loop-based development.

suppertime, must go but will try to get back later

Hurkyl
Hrm.

I don't like that paper as much. After reading the first one, I came away feeling like I had an idea what was going on. The definitions of an irreducible representation and invariant subspaces of tensor products of representations are fairly straightforward, so while I don't really know how to use them yet, I know what they are. While I couldn't follow all the details, much of the proof of the quantization of area made sense. Some steps were even, dare I say, obvious.

I came away from this new paper with virtually zero understanding. I like combinatorial things, but upon first reading they didn't do a very good job of separating the combinatorial aspect from the messy underlying mathematics... though glancing back, that feeling may go away if I figure out which parts to read and which parts to temporarily ignore. I felt like some of the paper was trivially obvious (such as how to make a ribbon-net out of a graph), but the important stuff was too dense to penetrate.

However, appendix C was very intriguing (graphical representation of tensor products).

However, it did trigger some do-it-myself ideas (which is a good thing, because I learn a lot of things by trying to figure out by myself how to do it, then match what I did to the real theory)... in particular, representing formal linear combinations of loops... but there's still one important question...

What is a loop good for?!?!

marcus
Originally posted by Hurkyl
match what I did to the real theory)... in particular, representing formal linear combinations of loops... but there's still one important question...

What is a loop good for?!?!

I've been reading yet a third (!) paper, this time a non-mathematical "bird's-eye view" by Ashtekar himself giving his intuitive understanding. It is written for mathematicians and the like but presents the ideas without much in the way of formulas. It is a 2001 paper, 24 pages with references.

"Quantum Geometry and Gravity: Recent Advances"

arXiv:gr-qc/0112038

He has a 6-page fairly down-to-earth section on applications---to the big bang and to black holes (entropy, quantum states, horizon area). It is a good intuitive survey of (loop approach to) quantum gravity. Might provide some answers to the basic question "what is a loop good for"

Blearyeyed and blundering as I am, I will give you my take on it too! (But Ashtekar is maybe even more central than Rovelli, and he writes with strong figurative, intuitive language trying to give understanding rather than technical detail. So the paper would be better than anything I could say.)

Ashtekar is the one who changed the variables in GR from the metric to the connections---Ashtekar's so-called "new variables". After that the configuration space is the space of connections.

Loops are the simplest functions on the space of connections. Thus loops are the simplest quantum states of gravity. That is what they are good for.

Networks, huge polymers filling space, with tiny planck-sized segments jointed at nodes, can be built up from simple loops.
These network states are very large superpositions of simple loop states.

Ashtekar says on page 5, talking about a typical quantum state of the space around the reader---

"The state of quantum geometry around you, for example, must have so many elementary excitations that about 10^68 of them intersect the sheet of paper you are reading."

The thing about a loop is, I think, that there is no simpler way to get a number from a connection than running around the loop and doing trace. It is like a wave function describing the position of a particle on a line-----the configuration space is the real axis and the state is a function defined on the reals. In the case of quantum geometry the configuration space is all possible connections and the state is a function defined on them.

Also, happily enough, it turns out that loops allow the area and volume operators to be defined---eigenvalues calculated, etc. I have also seen some quantum geometry work with area and curvature. Must go help make supper; more on this paper later.

jeff
For what it's worth, I was asking around about good papers for you guys to start with, and this one was recommended. I went through it and think it's pretty good. It's clear and uncomplicated and covers just about all the material in Rovelli/Upadhya (and more) but in less sketchy terms. The only thing that's missing is the material in Rovelli/Upadhya appendix B generalizing the area operator spectrum to the case of nodes on the surface whose area is being calculated.


Hurkyl
The densitized triad has come up, I think, in all three papers. It would be good to know what that is.

This last one defines a local triad as:

q_ab(x) = e_a^i(x) e_b^i(x)

I'm trying to figure out what this means.

My best guess is that for each point x on the manifold, e(x) is supposed to be a linear transformation from some representation of so(3) to the tangent space at x. e has the additional property that, when written as a matrix:

q_ab = [e e^T]_ab

Which also gives me the feeling we should really be using a metric on our representation of so(3) to write this:

q_ab(x) = &delta;_ij e_a^i(x) e_b^j(x)
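In other words the guess is q = e e^T at each point. A minimal numeric sketch, treating the triad at one point x as a plain 3x3 array e[a][i] (an illustrative simplification; the name is mine):

```python
def metric_from_triad(e):
    """q_ab = delta_ij e_a^i e_b^j, with the triad at one point
    stored as a 3x3 matrix e[a][i]."""
    return [[sum(e[a][i] * e[b][i] for i in range(3))
             for b in range(3)] for a in range(3)]
```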

marcus
"The last one" is, I guess, Gaul/Rovelli and in particular on pages 7 and 8----equations (1)-(3).

I haven't read all I would need to on this but will see if I can
help...

[I started this reply yesterday, and realized this morning that it was not helpful. So I deleted the rest and hope to have something more sensible to say later.]

Last edited:
Hurkyl
I have this feeling that figuring out what a densitized triad is will help intuit the other things.

But basically, I'm just hoping to be able to follow at least one of these papers from the beginning to the end without having to skip over anything.

jeff
Originally posted by Hurkyl
I have this feeling that figuring out what a densitized triad is will help intuit the other things.

The demonstration that (A^i_a(x), E^a_i(x)) forms a canonical pair on the phase space of GR, where E^a_i(x) &equiv; e e^a_i(x) is a density of weight one (a tensor density of weight k is an object that appears to be a tensor, but under diffeomorphisms comes out as a tensor multiplied by k powers of the jacobian of the coordinate transformation), isn't particularly illuminating, though you'll need to understand the concept of a spin connection. Suffice it to say that in the context of LQG, A^i_a(x) and E^a_i(x) are viewed as a yang-mills connection and its conjugate electric field respectively (though each has a dual geometrical interpretation in GR, these are of no direct importance in LQG). Here's some background on the triads themselves which should help:

Diffeomorphisms x^a → x&prime;^a(x) on 3-dimensional riemannian manifolds act on tensors by V^{a..d} → V&prime;^{a..d} = &part;_e x&prime;^a ... &part;_f x&prime;^d V^{e..f}, in which T^a_b &equiv; &part;_b x&prime;^a &isin; GL(3,R). However, there are no spinorial reps of GL(3,R) to go along with the tensorial ones. This can be seen by noting that since GL(3,R) contains SO(3) as a subgroup, it always gives an SO(3) rep by restriction, but the spinorial reps of SO(3) don't arise from reps of GL(3,R).

This means that to couple spinorial degrees of freedom requires a modified framework in which the matrices T^a_b are replaced by spinorial matrix representations of SO(3), and this is where the triads e^a_i(x) come in. They form an orthonormal basis of tangent vectors labelled by the "internal" index i=1,2,3 in the tangent space at x, and just as we can couple fermions straight away in flat spacetime, we can couple spinors to these "internal" indices in the ("flat") tangent space.

We therefore view the internal index as labelling the generators of the lie algebra of SU(2), with the result that an SU(2) gauge symmetry has been introduced into the theory corresponding to the freedom with which the triads satisfying q_ab(x) = e_a^i(x) e_b^i(x) can be chosen (note also that we infer the summation over the gauge indices from the absence of any gauge indices on the LHS). Of course, the gauge group can be taken as SO(3) - a choice that has been considered more seriously recently in LQG - though from the preceding perspective there would be little point in doing so.

In LQG, these gauge degrees of freedom encode gravitational states.
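The gauge freedom described here, rotating the internal index without changing the metric, can be verified numerically. A sketch (rotation about the 3-axis only; the names are mine):

```python
from math import cos, sin

def q(e):
    # q_ab = sum_i e_a^i e_b^i: the 3-metric built from the triad e[a][i]
    return [[sum(e[a][i] * e[b][i] for i in range(3))
             for b in range(3)] for a in range(3)]

def gauge_rotate(e, theta):
    # rotate the internal index i by an SO(3) rotation about the
    # 3-axis; since R R^T = 1, this leaves q(e) unchanged
    R = [[cos(theta), -sin(theta), 0],
         [sin(theta),  cos(theta), 0],
         [0, 0, 1]]
    return [[sum(e[a][j] * R[j][i] for j in range(3))
             for i in range(3)] for a in range(3)]
```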

Hope this helps.

marcus
Originally posted by Hurkyl
I have this feeling that figuring out what a densitized triad is will help intuit the other things.

But basically, I'm just hoping to be able to follow at least one of these papers from the beginning to the end without having to skip over anything.

I will try to copy in the most condensed description I've seen so far and maybe it can serve as focus. This is Ashtekar talking to mathematicians at a June 2001 conference at Stonybrook. The style is mathly as opposed to physish (fewer indices) so it is easier to copy in. Maybe I will eventually be able to get thru these few paragraphs (!) without skipping. Or at least can point to specific hard spots. I feel somewhat the same as you just said, Hurkyl, and want to focus on some exposition condensed enough to get into one PF post. Also want to see how difficult it is to transcribe Ashtekar's notation.

Quote from Ashtekar "Quantum Geometry in Action...." Section 2

arXiv:math-ph/0202008 (Feb. 2002)

"Let me now turn to specifics. It is perhaps simplest to begin with a Hamiltonian or symplectic description of general relativity. The phase space is the cotangent bundle. The configuration variable is a connection, A on a fixed 3-manifold &Sigma; representing 'space' and (as in gauge theories) the momenta are the 'electric field' 2-forms E, both of which take values in the Lie-algebra of SU(2). In the present gravitational context, the momenta acquire a geometrical significance: their Hodge-duals *E can be naturally interpreted as orthonormal triads (with density weight 1) and determine the dynamical, Riemannian geometry of &Sigma;. Thus, (in contrast to Wheeler's geometrodynamics) the Riemannian structures on &Sigma; are now built from momentum variables. The basic kinematic objects are holonomies of A, which dictate how spinors are parallel transported along curves, and the 2-forms E, which determine the Riemannian metric of &Sigma;. (Matter couplings to gravity have also been studied extensively [2, 1].)

"In the quantum theory, the fundamental excitations of geometry are most conveniently expressed in terms of holonomies [3, 4]. They are thus one-dimensional, polymer-like and, in analogy with gauge theories, can be thought of as 'flux lines of the electric field'. More precisely, they turn out to be flux lines of areas: an elementary flux line deposits a quantum of area on any 2-surface S it intersects. Thus, if quantum geometry were to be excited along just a few flux lines, most surfaces would have zero area and the quantum state would not at all resemble a classical geometry. Semi-classical geometries can result only if a huge number of these elementary excitations are superposed in suitably dense configurations [13, 14]. The state of quantum geometry around you, for example, must have so many elementary excitations that about 10^68 of them intersect the sheet of paper you are reading, to endow it an area of about 100 cm^2. Even in such states, the geometry is still distributional, concentrated on the underlying elementary flux lines; but if suitably coarse-grained, it can be approximated by a smooth metric. Thus, the continuum picture is only an approximation that arises from coarse graining of semi-classical states.

These quantum states span a specific Hilbert space H = L^2(A; d&mu;_o), consisting of functions on the space of (suitably generalized) connections which are square integrable with respect to a natural, diffeomorphism invariant (regular, Borel) measure &mu;_o. This space is very large. However, it can be conveniently decomposed into a family of orthonormal, finite dimensional sub-spaces H = &sum;_{&gamma;, j} H_{&gamma;, j}, labelled by finite graphs &gamma; each edge of which itself is labelled by a non-trivial irreducible representation of SU(2) (or, a half-integer, or a spin j) [5]. H_{&gamma;, j} can be regarded as the Hilbert space of a 'spin-system'. These spaces are extremely simple to work with; this is why very explicit calculations are feasible. Elements of H_{&gamma;, j} are referred to as spin-network states [5].

As one would expect from the structure of the classical theory, the basic quantum operators are the holonomies ^h_p along paths p in &Sigma; and the triads ^*E [6]. Both sets of operators are densely defined and self-adjoint on H. Furthermore, a striking result is that all eigenvalues of the triad operators are discrete. This key property is, in essence, the origin of the fundamental discreteness of quantum geometry. For, just as the classical Riemannian geometry of &Sigma; is determined by the triads *E, all Riemannian geometry operators----such as the area operator ^A_S associated with a 2-surface S or the volume operator ^V_R associated with a region R----are constructed from ^*E. However, since even the classical quantities A_S and V_R are non-polynomial functionals of the triads, the construction of the corresponding ^A_S and ^V_R is quite subtle and requires a great deal of care. But their final expressions are rather simple [6].

In this regularization, the underlying background independence turns out to be a blessing. For, diffeomorphism invariance constrains the possible forms of the final expressions severely and the detailed calculations then serve essentially to fix numerical coefficients and other details. Let us illustrate this point with the example of the area operators ^AS. Since they are associated with 2-surfaces S while the states have 1-dimensional support, the diffeomorphism covariance requires that the action of ^AS on a state &Psi;&gamma;, j must be concentrated at the intersections of S with &gamma;. The detailed expression bears out this fact: the action of ^AS on &Psi;&gamma;, j is dictated simply by the spin labels jI attached to those edges of &gamma; which intersect S. For all surfaces S and 3-dimensional regions R in &Sigma;, ^AS and ^VR are densely defined, self-adjoint operators. All their eigenvalues are discrete. [6]..."
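A side note on how those spin labels translate into areas: the standard LQG area spectrum (not spelled out in the quote above, but standard in the literature) is AS = 8&pi;&gamma;BI lP2 &Sigma;I sqrt(jI(jI+1)), where &gamma;BI is the Barbero-Immirzi parameter (an unrelated use of the letter &gamma;). A minimal Python sketch of my own, with an illustrative value for &gamma;BI:

```python
import math

# Hedged sketch (my own illustration, not code from the quoted paper): the
# LQG area spectrum A_S = 8*pi*gamma_BI*l_P^2 * sum_I sqrt(j_I*(j_I+1)),
# with gamma_BI the Barbero-Immirzi parameter (value here is illustrative).
def area_eigenvalue(spins, gamma_bi=0.2375, l_planck=1.616e-35):
    """Area eigenvalue (m^2) for a surface punctured by edges with spins j_I."""
    return 8 * math.pi * gamma_bi * l_planck**2 * sum(
        math.sqrt(j * (j + 1)) for j in spins)

# a single j = 1/2 puncture deposits the smallest nonzero quantum of area
print(area_eigenvalue([0.5]))
# number of minimal-spin punctures needed to build up a ~100 cm^2 sheet of
# paper (order 10^67 with these inputs, matching the quote's rough 10^68)
print(0.01 / area_eigenvalue([0.5]))
```

The exact numbers depend on the choice of &gamma;BI; the point is the discreteness and the sqrt(j(j+1)) pattern dictated by the spin labels on the edges crossing S.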

My comment: The notation here differs in minor ways from what's found elsewhere and I have tried to adhere to Ashtekar's notation as much as typographically possible. Make any corrections you see fit if comparing this with the original paper. Here a hat ^ preceding a symbol says it is an operator. Also the underlying 3D manifold is &Sigma; instead of M, and the 2D surface whose area is to be measured is S. The spin network is written with a lowercase &gamma; instead of the uppercase &Gamma; we've seen elsewhere. I was tempted to change this, but decided it best to stick consistently to Ashtekar's notation in this quote.

Gold Member
Dearly Missed
getting back to work on Rovelli/Upadhya

we were working through Rovelli/Upadhya and were about up to section D (page 3) "The operator E(&Sigma;)"

several nice things will happen right away
(1) we get to choose the tau basis of su(2), written &tau;i, consisting of -i/2 times the Pauli matrices, and (2) we get to do a functional derivative---as in variational calculus, we get to take the derivative with respect to the connection!
By analogy with ordinary phase space
if A is like "x" or position then E is like "p" or momentum

So to start out with this section, xa are local coordinates in &Sigma; with indices a,b,c = 1,2,3 and i,j,k are other indices = 1,2,3, used just for expressing elements of the Lie algebra su(2) in the &tau;i basis. These i,j,k are sometimes called "internal indices".

And right off we can use the &tau; basis to write the connection A.
At a point x there are three directions you can go in the manifold, and each one corresponds to an infinitesimal rotation. At the point x, A is a 1-form with values in su(2), written A = Aadxa. This means Aa is a matrix in the Lie algebra for a = 1,2,3, and we can write it in the &tau; matrix basis

Aa = Aai &tau;i

So Rovelli/Upadhya write the connection at a point x this way:

Aa(x)dxa = Aai(x) &tau;idxa
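To make the &tau;-basis expansion concrete, here is a small numerical check of my own (with made-up components Aai at a single point x) that Aa = Aai &tau;i really lands in su(2), i.e. is skew-Hermitian and traceless:

```python
import numpy as np

# tau basis of su(2): -i/2 times the Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
tau = [-0.5j * s for s in sigma]

rng = np.random.default_rng(0)
A_components = rng.normal(size=(3, 3))   # made-up values of A_a^i at one point x

for a in range(3):
    A_a = sum(A_components[a, i] * tau[i] for i in range(3))
    # membership in su(2): skew-Hermitian and traceless
    assert np.allclose(A_a.conj().T, -A_a)
    assert abs(np.trace(A_a)) < 1e-12
print("each A_a = A_a^i tau_i is skew-Hermitian and traceless")
```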

Now we have quantum state functions &Psi;&xi;
where &xi; is a spin network----Rovelli/Upadhya use symbol s for it but this gets confused with an integration variable s so I am saying &xi;

The most exciting thing about this section, and equations (4) thru (11), is that we get to take the derivative of the functions &Psi;&xi; with respect to a connection A. Well, &Psi;&xi; is defined on the space of connections---the su(2)-valued 1-forms A on &Sigma;.
So if we take derivative of &Psi;&xi;(A) it has to be &delta;/&delta;A

And this is where we really need the &tau; basis of su(2), because at some point in the ceremony of taking the derivative we have to shake things down to the level of numbers.

The first step in doing this functional derivative is to consider the holonomy U(&gamma;, A), which is an element of SU(2) gotten by going along the path &gamma;. They refer us to equation (38) in appendix C on holonomies:

dU(s)/ds + d&gamma;a/ds Aa(&gamma;(s))U(s) = 0..............(R/U equation 38)

that is just the defining equation for U(s) which is an element of SU(2)----redoing this, I would say SO(3)----for each s. It is what is meant by the holonomy U(&gamma;(s), A) of A along &gamma;.

Now Rovelli/Upadhya invoke equation (38) to state their equation (5) which is "a standard result". This is an integral almost too long for me to write out. It is the derivative of holonomy with respect to the connection and obviously crucial to the whole section:

&delta;/&delta;Aai(x) U(&gamma;, A) = &int; ds d&gamma;a(s)/ds &delta;3(&gamma;(s),x) U(&gamma;(0,s), A) &tau;i U(&gamma;(s,1), A)

That is probably enough for one post. Note the presence of the delta-function in the integral. Also that the path &gamma; is being divided into a (0, s) part and a (s, 1) part by a breakpoint s. And the integral is over all possible breakpoints s.
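Equation (5) can actually be sanity-checked numerically: discretize the path, build the holonomy as an ordered product of exponentials per equation (38), nudge the connection at one sample point, and compare the finite-difference derivative against the "insert a &tau;i at the breakpoint" pattern. This is my own discretization, not code from the paper; the sign and left/right ordering below follow my reading of equation (38), and the paper's composition convention may place the factors oppositely:

```python
import numpy as np

def expm(M, terms=20):
    """Matrix exponential by truncated power series (fine for small ||M||)."""
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# tau basis of su(2): -i/2 times the Pauli matrices
tau = [-0.5j * np.array([[0, 1], [1, 0]], complex),
       -0.5j * np.array([[0, -1j], [1j, 0]], complex),
       -0.5j * np.array([[1, 0], [0, -1]], complex)]

N, ds = 200, 1.0 / 200
rng = np.random.default_rng(1)
coeffs = rng.normal(size=(N, 3))   # made-up samples of (dgamma^a/ds) A_a^i along the path
B = [sum(c[i] * tau[i] for i in range(3)) for c in coeffs]

def U(lo, hi, eps=0.0, slot=None):
    """Discrete holonomy over slots [lo, hi): solves dU/ds = -B(s) U, later factors on the left."""
    out = np.eye(2, dtype=complex)
    for k in range(lo, hi):
        M = B[k] + (eps * tau[0] if k == slot else 0)
        out = expm(-ds * M) @ out
    return out

# finite-difference derivative w.r.t. the tau_1 component of A at breakpoint slot k ...
k, eps = 77, 1e-6
dU_fd = (U(0, N, eps=eps, slot=k) - U(0, N)) / eps
# ... versus the equation-(5) pattern: break the path at s_k and insert tau_1 there
dU_ins = -ds * U(k + 1, N) @ tau[0] @ U(0, k + 1)
assert np.allclose(dU_fd, dU_ins, atol=1e-4)
print("finite-difference derivative matches the tau-insertion formula")
```

The factor ds is the discrete trace of the delta-function integration; in the continuum it becomes the &int; ds &delta;3(&gamma;(s),x) in equation (5).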

Gold Member
Dearly Missed
Pauli sigma matrices and the tau matrices

Hurkyl you are clearly familiar with these tau generators of su(2) but in case anyone else is reading the thread they are IIRC

Code:
tau_1 =    0   -i/2
         -i/2    0

tau_2 =    0   -1/2
          1/2    0

tau_3 =  -i/2    0
           0    i/2

And you find that if you take the transpose and the complex conjugate of any of them,
you get the negative of the matrix you started with

so they are "skew Hermitian" and also they are obviously trace zero, so all groups-for-dummies folk know that they exponentiate properly to things in SU(2) as they are supposed to
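In case anyone wants to verify those properties numerically, here is a quick check of my own that each &tau;i is skew-Hermitian and traceless, and that exp(&theta;&tau;i) lands in SU(2). It uses the closed form that follows from &tau;i2 = -(1/4) times the identity:

```python
import numpy as np

tau = [-0.5j * np.array([[0, 1], [1, 0]], complex),
       -0.5j * np.array([[0, -1j], [1j, 0]], complex),
       -0.5j * np.array([[1, 0], [0, -1]], complex)]

def exp_su2(theta, i):
    """exp(theta * tau_i) in closed form, using tau_i^2 = -(1/4) * identity:
    exp(theta*tau_i) = cos(theta/2)*I + 2*sin(theta/2)*tau_i."""
    return np.cos(theta / 2) * np.eye(2) + 2 * np.sin(theta / 2) * tau[i]

for i in range(3):
    t = tau[i]
    assert np.allclose(t.conj().T, -t)            # skew-Hermitian
    assert abs(np.trace(t)) < 1e-15               # trace zero
    Ui = exp_su2(1.3, i)
    assert np.allclose(Ui.conj().T @ Ui, np.eye(2))   # unitary
    assert abs(np.linalg.det(Ui) - 1) < 1e-12         # det 1, so U in SU(2)

# the spinor double cover showing itself: a 2*pi rotation gives -identity
assert np.allclose(exp_su2(2 * np.pi, 0), -np.eye(2))
print("tau_i are skew-Hermitian, traceless, and exponentiate into SU(2)")
```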

Hurkyl
Staff Emeritus
Gold Member
All right, so my guess as to the meaning of the triads was correct; they pull ordinary tangent vectors into a representation of su(2)/so(3). In fact, writing q and e as matrices, and considering two column vectors v and w:

vTqw = vT(e eT)w
= (eTv)T (eTw)

So applying the metric to two tangent vectors coincides with using eT to transform those tangent vectors into the representation and taking an ordinary dot product.
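That identity is easy to check numerically. A toy example of my own, with a made-up triad matrix at a single point:

```python
import numpy as np

rng = np.random.default_rng(3)
e = rng.normal(size=(3, 3))    # made-up triad e_a^i, written as a 3x3 matrix
q = e @ e.T                    # metric q_ab = e_a^i e_b^i
v, w = rng.normal(size=3), rng.normal(size=3)

# applying the metric == dot product of the triad-transformed vectors
assert np.isclose(v @ q @ w, (e.T @ v) @ (e.T @ w))
# and q built this way is automatically symmetric, positive semi-definite
assert np.allclose(q, q.T)
assert np.all(np.linalg.eigvalsh(q) >= -1e-12)
print("v^T q w equals (e^T v) . (e^T w)")
```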

(I use matrix notation because, IMHO, it's cleaner and neater than indices whenever I don't need to work with individual elements or rows/columns)

So now it's clear that eai encapsulate the idea of local coordinate axes. It's late so I'm gonna have to postpone further contemplating for another time.

Gold Member
Dearly Missed

Hurkyl, their equation (5)----the "standard result" on which this section is based----looks like the product rule in differential calculus to me:

(fgh)' = f'gh + fg'h + fgh'

The integral is just a big sum
The holonomy along a path is just a long product of matrices
and the integral is set up to dive into that string of matrices and take deriv of one of them, which shows up as the tau inserted into the sequence, and to move along systematically doing that.

And Aai(x) is just a number----the coordinate of the i-th tau----so it is &tau;i that gets inserted.
Taking deriv w/rt A boils down to
&delta;/&delta;Aai(x)

Originally posted by marcus
we were working through Rovelli/Upadhya and were about up to section D (page 3) "The operator E(&Sigma;)"

Now Rovelli/Upadhya invoke equation (38) to state their equation (5) which is "a standard result". This is an integral almost too long for me to write out. It is the derivative of holonomy with respect to the connection and obviously crucial to the whole section:

&delta;/&delta;Aai(x) U(&gamma;, A) = &int; ds d&gamma;a(s)/ds &delta;3(&gamma;(s),x) U(&gamma;(0,s), A) &tau;i U(&gamma;(s,1), A)

...the path &gamma; is being divided into a (0, s) part and a (s, 1) part by a breakpoint s. And the integral is over all possible breakpoints s.

and the &tau;i is inserted at the breakpoint

I am thinking that (5) is the crux and that (6) is just a mechanical extension to higher orders. Still have to discuss (7) thru (11)
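The product-rule reading can be tested on a toy example: differentiate a product of three matrix-valued functions of one parameter and compare with f'gh + fg'h + fgh'. This is my own check with 2x2 matrices, nothing from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
A0 = rng.normal(size=(3, 2, 2))   # the values f(0), g(0), h(0)
A1 = rng.normal(size=(3, 2, 2))   # the derivatives f'(0), g'(0), h'(0)

def product(t):
    """P(t) = f(t) g(t) h(t) with each factor linear in t."""
    f, g, h = (A0[k] + t * A1[k] for k in range(3))
    return f @ g @ h

eps = 1e-6
fd = (product(eps) - product(0)) / eps   # finite-difference P'(0)
exact = (A1[0] @ A0[1] @ A0[2]           # each term differentiates one factor,
         + A0[0] @ A1[1] @ A0[2]         # leaving the others in place --
         + A0[0] @ A0[1] @ A1[2])        # the "insertion" pattern of eq. (5)
assert np.allclose(fd, exact, atol=1e-4)
print("(fgh)' = f'gh + fg'h + fgh' holds for matrix factors too")
```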

jeff
Originally posted by Hurkyl
All right, so my guess as to the meaning of the triads was correct; they pull ordinary tangent vectors into a representation of su(2)/so(3). So now it's clear that eai encapsulate the idea of local coordinate axes.

Okay, but what's going on physically?

Originally posted by Hurkyl
I use matrix notation because, IMHO, it's cleaner and neater than indices whenever I don't need to work with individual elements or rows/columns)

It's standard practice in physics to display indices because they indicate the group theoretic properties that define and characterize theories.

jeff

Originally posted by marcus
we were working through Rovelli/Upadhya and were about up to section D (page 3) "The operator E(&Sigma;)"


Hmm.

Hurkyl
Staff Emeritus
Gold Member
Equation 5 confused me for a bit, until I figured out what &gamma;(a, b) meant.

Equation (8) troubles me a bit; I presume that in equation (7):

&Psi;(s-&gamma;)lm(A)

is a quantity dependent on A; how do they justify pulling it outside of the functional derivative when plugging into E(&Sigma;)?

Anyways, (9) is just plugging in (6), but in going to (10), it is very nice how all of the partials line up to allow the change of variables.

Okay, but what's going on physically?

I don't know what you're aiming at. What I see, atm, is a particular choice of bases for the tangent spaces with the interesting property that in the new coordinates, contraction with the metric qab has been replaced with contraction with &delta;ab. Our freedom to rotate our choice of (orthonormal) coordinate axes is directly expressible as an SO(3) transformation of the basis vectors, and from your post I imagine we can also SU(2) rotate these things as well, though I haven't had an opportunity to create a toy example to see for myself.