Signs LQG has the right redefinition (or wrong?)

  • Thread starter: marcus
  • Tags: Lqg

Summary:
The 2010 redefinition of Loop Quantum Gravity (LQG) emphasizes a framework devoid of a smooth manifold, focusing instead on a network of geometric information represented by graphs and 2-complexes. This approach draws parallels to Quantum Electrodynamics (QED) and Quantum Chromodynamics (QCD), suggesting that space can be understood through finite chunks rather than continuous structures. The redefinition aims to align with the principles of quantum mechanics, prioritizing observable relationships over assumptions about spacetime. Discussions also highlight the evolving nature of LQG and its potential compatibility with the principle of relative locality, emphasizing the need for empirical testing of both theories. Overall, the conversation reflects a deep engagement with the implications of LQG's reformulation and its future directions in theoretical physics.
  • #61
@Tom
The post #35 which Atyy just now quoted was one of the most cogent (convincing) ones on the thread. It is balanced and nuanced, so I want to quote it whole, as context. I think I understand how, when you look at it in the entire context, you can say that verifying some limit is a project of minor stature compared with postulating a QFT which is not "derived" from a classical theory by traditional "tried-and-true" methods.
tom.stoer said:
... I don't want to criticize anybody (Rovelli et al.) for not developing a theory for the cc. I simply want to say that this paper does not answer this fundamental question and does not explain how the cc could fit into an RG framework (as is expected for other couplings).

---------------------

We have to distinguish two different approaches (I bet Rovelli sees this more clearly than I do).
- deriving LQG based on the EH or Holst action, Ashtekar variables, loops, ... extending it via q-deformation etc.
- defining LQG using simple algebraic rules, constructing its semiclassical limit and deriving further physical predictions

The first approach was developed for decades, but still fails to provide all required insights like (especially) H. The second approach is not bad, as it must be clear that any quantization of a classical theory is intrinsically incomplete; it can never resolve quantization issues, operator ordering etc. Having this in mind it is not worse to "simply write down a quantum theory". The problem with that approach was never the correct semiclassical limit (this is a minor issue) but the problem of writing down a quantum theory w/o referring to classical expressions!

Look at QCD (again :-) Nobody is able to "guess" the QCD Hamiltonian; every attempt to do this would break numerous symmetries. So one tries (tried) to "derive" it. Of course there are difficulties like infinities, but one has rather good control regarding symmetries. Nobody is able to write down the QCD PI w/o referring to the classical action (of course it's undefined, infinite, has ambiguities..., but it does not fail from the very beginning). Btw.: this hasn't changed over decades, but nobody cares as the theory seems to make the correct predictions.

Now look at LQG. The time for derivations may be over. So instead of derived LQG (which by my argument explained above is not possible to 100%) one may simply postulate LQG. The funny thing is that in contradistinction to QCD we seem to be able to write down a class of fully consistent theories of quantum gravity w/o derivation, w/o referring to classical expressions, w/o breaking certain symmetries etc. The only (minor!) issue is the derivation of the semiclassical limit etc.

From a formal perspective this is a huge step forward. If this formal approach is correct, my concerns regarding the cc are a minor issue only.

Postulating is the word you used. It may indeed be time to postulate a quantum understanding of space and time, rather than continue struggling to derive. After all I suppose one could say that Quantum Theory itself was originally "invented" by strongly intuitive people like Bohr and Heisenberg with the help of their more mathematically adept friends. It had to be invented de novo before one could say what it means to "quantize" some classical thing.

Or it may not yet be time to take this fateful step of postulating a new spacetime and a new no-fixed-manifold field theory.

So there is the idea of the stature of the problem. A new idea of spacetime somehow has more stature than merely checking a limit. If the limit is wrong one can often go back and fix what was giving the trouble. We already saw that in LQG in 2007. So it could be no big deal compared with postulating the right format in the first place. I can see the sense of your saying "minor".

 
  • #62
tom.stoer said:
I think that the derivation of a certain limit is a minor issue compared to the problem that a construction of a consistent, anomaly-free theory (derived as quantization of a classical theory) is not available.

Yes, there is no need, in fact no reason, to go from classical theory to quantum theory. But aren't the semiclassical and classical limits very important? We seek all quantum theories consistent with the known experimental data. This is the same sort of concern as requiring that string theory be shown to contain the standard model of particle physics. We ask if there is more than one such theory, so that future experiments and observations can distinguish between them.
 
  • #63
I agree that deriving this limit is important, but if there is a class of theories they may differ only in the quantum regime (e.g. by operator ordering or anomalies which may vanish in the classical limit) and therefore this limit doesn't tell us much about the quantum theory itself.
 
  • #64
Continuing bit by bit with the project I mentioned earlier of understanding equation (4).
marcus said:
...
One thing on the agenda, if we want to understand (4) is to see why the integrals are over the specified number of copies of the groups----why there are that many labels to integrate out, instead of some other number. So for example you see on the first integral the exponent 2(E-L) - V. We integrate over that many copies of the group. Let's see why it is that number. E and V are the numbers of edges and vertices in the foam C. So E-L is the number of internal edges.

I try to use only regular symbols and avoid going to Tex, so I cannot duplicate the fancy script Vee used for the total valence of all the faces of the two-complex C.
That is, you count the number of edges that each face f has, and add it all up.
Naturally there will be overcounting because a given edge can belong to several faces.
So this number is bigger than E the number of edges.

I see no specially good symbol so I will make a bastard use of the backwards ∃
to stand for the total edges of all the faces, added up.

Now in equation (4) you see there is the second integral, which is over a Cartesian product of ∃-L copies of the group SU(2). Namely it is a Haar measure integral over SU(2)^{∃-L}

How to think about this? We look at the total sides ∃ of all the faces and we throw away the boundary edges, and we keep only the internal edges in our count. Now this goes back to equation (2)! "a group integration to each couple consisting of a face and an internal edge." So that is beginning to make sense. BTW anyone who wants to help talk through the sums and integrals of equation (4) is heartily welcome!
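To make that bookkeeping concrete, here is a minimal Python sketch of the counting. The foam data (E, L, V and the per-face edge counts) are invented toy numbers purely for illustration, not taken from the paper:

[CODE]
# Toy bookkeeping for the exponents in equation (4).
E = 10                                       # total number of edges in the foam C
L = 4                                        # boundary edges (each borders only one face)
V = 3                                        # vertices
edges_per_face = [3, 3, 4, 4, 4]             # number of edges bordering each face f
boundary_edges_per_face = [1, 1, 0, 1, 1]    # how many of those are boundary edges (sums to L)

internal_edges = E - L                       # E - L
total_valence = sum(edges_per_face)          # the "backwards E": edges of each face, added up
internal_valence = total_valence - sum(boundary_edges_per_face)   # = total valence minus L

su2_copies = internal_valence                # one h_ef per (face, internal edge) pair
sl2c_copies = 2 * internal_edges - V         # Rule 1: two g_ev per internal edge; Rule 4: drop one per vertex

print("internal edges E - L             =", internal_edges)   # 6
print("SU(2)  integrations, valence - L =", su2_copies)        # 18 - 4 = 14
print("SL(2,C) integrations, 2(E-L) - V =", sl2c_copies)       # 2*6 - 3 = 9
[/CODE]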
 
  • #65
Just as QED does not replace classical electrodynamics but simply goes deeper---we still use the Maxwell equations!---so the job of LQG is not to replace the differentiable manifold (that Riemann gave us around 1850) but to go deeper. That's obvious, but occasionally reminding ourselves of it may still be appropriate. The manifold is where differential equations live---we will never give it up.

But this equation (4) of http://arxiv.org/abs/1010.1939 is (or could be) the handle on geometry deeper than the manifold. So I want to "parse" it a little. "Parse" is what one learns to do with sentences, in school. It means to divide up into parts.

You see that equation (4) is preceded by four Feynman rules
I'm going to explain more explicitly but one brief observation is that in (4) the second integration and the second product over edges together implement Rule 2.

The other portions of (4) implement Rule 3.

Let's see if we can conveniently type some parts of equation (4) without resorting to LaTex.
Typing at an internet discussion board, as opposed to writing on a blackboard, is an abiding bottleneck.

∫_{SU(2)^{∃-L}} dh_{ef}

Remember that e and f are just numbers tagging the edges and faces of the foam.
e = 1,2,...,E
f = 1,2,...,F
and the backwards ∃ is the "total valence" of all the faces: the number of edges of each face, added up. The paper uses a different symbol for that, which I cannot type. So anyway ∃-L is the total internal valence of all the faces: what you get if you add up the number of non-boundary edges that each face has. Recall that L is the number of boundary edges (those bordering only one face, the unshared edges).

So let's see how the integral looks. It is a part of equation (4) that helps to implement Rule 2.
================

Well it looks OK. The integral is over the group manifold
SU(2)^{∃-L}
consisting of ∃-L copies of the compact group SU(2). It seems to read OK. If anyone thinks it doesn't, please say.

Then what goes into that integral, to implement geometric Feynman Rule 2, is a product over all the edges e bordering a given face f.
I'll try typing that too.
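While we are at it, one way to make the Haar integral concrete: a Haar-random element of SU(2) can be generated from a uniformly random unit quaternion, so the integral over SU(2)^{∃-L} is just an average over that many independent random matrices. A minimal numpy sketch (the count 14 is only the toy value from my earlier counting sketch):

[CODE]
import numpy as np

rng = np.random.default_rng()

def haar_su2():
    """Return a Haar-random SU(2) matrix built from a uniformly random unit quaternion."""
    a, b, c, d = rng.normal(size=4)
    n = np.sqrt(a*a + b*b + c*c + d*d)
    a, b, c, d = a/n, b/n, c/n, d/n
    return np.array([[ a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

# one independent random label h_ef per (face, internal edge) pair
labels = [haar_su2() for _ in range(14)]
print(np.allclose(np.linalg.det(labels[0]), 1.0))   # True: determinant 1, so it is in SU(2)
[/CODE]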
 
  • #66
To keep on track, since we have a new page, I will copy the "business part" of my last substantive post.
==quote==

As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter". That is one way of saying what the QG researchers' goal is. A new understanding of space and time, and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among alternative QGs, the LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classic deSitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.
...
...
[To expand on the point that in 1010.1939 form] it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h)
where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2,...,h_L) is a generic element of SU(2)^L

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The symbol h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l" which runs from 1 to L, and has no symbol for h.

The central quantity in the theory is the complex number Z_C(h) and one can think of that number as giving a quantum probability, a transition amplitude:

Z_roadmap(boundary conditions)

==endquote==

I added some clarification and emphasis to the last sentence.
 
  • #67
OK so part of equation (4) is an integral of a product of group characters which addresses Rule 2 of the list of Feynman rules.

∫_{SU(2)^{∃-L}} dh_{ef} ∏_{e∈∂f} χ^{j_f}(h_{ef})

where the idea is you fix a face in the two-complex, call it f, and you look at all the edges e bordering that face, and you look at their labels h_{ef}. These labels are abstract group elements belonging to SU(2). But what you want to integrate is a number. So you cook the group element h_{ef} down to a number χ^{j_f}(h_{ef}) and multiply the numbers corresponding to every edge of the face, to get a product number for the face, and then start adding those numbers. That's it, that's the integral (the particular piece of the integral we are looking at).

But what's the superscript j_f on the chi? Well, a set of nice representations of the group SU(2) is labeled by half-integers j, and if you look back in equation (4) you see that there is a sum running through the possible j, for each face f. So there is a sum over the possible choices j_f. And the character chi is just the dumbed-down version of the j_f-rep: the trace of the rep matrix.

It is basically just a contraption to squeeze the juice out of the apples. You pull the lever and squeeze out the juice and add it up (the adding up part is the integral.)

There is another part of equation (4) that responds to geometric Feynman Rule 3. I will get to that, hopefully later this afternoon.

I really like how they get this number Z, this quantum probability number Z_C(h).
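For anyone who wants to push actual numbers through the character: for an SU(2) element with rotation angle θ (so trace 2cos(θ/2)), the spin-j character is χ^j(h) = sin((2j+1)θ/2) / sin(θ/2), which equals 2j+1 at the identity. A minimal Python sketch (function name and test values are mine, purely illustrative):

[CODE]
import numpy as np

def su2_character(h, j):
    """Spin-j character of an SU(2) matrix h, using tr(h) = 2 cos(theta/2)."""
    t = np.trace(h).real
    if np.isclose(t, 2.0):                       # h is (close to) the identity
        return 2*j + 1
    if np.isclose(t, -2.0):                      # h is (close to) minus the identity
        return (-1)**int(round(2*j)) * (2*j + 1)
    theta = 2.0 * np.arccos(np.clip(t / 2.0, -1.0, 1.0))
    return np.sin((2*j + 1) * theta / 2.0) / np.sin(theta / 2.0)

print(su2_character(np.eye(2), 0.5))             # 2.0, the dimension of the spin-1/2 rep
print(su2_character(np.eye(2), 1.0))             # 3.0

theta = 1.3                                      # any rotation angle
h = np.diag([np.exp(1j*theta/2), np.exp(-1j*theta/2)])
print(su2_character(h, 0.5), np.trace(h).real)   # both equal 2 cos(theta/2)
[/CODE]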

 
  • #68
I accidentally lost most of this post (#68) while editing and adding to it. What follows is just a fragment, hard to understand without the vanished context
=======fragment========

Going back to ∫_{SL(2,C)^{2(E-L)-V}} dg_{ev}, I see that the explanation of the exponent 2(E-L)-V is to look at Rule 1 and Rule 4 together.

Rule 1 says for every internal edge you expect two integrals dg_{ev}
where the v stands for either the source or the target vertex of that edge.

Well there are L boundary edges, and the total number of edges in the foam is E. So there are E-L internal edges. So Rule 1 would have you expect 2(E-L) integrations dg_{ev} over SL(2,C).

Simple enough, but then Rule 4 says at each vertex one integration is redundant and is omitted.
So V being the number of vertices, that means V integrations are dropped. And we are left with
2(E-L) - V.

Intuitively, what all those SL(2,C) integrations are doing is working out all the possible gauge transformations that could happen to a given SU(2) label h_{ef} on an edge e of a face f.

Now we need to look at Rule 3 and see how it is implemented in equation (4)

Rule 3 says to assign to each face f in the foam a certain sum ∑_{j_f}.
The sum is over all possible half-integers j; since we are focusing on a particular face f we tag that run of half-integers j_f.
And that sum is simply a sum of group character numbers (each multiplied by the integer 2j+1, which is the dimension of the vector space of the j-th rep). Here's the sum:
∑_{j_f} (2j_f+1) χ^{γ(j_f+1), j_f}(g)

Now the only thing I didn't specify is what group element that generic "g" stands for, that is plugged into the character χ.


∑_{j_f} (2j_f+1) χ^{γ(j_f+1), j_f}( ∏_{e∈∂f} (g_{es_e} h_{ef} g_{et_e}^{-1})^{ε_{lf}} )



=====end fragment===

Since the notation when lost is hard to recover, I am going to leave this as it is and not try to edit it.
I will start a new post.

Found another fragment of the original post #68!
==quote==
Let's move on and see how equation (4) implements geometric Feynman Rule 3.
Now we are going to be integrating over multiple copies of a somewhat larger group, SL(2,C)

∫_{SL(2,C)^{2(E-L)-V}} dg_{ev}


As before we take a rep, and since we are working with a half-integer j_f, this time the rep is going to be tagged by a pair of numbers (γ(j_f+1), j_f), and we plug in a group element, which gives a matrix. And then as before we take the TRACE of that matrix, which does the desired thing and gives us a complex number.

Here it is:
χ^{γ(j_f+1), j_f}(g)

That's what happens when we plug any old generic g from SL(2,C) into the rep. Now we have to say which "g" we want to plug in. It is going to be a PRODUCT of "g"s that we pick up going around the chosen face. And also, meanwhile going around, integrating out every possible SL(2,C) gauge transformation on the edge labels. Quite an elaborate circle dance!

Before, when we were implementing Rule 2, it was simpler. We just plugged a single group element h_{ef} into the rep, and that h_{ef} was what we happened to be integrating over.

For starters we can look at the wording of Rule 3 and see that it associates A SUM TO EACH FACE.
So there down in equation (4) is the sum symbol, and the sum clearly involves all the edges that go around the face. So that's one obvious reason it's more complicated.

==endquote==

As I said above, I am going to leave this as it is and start a new post.

 
  • #69
For anybody coming in new to this thread, at the moment I am chewing over the first page of what I think is the best current presentation of LQG, which is an October 2010 paper
http://arxiv.org/abs/1010.1939

Accidentally trashed much of my earlier post (#68) so will try to reconstruct using whatever remains.

In post #67 I was talking about how equation (4) implements Feynman Rule 2.

Now let's look at Rule 3 and see how it is carried out.

There's one tricky point about Rule 3---it involves elements g of a larger group, SL(2,C).
This has a richer set of representations, so the characters are not simply labeled by half-integers.

As before, what is inside the integral will be a product of group character numbers of the form χ(g), where this time g is in SL(2,C). The difference is that SL(2,C) reps are not classified by a single half-integer j, but by a pair of numbers (p, j) where j is a half-integer but p doesn't have to be; p can be a real number, like for instance the Immirzi parameter γ = 0.274... multiplied by the half-integer (j+1). Clearly a positive real number, not a half-integer.

χ^{γ(j_f+1), j_f}(g)

Rule 3 says to assign to each face f in the foam a certain sum ∑_{j_f}.
The sum is over all possible half-integers j; since we are focusing on a particular face f we tag that run of half-integers j_f.
And that sum is simply a sum of group character numbers (each multiplied by the integer 2j+1, the dimension of the vector space of the j-th rep). Here's the sum:
∑_{j_f} (2j_f+1) χ^{γ(j_f+1), j_f}(g)

Now the only thing I didn't specify is what group element that generic "g" stands for, that is plugged into the character χ. Well it stands for a kind of circle-dance where you take a product of edge labels going around the face.

∏_{e∈∂f} (g_{es_e} h_{ef} g_{et_e}^{-1})^{ε_{lf}}

And when you do that there is the question of orientation. Each edge has its own orientation, given by its source and target vertex assignment. And each face has an orientation, a preferred cyclic ordering of its edges. Since an edge can be shared by several faces, you can't count on the orientations of the edges being consistent with a given face. So what the epsilon exponent does is fix that: it is either 1 or -1, whatever is needed to make the orientations agree.
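Here is how I picture that ε bookkeeping in code: going around a face you multiply its edge labels, inverting whichever edges point against the face's cyclic orientation. A minimal Python sketch, with a made-up triangular face and only the SU(2) labels h_ef (the SL(2,C) g's are left out for simplicity):

[CODE]
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    q = rng.normal(size=4)
    q = q / np.linalg.norm(q)
    a, b, c, d = q
    return np.array([[ a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

def face_holonomy(edge_labels, face):
    """Multiply the labels around one face, inverting edges whose orientation
    disagrees with the face orientation (epsilon = -1)."""
    prod = np.eye(2, dtype=complex)
    for edge_id, epsilon in face:                # face = [(edge id, +1 or -1), ...]
        h = edge_labels[edge_id]
        prod = prod @ (h if epsilon == +1 else np.linalg.inv(h))
    return prod

labels = {0: random_su2(), 1: random_su2(), 2: random_su2()}
face = [(0, +1), (1, -1), (2, +1)]               # the middle edge runs against the face
g = face_holonomy(labels, face)
print(np.allclose(g @ g.conj().T, np.eye(2)))    # True: the product is still in SU(2)
[/CODE]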

===========================
Now looking at the first integral of equation (4),
namely ∫_{SL(2,C)^{2(E-L)-V}} dg_{ev},
we can explain the exponent 2(E-L)-V by referring back to Rule 1 and Rule 4 together.

Rule 1 says for every internal edge you expect two integrals dg_{ev},
where the v stands for either the source or the target vertex of that particular edge e, so g_{ev} stands for either
g_{es_e} or g_{et_e}.

Well there are L boundary edges, and the total number of edges in the foam is E. So there are E-L internal edges. So Rule 1 would have you expect 2(E-L) integrations dg_{ev} over SL(2,C).

Rule 4 then adds the provision that at each vertex one integration is redundant and is omitted.
So V being the number of vertices, that means V integrations are dropped. And we are left with
2(E-L) - V.

Intuitively, what those SL(2,C) integrations are doing is working out all the possible gauge transformations that could happen to a given SU(2) label h_{ef} on an edge e of a face f.
 
  • #70
I see I made a typo on the page above. It should be ε_{ef}, not ε_{lf}.

That's enough parsing of equation (4). It is the central equation of the LQG formulation we're talking about in this thread. Consider it discussed, at least for the time being. The topic question is whether it is the right redefinition or not, of the theory. I think it is, and gave some reasons.
marcus said:
As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter". That is one way of saying what the QG researchers' goal is. A new understanding of space and time, and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among alternative QGs, the LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classic deSitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939...

So can you think of any reasons to offer why the new formulation is NOT the right way to go? If you gave some arguments against this formulation which then got covered over by my struggling with the main equation, please help by bringing those arguments/signs forward here so we can take a fresh look at them.
 
  • #71
Another sign: LQG defined this way turns out to be a generalized topological quantum field theory (TQFT).

==quote page 2 section III "TQFT on manifolds with defects" ==
...
If C is a two-complex bounded by the (possibly disconnected) graph Γ then (4) defines a state in H_Γ which satisfies the TQFT composition axioms [27]. Thus the model formulated above defines a generalized TQFT in the sense of Atiyah.
==endquote==

 
  • #72
Continuing to hit the key points of http://arxiv.org/abs/1010.1939
The Hilbert space H_Γ of LQG is essentially the space of square-integrable complex-valued functions on the L-fold Cartesian product SU(2)^L.
Now a generic L-tuple of SU(2) elements is what I was writing h. And equation (4) defines a function Z_C of h.
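Before getting to the spin networks themselves, a small aside on why matrix elements of group representations can serve as a basis here. For a single copy of SU(2) (that is, L = 1), the Peter-Weyl theorem says the Wigner matrix elements D^j_{mn}(h) are orthogonal under the Haar measure, with norm squared 1/(2j+1). Here is a minimal Monte Carlo check for spin 1/2, where the Wigner matrix is just the defining 2x2 matrix; the sample size and the check itself are mine, purely for illustration:

[CODE]
import numpy as np

rng = np.random.default_rng(1)

# N Haar-random SU(2) matrices at once, from uniformly random unit quaternions
N = 100000
q = rng.normal(size=(N, 4))
q = q / np.linalg.norm(q, axis=1, keepdims=True)
a, b, c, d = q.T
h00 = a + 1j*b          # matrix element D^{1/2}_{00}(h) = h_00 in the defining rep
h01 = c + 1j*d          # matrix element D^{1/2}_{01}(h) = h_01

# Peter-Weyl orthogonality: integral dh D^j_{mn} conj(D^j_{m'n'}) = delta_mm' delta_nn' / (2j+1)
print(abs(np.mean(h00 * np.conj(h00))))   # approx 0.5  (same matrix element, 1/(2j+1) with j = 1/2)
print(abs(np.mean(h00 * np.conj(h01))))   # approx 0    (different matrix elements)
[/CODE]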

The spin networks form a basis for the quantum states in H_Γ. To have sufficient understanding of the subject matter, I should be able to write any spin network also as a function of h. See equation (15) on page 3 of the paper. I'll try typing what a spin network
{Γ, j_l, i_n : l = 1,...,L and n = 1,...,N}
looks like as a complex-valued function of h.

Here it is (following equation 15)

⟨ ⊗_l d_{j_l} D^{j_l}(h_l) | ⊗_n i_n ⟩_Γ

"where D^{j_l}(h_l) is the Wigner matrix in the spin-j representation and ⟨·|·⟩_Γ indicates the pattern of index contraction between the indices of the matrix elements and those of the intertwiners given by the structure of the graph. A G-intertwiner, where G is a Lie group, is an element of a (fixed) basis of the G-invariant subspace of the tensor product ⊗_l H_{j_l} of irreducible G-representations---here those associated to the links l bounded by n. Since the Area is the SU(2) Casimir, the spin j_l is easily recognized as the Area quantum number and i_n is the Volume quantum number."

 
  • #73
I've listed ten* indications that the current LQG formulation is the right one. No one seems able to provide countervailing evidence.

I also get the impression that the LQG research community has swung over to the new version, or if not entirely yet is not putting up much resistance. (e.g. look at the makeup of the QG school that starts one month from now at Zakopane.)

https://www.physicsforums.com/showthread.php?p=3110549#post3110549

*see posts #70 and #71

=============================
Hi Atyy, thanks for your opinion!

The indication of a de Sitter universe is just that, an indication. Physicists are always doing calculations to first order approx and then gradually improving the accuracy. It's great they got deSitter at first order. The day is young on that one. :biggrin:

I don't see how you can say "probably" divergent. Are you such a great expert that you can put probability measures on the future of research? The arguments in the literature are that the theory is NOT UV divergent. As Tom has said, the prospect of IR divergence doesn't worry him much. It's a common ailment that other theories have learned to live with.

It's not a high priority to address the IR divergence issue, I think. But ways to fix that have been proposed as well. Someone will get around to studying that eventually.

=====================

Meanwhile, Atyy, doesn't it seem as if the string community is casting around for 4D/nonstring alternatives?

Horava's 4D skew gravity
Verlinde's kinky polymer vision of entropic gravity
Nima's quantum polytopes (his Pirsa talk was about scattering but he hinted at work on gravity in progress)

It wouldn't surprise me if Nima comes up with something on quantum polytope geometry/gravity that is 4D, non-supersymmetric, and looks like a cousin of Rovelli and Rivasseau reformulation of LQG GFT, where quantum polytopes have been coming up frequently as well!
==================

Careful, your information is out of date. There has been an abrupt increase of interest, research activity, and number of researchers just in the past 3 years. Also the formulation has changed radically. You may not know what is going on because you are interested in your own ideas and wish to dismiss the QG realworld.
==================

Atyy, that's interesting! What is the "X" divergence (your name for it). I need a page and paragraph reference so I can see what you are quoting of Rovelli in context. Eyes get tired scanning over page after page looking for quotes. Point me to it and I will be glad to look!
 
  • #74
It is based on probably divergent series, and the indication of a de Sitter universe removes the higher order terms by ignoring them.
 
  • #75
marcus said:
I've listed ten* indications that the current LQG formulation is the right one. No one seems able to provide countervailing evidence.
I think it is more accurate to say that nobody really cares anymore after 25 years.
 
  • #76
I say probably divergent because Rovelli says so.

There are 3 sorts of divergences in Rovelli's classification.

1) UV - not present
2) IR - present but not a problem
3) X (my nomenclature) - probably present, and probably a problem.
 
  • #77
atyy said:
I say probably divergent because Rovelli says so.

3) X (my nomenclature) - probably present, and probably a problem.

I asked for a page reference in my initial response https://www.physicsforums.com/showpost.php?p=3111122&postcount=73 to this post, and you have not offered one.
I assume this is because you cannot find anywhere that Rovelli says "probably present and probably a problem" about some kind of divergence.

So far, if we cannot get a handle on it and discuss it, this "X" is just a mystifying "Atyyism" :smile:
Please give some concrete substance to your comment!
 
  • #78
marcus said:
I asked for a page reference in my initial response https://www.physicsforums.com/showpost.php?p=3111122&postcount=73 to this post, and you have not offered one.
I assume this is because you cannot find anywhere that Rovelli says "probably present and probably a problem" about some kind of divergence.

So far, if we cannot get a handle on it and discuss it, this "X" is just a mystifying "Atyyism" :smile:
Please give some concrete substance to your comment!


Please quote the page request explicitly.
 
  • #79
marcus said:
I've listed ten* indications that the current LQG formulation is the right one. No one seems able to provide countervailing evidence.

I also get the impression that the LQG research community has swung over to the new version, or if not entirely yet is not putting up much resistance. (e.g. look at the makeup of the QG school that starts one month from now at Zakopane.)

https://www.physicsforums.com/showthread.php?p=3110549#post3110549

*see posts #70 and #71

atyy said:
I say probably divergent because Rovelli says so.

There are 3 sorts of divergences in Rovelli's classification.

1) UV - not present
2) IR - present but not a problem
3) X (my nomenclature) - probably present, and probably a problem.

marcus said:
Atyy, that's interesting! What is the "X" divergence (your name for it). I need a page and paragraph reference so I can see what you are quoting of Rovelli in context. Eyes get tired scanning over page after page looking for quotes. Point me to it and I will be glad to look!

marcus said:
I asked for a page reference in my initial response https://www.physicsforums.com/showpost.php?p=3111122&postcount=73 to this post, and you have not offered one.
I assume this is because you cannot find anywhere that Rovelli says "probably present and probably a problem" about some kind of divergence.

So far, if we cannot get a handle on it and discuss it, this "X" is just a mystifying "Atyyism" :smile:
Please give some concrete substance to your comment!

atyy said:
Please quote the page request explicitly.

OK, done. I can't tell whether you are just playing games or whether you are really confused about a type of very large-scale (cosmological) divergence that R. mentioned.

If I knew exactly what you meant by "X" divergence, maybe I could help clarify.
 
  • #80
The request appears to be after my post mentioning X, not before.
 
  • #81
atyy said:
The request appears to be after my post mentioning X, not before.

I've asked you for page refs several times. It's an ongoing problem. Not giving a pointer can (in some people) be associated with inaccurate paraphrase or quotes out of context that seem to mean something else. You must surely be aware of this. In this case I did ask for a specific pointer AFTER your comment about the "X" divergence.

Let's not quibble over trivia. I'm interested to know what you think is this X about which Rovelli says "probably divergent and probably a problem". Or if he actually did not say that, then what is this X that YOU think is probably present and probably a problem?

I'm interested to know! It could be a type of divergence which might arise if you include the whole universe (with no cosmological event horizon) in the analysis. So if the universe is infinite you get bigger and bigger spin networks, growing in size without limit. That would be interesting to discuss, and to think of how it might be handled. But since you don't say what you mean by "X" I am unable to be sure what you think is a problem! :smile:
 
  • #82
marcus said:
I've asked you for page refs several times. It's an ongoing problem. Not giving a pointer can (in some people) be associated with inaccurate paraphrase or quotes out of context that seem to mean something else. You must surely be aware of this. In this case I did ask for a specific pointer AFTER your comment about the "X" divergence.

Good. And it appeared in a post preceding my mention of X. That's ok. But in that case, if I don't provide the page reference, it's because I haven't seen it, not because it doesn't exist.

http://arxiv.org/abs/1010.1939 p6

UV "There are no ultraviolet divergences, be cause there are no trans-Planckian degrees of freedom.

IR "However, there are potential large-volume divergences, coming from the sum over j"

X "The second source of divergences is given by the limit (26)."
 
  • #83
To keep on track, since we have a new page, I will copy the "business part" of my last substantive post.
==quote==
As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter". That is one way of saying what the QG researchers' goal is. A new understanding of space and time, and maybe laying out matter on a new representation of space and time will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among alternative QGs, the LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classic deSitter universe.
  • LQG defined this way turns out to be a generalized topological quantum field theory (see TQFT axioms introduced by Atiyah).
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.
...
...
[To expand on the point that in 1010.1939 form] it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h)
where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2,...,h_L) is a generic element of SU(2)^L

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. ...

The central quantity in the theory is the complex number Z_C(h) and one can think of that number as giving a quantum probability, a transition amplitude:

Z_roadmap(boundary conditions)

==endquote==
==quote http://arxiv.org/abs/1010.1939 page 2 section III "TQFT on manifolds with defects" ==
...
If C is a two-complex bounded by the (possibly disconnected) graph Γ then (4) defines a state in H_Γ which satisfies the TQFT composition axioms [27]. Thus the model formulated above defines a generalized TQFT in the sense of Atiyah.
==endquote==

 
  • #84
atyy said:
...
X "The second source of divergences is given by the limit (26)."

That problem goes away if the universe you are modeling has a finite size.
Would you like to have that explained?
 
  • #85
marcus said:
That problem goes away if the universe you are modeling has a finite size.
Would you like to have that explained?

Sure.

Rovelli says that for the IR divergence, but not for X.

IR "This is consistent with the fact that q-deformed amplitudes are suppressed for large spins, correspondingly to the fact that the presence of a cosmological constant sets a maximal distance and effectively puts the system in a box"."

X "Less is known in this regard, but it is tempting to conjecture that this sum could be regularized by the quantum deformation as well."
 
  • #86
atyy said:
marcus said:
That problem goes away if the universe you are modeling has a finite size.
Would you like to have that explained?
Sure.

We don't have to speculate about "quantum deformation". Sure, R. mentioned it, and it is interesting to think how it might affect the picture. But (26) is already not a problem if the U simply has a finite size.

That is because LQG effectively has a UV cutoff. It has a limit on how fine the resolution can be, how small you can measure. The "cell size" does not shrink below some scale.

(26) is about considering larger and larger foams, ordered by inclusion. A finite U implies that the process must terminate, so the limit exists. That's all I was saying.
 
  • #87
marcus said:
We don't have to speculate about "quantum deformation". Sure, R. mentioned it, and it is interesting to think how it might affect the picture. But (26) is already not a problem if the U simply has a finite size.

That is because LQG effectively has a UV cutoff. It has a limit on how fine the resolution can be, how small you can measure. The "cell size" does not shrink below some scale.

(26) is about considering larger and larger foams, ordered by inclusion. A finite U implies that the process must terminate, so the limit exists. That's all I was saying.



Then how can "summing = refining"?

http://arxiv.org/abs/1010.5437
 
  • #88
atyy said:
Then how can "summing = refining"?

http://arxiv.org/abs/1010.5437

Please say explicitly what you think the problem with that is.

You may be confused by the words. "Refining" here does not have a metric scale connotation. All it can mean is to add more cells to the complex.

You have to look directly at the math. What the objects are and how the limits are defined.
You can't just go impressionistically/vaguely by the words. I don't know what your source of confusion is, can only guess---unless you spell out what you are thinking.

But I know that there is no inconsistency between the two types of limit, as defined.
On the one hand summing over cell-complexes and on the other hand taking a cell complex and adding more and more cells to it.

Really it's fine! :smile:
 
  • #89
I'm taking issue with your interpretation that summing = size of the universe.

So a bigger and bigger universe means more and more refining?

The basic result in the summing=refining paper is "We have observed that under certain general conditions, if this limit exist, it can equally be expressed as the sum over foams, by simply restricting the amplitudes to those with nontrivial spins."

Are you saying this limit exists in a finite universe?
 
  • #90
atyy said:
I'm taking issue with your interpretation that summing = size of the universe.

So a bigger and bigger universe means more and more refining?

Forget the words, Atyy; look at the actual math, which is the meaning of the "s=r" paper.

In what I said the U has a finite size. So don't be talking about a bigger and bigger U.
The U has some size. Say roughly a hypersphere with radius of curvature 100 Gly (a NASA WMAP lower-bound estimate from around 2007, as I recall).

Say you start with a dipole spin network like this ([]) labeled to agree with that 100 Gly
(you've surely seen that dipole graph before in R papers, better drawn)
and you start refining. That means adding nodes and links

For the next twenty gazillion years, adding complexity to the graph DOES in fact correspond to the intuitive idea of refining.

But then the process has to terminate, because you got down to where every node has the min vol and every link has the min area.

You run into the finite resolution barrier. smaller is meaningless.
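Just to put a very rough number on where that barrier sits, here is my own back-of-envelope arithmetic, assuming the minimal area quantum is of order the Planck area and taking the 100 Gly radius at face value:

[CODE]
import math

Gly_in_m = 9.461e24            # one giga-light-year in metres
R = 100 * Gly_in_m             # radius of curvature of the toy hypersphere
l_planck = 1.616e-35           # Planck length in metres

area = 4 * math.pi * R**2      # area of an ordinary sphere of that radius, for rough scale
n_quanta = area / l_planck**2  # how many Planck-area patches fit on it
print(f"{n_quanta:.1e}")       # about 4e124 -- enormous, but finite
[/CODE]

So the refinement has an astronomically large but finite number of steps available to it, which is all the argument needs.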

Better to actually look at what the math says than take issue with the words.

Could you be being a wee bit suspicious, thinking everybody is trying to fool you because you don't understand something? :smile: Take it easy. That X is a non-problem, pragmatically speaking.
 
