How does the projective LQG approach shift the traditional formulation of LQG?

In summary, a major change in formulating Loop Quantum Gravity (LQG) is happening, as discussed in Suzanne Lanery's seminar at Perimeter 5 days ago and in a similar talk today at Nijmegen. This new development is based on the 11 November Lanery-Thiemann paper called Projective LQG, which can be found by googling "projective LQG arxiv". The projective approach describes quantum states as projective families of density matrices over a collection of smaller, simpler Hilbert spaces. This allows for a more balanced treatment of the holonomy and flux variables and could lead to the development of more satisfactory coherent states. Lanery's Perimeter talk and the arXiv papers are linked in the thread below.
  • #1
marcus
A major change is happening in the way LQG is formulated.
Suzanne Lanery gave a seminar about this 5 days ago (10 Dec) at Perimeter, and today she gave a similar talk at Nijmegen (Renate Loll's seminar).

It's evident there is a lot of interest in this new development, which is based on the 11 November Lanery-Thiemann paper called Projective LQG.
You can get the paper simply by googling [projective LQG arxiv].
For the moment, at least, the first hit, searching with that googlekey, is
http://arxiv.org/abs/1411.3592
Projective Loop Quantum Gravity I. State Space
Suzanne Lanéry, Thomas Thiemann
(Submitted on 11 Nov 2014)
Instead of formulating the state space of a quantum field theory over one big Hilbert space, it has been proposed by Kijowski to describe quantum states as projective families of density matrices over a collection of smaller, simpler Hilbert spaces. Beside the physical motivations for this approach, it could help designing a quantum state space holding the states we need. ..
...
If the gauge group happens to be compact, we also have at our disposal the well-established Ashtekar-Lewandowski Hilbert space, which is defined as an inductive limit using building blocks labeled by edges only. We then show that the quantum state space presented here can be thought as a natural extension of the space of density matrices over this Hilbert space. In addition, it is manifest from the classical counterparts of both formalisms that the projective approach allows for a more balanced treatment of the holonomy and flux variables, so it might pave the way for the development of more satisfactory coherent states.
81 pages, many figures
 
  • #2
Basically, instead of quantum states being density matrices defined on a single grand Hilbert space, they live on a collection of finite-dimensional Hilbert spaces ordered by coarse-graining "projection" maps: tensor product factorizations that map finer-grained, higher-dimensional Hilbert spaces onto products of coarser, lower-dimensional ones. When you think about it, this makes physical sense. In a real physical situation one can only control a finite number of degrees of freedom and make a finite number of measurements. There are no infinite-dimensional Hilbert spaces, but one can refine and approximate by controlling more degrees of freedom, at finer scale, etc. So what we are really talking about is the information in a family of "small" Hilbert spaces, rather than in one large one--a family that is ordered and connected consistently by projection/factorization maps.
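
Schematically (my own shorthand, not necessarily the paper's exact notation): whenever a label η is coarser than a label η', the finer Hilbert space factorizes into the part that η keeps track of times the part it ignores, and the density matrices of a projective family have to agree under partial trace over the ignored factor:

$$\mathcal H_{\eta'} \;\cong\; \mathcal H_{\eta} \otimes \tilde{\mathcal H}_{\eta'\to\eta}, \qquad \rho_{\eta} \;=\; \operatorname{Tr}_{\tilde{\mathcal H}_{\eta'\to\eta}}\!\big(\rho_{\eta'}\big) \quad\text{whenever } \eta \preccurlyeq \eta'.$$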

One good way to understand the import of this shift is to watch the video of Suzanne Lanery's 10 December talk at Perimeter. A googlekey that currently works for it is
[pirsa lanery]
If you google with that key you get the list of her PIRSA videos, and the top one is
http://pirsa.org/14120011/

Extending the state space of LQG
Instead of formulating the state space of a quantum field theory over a single big Hilbert space, it has been proposed by Jerzy Kijowski to describe quantum states as projective families of density matrices over a collection of smaller, simpler Hilbert spaces. I will discuss the physical motivations for this approach and explain how it can be implemented in the context of LQG. While the resulting state space forms a natural extension of the Ashtekar-Lewandowski Hilbert space, it treats position and momentum variables on equal footing. This paves the way for the construction of semi-classical states beyond fixed graph level, and eventually for the derivation of LQC from full LQG.
10/12/2014
 
  • #3
On November 11, Suzanne Lanery and Thomas Thiemann posted FOUR good-sized papers on arXiv.
One was the "Projective LQG" paper I already linked at the start.

The other three are very interesting because they provide a pedagogical introduction to the needed mathematical formalism---they do it in three steps: classical, quantum, and toy-model examples. This projective approach is interesting as a new way to do Quantum Field Theory and other things, not limited to LQG. The people who first worked it out, Jerzy Kijowski (1977) and Andrzej Okolow (2007, 2013), were in fact interested in QFT and other approaches to QG as well.
The projective approach makes really good physical sense, so that makes the pedagogical introduction to it extra interesting---you can apply it to other things.

If you google [suzanne thomas arxiv] you get a listing of all four papers:
Projective Limits of State Spaces II. Quantum Formalism
Projective Limits of State Spaces I. Classical Formalism
Projective Loop Quantum Gravity I. State Space
Projective Limits of State Spaces III. Toy-Models

in that order.

I noticed that the 2nd introductory paper, "Quantum Formalism", is the top hit, which rang a bell for me: that is the one I have found the most helpful and have printed out rather than just reading on the computer screen. It is the one I would recommend to someone coming new to the subject.
http://arxiv.org/abs/1411.3590
 
  • #4
I'll post the abstract for that one. It's really worth taking a look at!
http://arxiv.org/abs/1411.3590
Projective Limits of State Spaces II. Quantum Formalism
Suzanne Lanéry, Thomas Thiemann
(Submitted on 11 Nov 2014)
In this series of papers, we investigate the projective framework initiated by Jerzy Kijowski and Andrzej Okolow, which describes the states of a quantum theory as projective families of density matrices. After discussing the formalism at the classical level in a first paper, the present second paper is devoted to the quantum theory. In particular, we inspect in detail how such quantum projective state spaces relate to inductive limit Hilbert spaces and to infinite tensor product constructions. Regarding the quantization of classical projective structures into quantum ones, we extend the results by Okolow [Okolow 2013, arXiv:1304.6330], that were set up in the context of linear configuration spaces, to configuration spaces given by simply-connected Lie groups, and to holomorphic quantization of complex phase spaces.
56 pages, 2 figures

My comment is that the collection of "small" Hilbert spaces (which I think of as each corresponding to some definite experiment or set of observations) is ordered by factorization maps which go downwards in complexity: they split the larger space into a tensor product of two smaller ones--one representing the degrees of freedom we care about and the other representing information we ignore (which gets "traced over" when it comes to density matrices representing actual states).
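
Here is a minimal numerical toy of that "trace over what we ignore" step (my own sketch in Python, not code from the papers; the two-qubit split is just an illustrative assumption):

import numpy as np

# The "finer" space is two qubits: (factor we keep) x (factor we ignore).
# Coarse-graining a density matrix just means tracing out the ignored factor.
def trace_out_second(rho, d_keep=2, d_ignore=2):
    rho = rho.reshape(d_keep, d_ignore, d_keep, d_ignore)
    return np.einsum('ajbj->ab', rho)

# A pure entangled state on the finer space: (|00> + |11>)/sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_fine = np.outer(psi, psi.conj())

rho_coarse = trace_out_second(rho_fine)
print(rho_coarse)                             # maximally mixed qubit, diag(1/2, 1/2)
print(np.isclose(np.trace(rho_coarse), 1.0))  # still unit trace: a genuine density matrix

The coarse state is still a perfectly good density matrix; it has just forgotten the factor we chose to ignore, which is exactly the compatibility the projective family is built around.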
 
  • #5
One reason I'm reacting strongly to this development is that I find Lanery has a clear, effective expository writing style, combined with what I see as good mathematical style: the formulation is efficient, using conceptual sophistication to make things simpler rather than more complicated. That's just my impression. Another reason is that the formulation seems to me (IMHO) to correspond to operational (relational) physical reality: the necessary finiteness of each set of observations, and the assumption of consistency among them.
I'll note also that Lanery gave a seminar talk at Perimeter on 10 December and at Radboud (Nijmegen) on 15 December on the same topic ("Extending the LQG state space" by this projective/inductive ordering of finite experiments, if you want to look at it that way), and also a talk with the same title at the EFI Winter School in February 2014. So I want to check whether there is an EFI Winter School again in early 2015.
http://www.gravity.physik.fau.de
http://www.gravity.physik.fau.de/events/tux2/tux2.shtml
There's a scenic village in the Austrian Alps called Tux---it's a ski resort for part of the year. EFI stands for "Emerging Fields Initiative", IQG stands for Institute for Quantum Gravity (at the Erlangen-Nürnberg university where Thomas Thiemann's group is based), and the EFI and IQG sponsor/organize a winter school at Tux.
I just saw the announcement for the third EFI winter school, "Tux3", in February 2015. I was a little surprised to see that Lanery was not listed as a participant, but it is only a partial list which will probably be added to.
http://www.gravity.physik.fau.de/events/tux3/tux3.shtml

And maybe I'm wrong. Maybe this "projective LQG" development, with a new way of formulating the theory, does not have as much momentum as I think. In any case here are the slides from that February 2014 talk at Tux:
http://www.gravity.physik.fau.de/events/tux2/lanery.pdf

BTW another thing to check out:
http://www.gravity.physik.fau.de/events/lqp34/lqp34.shtml
 
  • #6
Thanks for this thread Marcus, looks like I've got some interesting reading to look forward to! I've been working on a series of papers on semiclassical and asymptotic Wigner 6j symbols, the Askey scheme and caustics, so I haven't picked up on Lanery's work as much as I should have, by the sound of it.
 
  • #7
I'm kind of reminded of something said in the book "Approaches to quantum gravity". D. Oriti says:

"I would be careful in distinguishing the "definition of the theory", given a partition function (or its transition amplitudes), and the quantities that, in the theory itself, corresponds to physical observables and are thus answers to physical questions. The partition function itself may be defined, in absence of a better way, through its perturbative expansion in Feynman diagrams, and thus involve an infinite sum that is most likely beyond reach of practical computability, and most likely divergent. However, I do believe that, once we understand the theory better, the answer to physical questions will require finite calculations."

on GFT.
 
  • #8
David Horgan said:
Thanks for this thread Marcus, looks like I've got some interesting reading to look forward to! I've been working on a series of papers on semiclassical and asymptotic Wigner 6j symbols, the Askey scheme and caustics, so I haven't picked up on Lanery's work as much as I should have, by the sound of it.
As often happens, I have to rely to a considerable extent on the verbal account in the introduction and conclusion sections of the Lanery-Thiemann papers. I urge you not to get bogged down in their long, dense mathematical middles but to help me understand in general terms (when you have time to look at the papers) what is going on.

I have two thoughts. One is that although the work is impressive, it is primarily just tying up loose ends in the theory without changing the derivable phenomena.

I think they are shifting focus away from spin networks to more generalized label sets, and this may not be a big change after all (labeling both edges and patches of area), but it is a different labeling scheme and seems to provide a little more freedom, along with the balance they point out between the holonomy and flux variables.
It seems they are setting up a mathematical formalism which is more convenient both for expressing semiclassical states and for showing the correct (classical/continuum) limits.

I don't want to slight the importance of convenience in mathematics, of having a supple, adept language. I'm impressed with what I suspect are the possibilities here. But on the other hand, in light of the recent piece in Nature by Joe Silk and George Ellis, I want to be conscious that there may be a community-wide shift of emphasis towards testability and phenomenology.

Carlo Rovelli emphasized that possible shift back in February 2014 with the "Planck star" initiative. In effect, let's look for QG effects that produce stuff we might be able to observe in the sky.

But wait. It could be that the Lanery-Thiemann work provides a better basis for building standard model matter into Loop spacetime geometry, and this could have observable phenomenological consequences. I'm just casting about trying to understand. When and if you have time and inclination, I hope to learn your view of these developments.
 
  • #9
julian said:
I'm kind of reminded of something said in the book "Approaches to quantum gravity". D. Oriti says:

"I would be careful in distinguishing the "definition of the theory", given a partition function (or its transition amplitudes), and the quantities that, in the theory itself, corresponds to physical observables and are thus answers to physical questions. The partition function itself may be defined, in absence of a better way, through its perturbative expansion in Feynman diagrams, and thus involve an infinite sum that is most likely beyond reach of practical computability, and most likely divergent. However, I do believe that, once we understand the theory better, the answer to physical questions will require finite calculations."

on GFT.

Exactly! Thanks for pointing that out! There is a good piece in this week's Nature by prominent cosmologists Joe Silk and George Ellis that sounds that theme and calls for more testability, deriving more observational predictions, more Quantum Gravity phenomenology, as opposed to pure theory.

I think that shift towards QG phenomenology is going to be an important part of the LQG "Second Act" that I am trying to conceive of here.

Bee Hossenfelder, whose specialty is QG phenomenology, has posted a (fierce) essay amplifying on Silk and Ellis, calling for serious support for QG phenomenology. There are people who do it, and she has helped organize several conferences on empirical exploration of QG effects. I think the effort is due to be ramped up.

Conceivably the Lanery-Thiemann work has relevance to this shift in emphasis, if it can make it more mathematically CONVENIENT to combine standard model matter with Loop QG and also to calculate observable effects. I have yet to get an inkling of what you can do with this "generalized label set" idea that seems to go beyond spin network labeling (it just looks good to me, so I want to find out more about it).
 
  • #10
Trying to grok the first paper...

I could use a cartoon for this notion (p. 2; it seems like an important bit of context):

"More specifically, we will be interested in applications to Loop Quantum
Gravity (LQG, [1, 31]), and to the construction of semi-classical and related states in this context.
There seems indeed to exist serious obstructions [30, 16, 10] to find such states within the AshtekarLewandowski
Hilbert space used in LQG [2], arising from the intrinsic asymmetry in the role played
by the configuration and momentum variables (ie. the holonomies and fluxes, see eg. the discussion
in [5]). This asymmetry can be traced back to the fact that the formalism is build on a vacuum which
is an eigenstate of the flux observables (thus having maximal uncertainties in the holonomies). The
states are then obtained as discrete excitations around this vacuum. The trouble is that, no matter
how many discrete excitations are piled up on top of the vacuum, this will never be sufficient to
mask this initial bias."

Is there a metaphor that captures the relationship between holonomies and fluxes, configuration and momentum? Are we just talking about the "state" (or "position") and "momentum" attributes of a system, where the reciprocal/inverse (whatever) uncertainty relation is just the one implied by Heisenberg's Uncertainty Principle? And/or is the "maximal uncertainties in the holonomies" just the condition of Schroedinger's cat before we check on him?

Also, can anyone save me having to research why the whole question of vacuum QM starts with an "infinite dimensional" theory? Is that just because we don't want to add the 3+1 assumption a priori, if we can avoid it? In other words, it's trying to derive quantum vacuum space-time from a maximally general algorithm of set algebra. I thought that I might be misunderstanding, and that the infinite dimensionality of the vacuum QM field just means that any real quantum is a thing that pops into reality with attributes valued by the specific coordinate of whatever the quantum vacuum "space-time" field is. But then this seems to imply that the field is a labeling system (and all quanta are unique), not an invariant tensor... so that didn't seem right, or at least seemed a lot different.

Also, is the notion of "coarse-graining" the higher-dimensional Hilbert spaces into lower-dimensional ones the same thing as "renormalization" (at least for purposes of basic understanding)?

Usually Wikipedia is pretty awesome, but I'm having a hard time finding a good description of what "projective" means here. I have a cartoon of two different-sized squares, one of which is a magnification of the other, but that seems overly weak.

Hey Marcus, I got my copy of the Smolin-Unger book. So far so good, but I'm only just into the Unger section. Interesting, but it cracks me up how philosophers sometimes write... You could delete three out of every ten words and end up with something easier to follow.
 
  • #11
Jimster, several of the things you are wondering about, as I understand it, have to do with a very interesting topic: something that comes up in topology and some other branches of mathematics but has not yet been utilized much in physics, AFAIK. At least it has not percolated down to ordinary everyday physics---they may use this in algebraic QFT. It's basically simple enough, but unfamiliar. What I am talking about is the so-called "direct limit" and "inverse limit", or in the alternative wording, the "inductive limit" and "projective limit".

You know that in ordinary calculus-level math you take limits of SEQUENCES OF NUMBERS. But in topology and some types of algebra it is defined how to take limits of collections of other types of objects, like topological spaces or groups. It is very simple how they do it: what you need are MAPPINGS taking one object onwards to the next one, on down the line. You can read about this in Wikipedia under "direct limit" and "inverse limit", so I will just give a very rough overview. In the direct or inductive case, the mappings go forwards towards the limit, and the limit consists basically of the union of all the little pieces, where you identify elements that the mappings say are equivalent.

In the inverse limit or projective limit case, the mappings all go BACKWARDS, away from the limit and out into the crowd of little objects. This time the limit set is constructed by taking all possible chains of elements: it consists of all backwards sequences of elements in which you go back via the mappings (the "projections") down to smaller and smaller objects.

Admittedly it sounds unfamiliar but it is basically not all that complicated. It's basically a way of MERGING an ordered collection of small items into one large item of the same kind.

Like you have a paper cut-out of a human silhouette divided into a lot of pieces, and mappings that order the pieces: fingers map into hands, hands map into lower arms, lower arms map into upper arms, toes are part of feet (the mappings say), and feet are part of legs. So, guided by the mappings of smaller parts onto larger, you can reconstruct the silhouette of the whole figure. There's quite a bit of duplication, but you just lay duplicated parts on top of each other, wherever the mappings tell you to equate them. That would be the DIRECT limit case. Maybe it should be called "construction" rather than "limit": direct or inductive construction.

And the inverse or projective limit goes in the opposite direction: the mappings say that hands project down to fingers, arms project down to hands...
You take all descending sequences of elements, ordered by the mappings, and... (have to go, back later).

Looking at the Wikipedia articles might be actually better than what I've written here, ...

http://en.wikipedia.org/wiki/Direct_limit
http://en.wikipedia.org/wiki/Inverse_limit

Having all the pieces, and the projections showing what is part of what, enables you to FINESSE the whole figure, even though you were not given the whole figure to begin with. You express it as a set of interconnected parts----the Wikipedia article gives the set-theoretical expression for the inverse limit.
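
If a concrete toy helps, here is a little Python sketch of an inverse (projective) limit that I made up myself (nothing LQG-specific, it is just the 2-adic integers): the small objects are the finite rings Z/2, Z/4, Z/8, ..., the projection from Z/2^(k+1) down to Z/2^k is "reduce mod 2^k", and a point of the limit is a whole backwards-compatible chain.

def is_compatible(chain):
    # chain[k] is meant to live in Z/2^(k+1); compatibility says that projecting
    # chain[k] down one level (reducing it mod 2^k) reproduces chain[k-1]
    return all(chain[k] % (2 ** k) == chain[k - 1] for k in range(1, len(chain)))

# Any ordinary integer gives a compatible chain by taking its residues mod 2, 4, 8, ...
chain_from_13 = [13 % (2 ** k) for k in range(1, 6)]
print(chain_from_13)                  # [1, 1, 5, 13, 13]
print(is_compatible(chain_from_13))   # True

# A chain the projections reject: 5 mod 4 = 1, which disagrees with the 3 below it
print(is_compatible([1, 3, 5]))       # False

The full limit would be all the infinite compatible chains. The point is just that the "big" object is nothing but the collection of mutually consistent small ones---the same way the projective state space is nothing but a collection of mutually consistent density matrices.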

In Loop gravity, Abhay Ashtekar and Jerzy Lewandowski famously expressed the quantum state space of LQG as an inductive limit. In 2006 or so Andrzej Okolow gave reasons why it might be better to construct it as a projective limit. This could be a breakthrough; it is very interesting to consider how this one simple change could get rid of a lot of problems, and that is basically the reason for my interest. I will get a link to the Okolow article; he explains his reasoning in the introduction.
Maybe I can just quote that short part and save people the trouble of looking at the whole article.
 
Last edited:
  • #12
Here is Okolow's 2006 paper (it is the line of work he started here that Suzanne Lanery and Thomas Thiemann are building on):
http://arxiv.org/abs/gr-qc/0605138
==quote Okolow page 2==
"...Such an idea was presented by Jerzy Kijowski in his review [12] on this author’s Ph.D. thesis and was originally applied [1] more than 25 years ago as an element of a procedure of quantization of a field theory...
... Before we will describe the idea in details let us say briefly that it consists in using projective techniques to build the space of quantum states instead of inductive ones which are used in the compact case.

The goal of this paper is to apply the Kijowski’s proposal to quantization of a diffeomorphism invariant theory of connections with a non-compact structure group. However, because of the lack of any experience with quantization of theories of this sort we do not dare to apply it to GR right now. Instead we will find a very simple ’toy example’ of such a theory and will quantize it combining the Kijowski’s idea with the standard methods of LQG..."
==endquote==
So it seems Jerzy Kijowski was a reviewer of Okolow's PhD thesis. The idea has an odd history, and all this time the Ashtekar-Lewandowski inductive limit construction has been the standard LQG state space. There is even a uniqueness proof for it, given certain assumptions.
 
  • #13
Helpful, Marcus. I will drill into the wiki. I'm a bit confused, though, because of the following:

I read a neat book called "Why Stock Markets Crash" by a geophysicist named Didier Sornette

https://www.amazon.com/dp/0691118507/?tag=pfamazon01-20.

The striking thesis (at least as I understood it) is that markets crash due to the propagation of "pessimistic agreement", or coherent herd-panic states, up through the hierarchy of investing agents, from the little guy to the major market players. As I remember it, he described the way the vast collection of all the small groups of interacting (communicating) investors gets re-normalized as input to the next level of the hierarchy (the brokerage houses). The brokerage houses then display the same "discrete scale invariant" herd-like behavior: either there is generally stable disorder (everyone has their own opinion and is looking out for their own interests in a stable, converging market) or there is shared panic. I think he made a lot of money building a model that could detect emerging panic as log-periodic (discrete scale invariant) patterns in price-index fluctuations.

When I google "re-normalization" I get wiki references to renormalization in QFT, but I can't find anything quite so general as that picture.
I like the body metaphor. My "I think I'm smart" voice is saying that the body is literally what you describe: a deeply layered, log-periodic structure manifesting discrete scale invariance, rather than just a metaphor. That said, I had been picturing the diagrams from that book that described renormalization as a high-resolution screen of black and white pixels getting down-sampled over and over into coarser and coarser screens, by averaging some n adjacent pixels into one representative black or white pixel---an example which I think qualifies as a direct-limit case, or "inductive construction" as you describe it. Or am I missing it big?
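
Something like this is what I'm picturing---a toy sketch of my own, not taken from the book or the papers, just 2x2 block averaging on a 0/1 pixel screen (assuming a square screen with even side length):

import numpy as np

def coarse_grain(img):
    # average each 2x2 block of the 0/1 image, then threshold back to black/white
    n = img.shape[0] // 2
    blocks = img.reshape(n, 2, n, 2).mean(axis=(1, 3))
    return (blocks >= 0.5).astype(int)

rng = np.random.default_rng(0)
fine = rng.integers(0, 2, size=(8, 8))   # the finest-grained "screen"
coarser = coarse_grain(fine)             # 4x4
coarsest = coarse_grain(coarser)         # 2x2
print(fine, coarser, coarsest, sep="\n\n")   # information is lost at every step

Whether that down-sampling chain is best thought of as the inductive direction or the projective one is exactly what I'm trying to get straight.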

With respect to the paper, I was interpreting the reduction of dimensionality in the direct-limit process up from the vacuum as being akin to the loss of information due to averaging in the screen-pixel case.

I'm still baffled by the notion of starting with "infinite dimensionality".

Just saw your last post. I will try to grok more. I'm even more confused now if they are trying to map down from the whole to the finest-grained constituents; I was picturing it the other way around.
 

1. What is Loop Act Two: projective LQG?

"Loop Act Two" refers to a possible new phase in the formulation of Loop Quantum Gravity (LQG), based on the projective approach developed by Kijowski, Okolow, and now Lanery and Thiemann. Instead of building the state space on one big Hilbert space, quantum states are described as projective families of density matrices over a collection of smaller, simpler Hilbert spaces linked by coarse-graining maps.

2. How does projective LQG differ from the traditional formulation?

The traditional Ashtekar-Lewandowski state space is constructed as an inductive limit, built on a vacuum that is an eigenstate of the flux observables, which introduces an intrinsic asymmetry between the holonomy and flux variables. The projective approach uses a projective (inverse) limit of density-matrix spaces instead; it can be viewed as a natural extension of the space of density matrices over the Ashtekar-Lewandowski Hilbert space and treats configuration and momentum variables on a more equal footing.

3. What are the main challenges in developing projective LQG?

The main challenges discussed in the thread are constructing satisfactory semi-classical (coherent) states beyond the fixed-graph level, handling the dynamics in the new state space, and eventually deriving loop quantum cosmology (LQC) from full LQG.

4. How does projective LQG relate to experimental observations?

There is currently no direct experimental evidence for LQG in any formulation. The hope expressed in the thread is that a more convenient formalism could make it easier to couple standard model matter to quantum geometry and to calculate potentially observable quantum-gravity effects.

5. What are the potential implications of projective LQG?

If the projective state space proves more workable, it could provide better semi-classical states, a more balanced treatment of the holonomy and flux variables, and a cleaner bridge between full LQG, loop quantum cosmology, and quantum-gravity phenomenology.
