Prospects of spin foam formalism in loop quantum gravity [Alexandrov, Roche]

In summary: the thread discusses chapter 3 of Alexandrov and Roche's review, which examines the shortcomings of spin foam models in relation to other approaches to the quantization of general relativity. The authors introduce two models, [21] (EPRL) and [20] (FK), which are based on the same idea but differ in the way the simplicity constraints are imposed, and they argue that the strategy "first quantize, then constrain" underlying both models is inconsistent with Dirac's quantization rules.
  • #1
tom.stoer
I would like to continue discussing SF (i.e. PI) models of LQG based on chapter 3 from

http://arxiv.org/abs/1009.4475
Critical Overview of Loops and Foams
Authors: Sergei Alexandrov, Philippe Roche
(Submitted on 22 Sep 2010)
Abstract: This is a review of the present status of loop and spin foam approaches to quantization of four-dimensional general relativity. It aims at raising various issues which seem to challenge some of the methods and the results often taken as granted in these domains. A particular emphasis is given to the issue of diffeomorphism and local Lorentz symmetries at the quantum level and to the discussion of new spin foam models. We also describe modifications of these two approaches which may overcome their problems and speculate on other promising research directions.

In chapter 3 Alexandrov and Roche discuss spin foam models, which may suffer from related issues that show up in a different form but can be traced back to a common origin (secondary second class constraints, departure from Dirac's quantization scheme, ...).

In a nutshell, LQG is supposed to give a Hamiltonian picture of quantum gravity based on the use of specific variables (connections), whereas spin foam models are a certain type of discretized path integral approach to the quantization. A priori these are different approaches using different methods and leading to different results. Of course, in the best case their predictions should coincide and they should be just equivalent quantizations. But at present such an agreement has not been achieved.

First, Alexandrov and Roche recall the general strategy on which the construction of SF models is based:

Most of the constructions of SF models of 4-dimensional general relativity heavily rely on the Plebanski formulation and translate the classical relation between BF theory and gravity directly to the quantum level. In other words they all employ the following strategy:
1. discretize the classical theory putting it on a simplicial complex;
2. quantize the topological BF part of the discretized theory;
3. impose the simplicity constraints at the quantum level.
Thus, instead of quantizing the complicated system obtained after imposing the constraints, they first quantize and then constrain. This strategy is behind all the progress achieved in the construction of 4-dimensional SF models. However, at the same time, this is a very dangerous strategy and, as we believe, it is the reason why most of these models cannot be satisfactory models of quantum gravity. As we will show, it is inconsistent with the Dirac rules of quantization and is somewhat misleading.
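For orientation, step 2 produces, for a compact gauge group G on a simplicial complex, a state sum of the schematic form (this is the generic BF partition function behind the paper's (3.12), not its exact expression):

Z_{BF} \;=\; \sum_{\{\rho_f\}} \sum_{\{\iota_e\}} \;\prod_{f} \dim\rho_f \;\prod_{v} A_v\big(\rho_f, \iota_e\big),

with irreducible representations \rho_f on triangles f, intertwiners \iota_e on tetrahedra e, and a vertex amplitude A_v (a {15j}-like symbol) for each 4-simplex v. Step 3 then restricts the labels \rho_f and \iota_e to the subset selected by the quantum simplicity constraints.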


They introduce the models [21] (EPRL) and [20] (FK)

Although the models of [21] (EPRL) and [20] (FK) are in general different from each other and obtained using different ideas, they have several common inputs. First, they both rely on the idea allowing to effectively linearize the simplicity constraints

They discuss the BF model, why it fails to provide a quantization of GR, and how BF serves as a basis for the new models; they show that some shortcomings of the BF-based construction are not resolved in [21] (EPRL) and [20] (FK)

But in fact (3.19) is stronger because it excludes the topological sector of Plebanski formulation. Thus, the linearization solves simultaneously the problem of the BC model that it does not distinguish between the gravitational and the topological sectors. The new constraint leads directly to the sector we are interested in. ... Second, both models suggest to quantize an extension of Plebanski formulation which includes the Immirzi parameter. This results in crucial deviations from the results of the BC model already at the level of imposing the diagonal simplicity constraint.
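Schematically (the standard form of the constraints, not the paper's exact equations (3.19)/(3.20)): the quadratic Plebanski simplicity constraints on the bivectors B_f of a tetrahedron t,

\epsilon_{IJKL}\, B_f^{IJ}\, B_{f'}^{KL} \;\approx\; 0 \qquad (f, f' \subset t),

admit both the gravitational and the topological sector, whereas the linear version used in the new models demands the existence of a normal n_I(t) with

n_I(t)\,(\star B_f)^{IJ} \;\approx\; 0 \qquad \text{for all } f \subset t

(or n_I B_f^{IJ} \approx 0, depending on whether B or its dual is identified with \star(e\wedge e); conventions differ), which selects the gravitational sector only. The \gamma-dependent identification of the bivectors with the gauge generators is the paper's (3.20).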

The construction of [21] (EPRL) and [20] (FK) contains some 'quantization ambiguities', as expected from chapter 2.

Then, as has been noted already in [131], the constraint (3.21) does not have solutions except some trivial ones. However, appealing to the ordering ambiguity, the authors of the model [21] adjusted the operator in (3.21) so that the constraint does have solutions.

In the following they stress deviations from well-established standard quantization procedures in the way BF theory is quantized and modified to arrive at the new models:

The main suggestion of this model, [EPRL] which distinguishes it from the BC model and was first realized in [128], is that the simplicity constraints should be imposed only in a weak sense that is instead of imposing the constraints on the allowed states [annihilating states] one only requires [vanishing of their expectation values sandwiched between physical states!] This is justified by noting that after identification of the bivectors Bf with generators of the gauge group or a combination thereof (3.20), the simplicity constraints become non-commutative and imposing them strongly leads to inconsistencies, as is well known for any second class constraints. This does not concern the diagonal simplicity constraint which lies in the center of the constraint algebra and therefore can still be imposed strongly leading to the restriction (3.21) on the allowed representations
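Schematically, the distinction being quoted here (standard notation, not the paper's):

\text{strong imposition:}\quad \hat C\,|\psi\rangle = 0 \quad \text{for all physical } |\psi\rangle,
\qquad
\text{weak imposition:}\quad \langle\psi|\,\hat C\,|\psi'\rangle = 0 \quad \text{for all physical } |\psi\rangle, |\psi'\rangle .

The diagonal simplicity constraint, imposed strongly, yields the restriction (3.21) on the representation labels; in the Euclidean EPRL model this is often quoted (for 0 < \gamma < 1, in one common convention) as j^\pm = \tfrac{1\pm\gamma}{2}\, j, tying the Spin(4) labels (j^+, j^-) to a single SU(2) spin j.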

[in FK] one starts again from the partition function for BF theory (3.12) where the simplicity constraints should be implemented as restrictions on the representation labels. However, before doing that one makes a refinement of the decomposition (3.12) using the coherent state techniques developed in [19] Here we concentrate on the Euclidean case. Although the Lorentzian case was also considered in [20], the corresponding construction is much more complicated and even the Immirzi parameter has not been incorporated in it so far.
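The refinement mentioned here uses the standard SU(2) coherent-state resolution of identity (the generic formula behind the constructions of [19], written for the Euclidean case; my transcription, not the paper's equation):

\mathbb{1}_{j} \;=\; (2j+1) \int_{S^2} \frac{d^2 n}{4\pi}\; |j, n\rangle \langle j, n| , \qquad |j, n\rangle = g(n)\,|j, j\rangle .

Inserting such a decomposition for the faces of each tetrahedron rewrites the BF amplitude as an integral over coherent-state labels (the normals n), on which the simplicity constraints can then be imposed as restrictions.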


They doubt that the SFs and canonical LQG agree even at the level of the kinematical Hilbert space; the 'proofs' are mostly based on unphysical, i.e. non-gauge-invariant, quantities which are subject to quantization anomalies.

Due to this fact it was claimed that the boundary states of the new models are the ordinary SU(2) spin networks [128] and it is now widely believed that there is a perfect agreement between the new SF models and LQG at the kinematical level [10]. However, it is easy to see that this is just not true. First of all, the states induced on the boundary of a spin foam are not the ordinary spin networks, but projected ones considered in section 2.2.2

However, on one hand, there is no any fundamental reason to perform such a projection. And on the other hand, this relation shows that the kinematical states of LQG and the boundary states of the EPRL model are indeed physically different and the agreement between their labels is purely formal. The claimed agreement is often justified by comparison of the spectra of geometric operators, area [21] and volume [138]. By appropriately adjusting the ordering, the spectra in spin foams and LQG can be made coinciding. However, the operators, which are actually evaluated in these papers, are not the standard ones, but shifted by constraints.

But in the EPRL approach the boundary states are supposed to be integrated over these normals so that the operator corresponding to (3.39) is simply not defined! On top of that, even if one drops the integration over xt, as we argue below, and gets a well defined operator on a modified state space, we see that the quantization of the geometric operators is not unique. To get the coincidence with LQG requires ad hoc choice of the ordering and of the classical expression to be quantized.


After their discussion of EPRL and FK they focus on the imposition of constraints. To understand these issues one has to be familiar with the Dirac quantization procedure (second class constraints, Dirac brackets instead of Poisson brackets)!

All models presented in the previous section have been derived following the strategy of section 3.1.2: first quantize and then constrain. Now we want to reconsider the resulting constructions taking lessons from the canonical approach. As we showed in the previous section, the spin foam quantization originates in Plebanski formulation of general relativity. The canonical analysis of this formulation has been carried out in [139, 140, 141] and turns out to be essentially equivalent to the Lorentz covariant canonical formulation of the Hilbert–Palatini action [18] once $\varepsilon^{ijk}B_{jk}$ is identified with $\tilde P^{i}$. The Immirzi parameter is also easily included and appears in the same way. Thus, the canonical structure to be quantized can be borrowed from section 2.2.1. In particular, the role of the simplicity constraints is played by the constraints (2.30).

In section 3.4.1 Alexandrov and Roche show, based on a toy model, how the two quantization strategies
1) a la Dirac and
2) 'first quantize using Poisson brackets, then constrain'
may lead to models which look equivalent at first sight but are definitely inequivalent if one carefully inspects the details.

I do not go into the details here, but I expect that everybody aiming to understand Alexandrov's reasoning will carefully follow his arguments and will understand in detail Dirac's constraint quantization procedure! The analysis of secondary second class constraints modifying the symplectic structure on phase space, i.e. replacing Poisson with Dirac brackets, is key to the whole chapter 3!
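For reference, this is the standard Dirac bracket (textbook formula, not specific to the paper): given second class constraints \varphi_a whose Poisson-bracket matrix C_{ab} = \{\varphi_a, \varphi_b\} is invertible,

\{f, g\}_D \;=\; \{f, g\} \;-\; \{f, \varphi_a\}\,(C^{-1})^{ab}\,\{\varphi_b, g\} .

With this bracket every \varphi_a has vanishing bracket with everything, so the second class constraints can be imposed strongly; quantizing the Poisson bracket instead of the Dirac bracket is exactly the step Alexandrov and Roche criticize.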

They compare the two quantization strategies in the toy model:

Comparing the results of the two approaches, one observes a drastic discrepancy: γ is either quantized or not. In the former approach for a non-rational γ the quantization simply does not exist, whereas there are no any obstructions in the latter ... Taking into account that the second approach represents actually a result of several possible methods, which all follow the standard quantization rules, it is clear that it is the second quantization that is more favorable. The quantization of γ does not seem to have any physical reason behind itself. In fact, it is easy to trace out where a mistake has been done in the first approach: it takes too seriously the symplectic structure given by the Poisson brackets, whereas it is the Dirac bracket that describes the symplectic structure which has a physical relevance. It is easy to see that this leads to inconsistency of the first quantization. For example, the Hamiltonian H, ... is simply not defined on the subspace spanned by linear combinations of (3.54) ...

The next topic is relevant as a response to Rovelli's "we don't need a Hamiltonian"

However, one can take a “minimalistic” point of view and do not require the existence of a well defined Hamiltonian on the constrained state space. (We thank Carlo Rovelli for discussion of this possibility.) After all, spin foam models are designed to compute transition amplitudes. Therefore, we are really interested not in the Hamiltonian itself, but in its matrix elements and the latter can be defined by using the Hamiltonian and the scalar product on the original unconstrained space

However, this expectation turns out to be wrong. As is clear from the derivations in [20, 132] and has been explicitly demonstrated in a simple cosmological model [142], the vertex amplitude actually appears as a matrix element of the evolution operator. This requires to consider expectation values of higher powers of the Hamiltonian for which the property (3.62) does not hold anymore. This leads to deviations of results obtained by the spin foam strategy from those which are based on the well grounded canonical quantization

Let us summarize what we learned studying the simple model (3.42):
• The strategy based on “first quantize, then constrain” leads to a canonical quantization which is internally inconsistent as the Hamiltonian operator is ill-defined on the constrained state space.
  • The origin of the problem as well as the quantization of the parameter γ can be traced back to the use of the Poisson symplectic structure which does not take into account the presence of the second class constraints.
• Besides, this approach completely ignores the presence of the secondary second class constraint which is crucial for suppressing the fluctuations of non-dynamical variables and producing the right vertex amplitude in discretized theory.
• An attempt to interpret the results of such quantization only as an approach to compute transition amplitudes using (unphysical) Hamiltonian (3.61) does not work as they turn out to be incompatible with the results of the standard (path integral or canonical) quantization. As a result, the transition amplitudes computed in this way do not have any consistent canonical representation.


In our opinion, all these problems are just manifestations of the fact that the rules of the Dirac quantization cannot be avoided. This is the only correct way to proceed leading to a consistent quantum theory.

The example presented above explicitly reveals the main problems of the new SF models and their origin. All these models start from the symplectic structure provided by the simple BF theory, which ignores constraints of general relativity. In particular, they all use the usual identification of the B-field with the generators of the gauge group, or its gamma-dependent version (3.20), when the constraints are translated into quantum level. But this identification does not agree with the symplectic structure of general relativity

Alexandrov and Roche pay attention to the simplicity constraint which is key to the SF formalism and which seems to be the weakest point. It is this constraint which is required to turn BF into GR - and it is this constraint which is quantized in the wrong way!

In fact, a special care which is paid to the diagonal simplicity, when it is imposed strongly whereas the cross simplicity constraints are imposed only weakly, results from another common confusion. As we explained in section 3.3.1, this is done because the diagonal simplicity is in the center of the non-commutative constraint algebra of all simplicity constraints and thus interpreted as first class. But this classification would be correct only if there were no other constraints to be considered. It completely ignores the presence of the secondary constraints. The latter do not commute with all simplicity and in particular with the diagonal simplicity. As a result, all these constraints are second class and should be quantized via the Dirac bracket. Given all this, we expect that the new SF models suffer from inconsistencies which we met in the previous subsection. They can be summarized by saying that the statistical models defined by the SF amplitudes do not have a consistent canonical quantization picture, where the vertex amplitude appears as a matrix element of an evolution operator determined by a well defined Hamiltonian. In particular, there is no reason to expect that the new models may be in agreement with LQG or any of its modifications. Note that this incompatibility with the canonical quantization manifests itself in the issues involving the Hamiltonian. This is why one does not see it in a semiclassical analysis or in any investigation restricting to the kinematical level.

It should be stressed that this criticism is not just about face or edge amplitudes, which depend on details of the path integral measure but can be found in principle from consistency on the gluing of simplices [135]. In fact, the ignorance of the secondary second class constraints has much more profound implications and, what is the most important, it affects the vertex amplitude (see the next subsection). The standard prescription that the vertex is obtained by evaluating the boundary state of a 4-simplex on a flat connection is a direct consequence of the employed strategy, which starts by quantizing the topological BF theory, and should be modified to take into account all constraints of general relativity.

Of course, the SF models are still well defined as statistical models. But, in our opinion, this is not enough to consider them as candidates for quantum gravity. A good candidate should allow a quantum mechanical representation in terms of wave functions, Hamiltonian, etc., especially if one hopes to find a viable loop quantization of gravity. The point we are making here is that the SF models derived using the strategy “first quantize and then constrain” do not satisfy this requirement.

As I stressed a couple of times, the Lagrangian PI is a derived object which cannot be seen as a fundamental entity. The main problem is the construction of the measure taking into account the second class constraints. It is this step where the SF construction seems to fail up to now; it is this step where some weak points of the BF-based construction show up again.

The SF representation of quantum gravity can be seen as an outcome of a Lagrangian path integral for discretized Plebanski formulation of general relativity. However, the Lagrangian or a configuration space path integral is a derived concept. A more fundamental one is the path integral over the phase space. Its measure can be rigorously derived and in particular it contains δ-functions of all second class constraints. On the other hand, the Lagrangian path integral can be obtained from the canonical one only at certain very special circumstances.

Therefore, ..., if one wants to calculate transition functions as one does in SF models, one must use the canonical path integral. The main consequence of this conclusion is that, as we mentioned above, the secondary second class constraints should appear explicitly in the integration measure. We believe that this is an important point missed by the present-day SF models.
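For orientation, the canonical (phase-space) path integral with constraints has the schematic Faddeev–Senjanovic form (a generic textbook expression, not the paper's):

Z \;=\; \int \mathcal{D}q\,\mathcal{D}p\; \prod_{\alpha}\delta(G_\alpha)\,\delta(\chi_\alpha)\,\big|\det\{G_\alpha, \chi_\beta\}\big| \;\prod_{a}\delta(\varphi_a)\,\big|\det\{\varphi_a, \varphi_b\}\big|^{1/2}\; \exp\!\Big(i\!\int\! dt\,\big(p\,\dot q - H\big)\Big),

with G_\alpha the first class constraints, \chi_\alpha gauge-fixing conditions and \varphi_a the second class constraints. The claim in the text is that the factors \delta(\varphi_a)\,|\det\{\varphi_a,\varphi_b\}|^{1/2} for the secondary second class constraints are precisely what is missing from the present SF measures.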


Alexandrov and Roche stress that all these defects of the new models may be invisible in the semiclassical sector. That means that reproducing GR in the IR is not sufficient as a test for successful quantization. This is trivial, as there are several inequivalent models having the same IR limit (this applies to any quantum theory, not only to GR)

One might argue that since the secondary constraints appear as stability conditions for the primary ones and the latter are imposed in the path integral at every moment of time, the secondary constraints should follow automatically and need not to be imposed explicitly. For example, in SF models based on Plebanski formulation one could expect that all set of simplicity constraints ensures the simplicity of bi-vectors at all times and thus it is enough. However, this argument works only at the quasiclassical level where the equations of motion are satisfied. Off-shell the quantum fluctuations of degrees of freedom fixed classically by the secondary constraints are not suppressed if the constraints are not inserted in the path integral.

It is also not seen at the quasiclassical level since the missing constraint is obtained on mass shell anyway. Therefore, it is not in contradiction with the fact that the semiclassical asymptotics of the EPRL and FK amplitudes reproduce the Regge action [147, 148, 149], i.e., the correct classical limit. The problem is that the secondary constraints are not imposed strongly at the quantum level and as a result one might expect the appearance of additional quantum degrees of freedom in the new models
 
  • #2
Alexandrov and Roche conclude that all these shortcomings are related to the previously discussed problems of the canonical approach. Therefore using the weaker PI formalism and focusing on the semiclassical limit does not help. It's the construction of the theory, not the derivation of its limiting cases and physical results, that has to be re-investigated.

Summarizing our analysis of the spin foam models, we see that already the proper consideration of the diagonal simplicity constraint shows that all attempts to include the Immirzi parameter following the usual strategy lead to inconsistent quantizations. All problems of the EPRL and FK models can be traced back to that they quantize the symplectic structure of the unconstrained BF theory and impose (a half of) the second class constraints after quantization. So we conclude that the widely used strategy “first quantize and then constrain” does not work and should be abandoned. ... However, a concrete implementation of the secondary constraints both in the boundary states and in the vertex amplitude (3.73) remains problematic. This is precisely the same problem which plagues the canonical approach. In particular, it is closely related with the non-commutativity of the connection. Thus, although it is possible to obtain a picture similar to the loop quantization, the goal, which is to provide a credible spin foam model and, in particular, a vertex amplitude correctly taking into account all constraints of general relativity, is far from being accomplished.

So much for today ...
 
  • #3
The thing is, with the spin-foam transition amplitude (at least as defined in Rovelli's Zakopane lectures) on a *given graph*, there are no ambiguities because everything is finite --- it's just a summation (though there is an infinite summation over representations of SU(2), but I think that's fine essentially because it's just a Fourier transform of an L^2 norm). So there's nowhere for things to hide. More subtle to me is whether things stay good in the limit of refinement, but I think that's precisely the question which marcus' latest darling in FGZ is begging at the end --- it's just as open a question whether such a refinement limit is well-defined for the classical theory.

I think one reason I don't buy the need for constrained quantization to really work out is because that theory, although brilliantly worked out by Dirac, was fundamentally about systems with a decent choice of time, and therefore time evolution. It's not clear to me that's true at all for GR --- just because it admits ADM does not mean that quantum mechanically one can still do it --- the limits might not commute! If that is the case, then the canonical approach is actually doomed to fail. (Then again, perhaps my canonical (classical) knowledge is just poor, and there's actually some very elegant higher-category-theoretic reason that it must work...)
 
  • #4
The "problem of time" in the hamiltonian formalism shows up in any system where you have general covariance. This is not just restricted to gravity at all! In fact, you can make any system generally covariant by suitably parametrizing it, and the same problem will be manifest.
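A standard illustration of parametrizing a system (textbook example, not from the thread): promote the time of a nonrelativistic particle to a dynamical variable t(\tau) with conjugate momentum p_t. The action

S \;=\; \int d\tau\,\Big( p\,\dot q + p_t\,\dot t - N\,\big(p_t + H(q,p)\big) \Big)

is invariant under reparametrizations of \tau, the Hamiltonian is a pure constraint C = p_t + H \approx 0, and one faces the same "problem of time" as in GR. Deparametrizing means solving C for p_t, which is trivial here but not for gravity.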

The difficult issue is in 'deparametrizing' the theory, as well as finding the reduced phase space.

In general for a complicated enough Hamiltonian with suitable choices of canonical variables, creating the reduced phase space is technically daunting!

So the problem isn't with the procedure per se, it's with actually solving it!
 
  • #5
The ambiguities arise in the definition of spin networks (coloring, vertices, ...). It is by no means clear that there is one unique definition. What the paper says is that the ambiguities in the connection show up as ambiguities in the definition of the spin network, i.e. that there is a class of spin networks instead of simply one.

In addition it is not clear how to define the measure. The SF is something like a Lagrangian PI and is therefore a derived object. The fundamental object is the canonical PI - which is not known due to the ambiguities in the Hamiltonian. Therefore the measure is not known.

What Alexandrov says is that during the construction of the PI there are several choices how to proceed. Looking at one specific result there is no obviously visible ambiguity. But this is due to restrictions and choices during the construction.
 
  • #6
genneth said:
I think one reason I don't buy the need for constrained quantization to really work out is because that theory, although brilliantly worked out by Dirac, was fundamentally about systems with a decent choice of time, and therefore time evolution. It's not clear to me that's true at all for GR --- just because it admits ADM does not mean that quantum mechanically one can still do it --- the limits might not commute!
You can construct simple models where not using Dirac's approach simply fails classically. The Poisson structure generates the wrong time evolution, i.e. it does not respect the second class constraints.

But there is no good reason why an approach that fails classically should work in quantum theory.
 
  • #7
tom.stoer said:
You can construct simple models where not using Dirac's approach simply fails classically. The Poisson structure generates the wrong time evolution, i.e. it does not respect the second class constraints.

But there is no good reason why an approach that fails classically should work in quantum theory.

What does wrong time evolution and not respecting second class constraints mean? Does it mean an inconsistent quantum theory, or one that does not have the right classical limit?
 
  • #8
Not respecting a constraint means that there is a constraint C = 0, but the time evolution generated by the Hamiltonian, dC/dt = {C, H}, fails to preserve this (or some other) constraint.
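In Dirac's terminology (standard textbook material, nothing specific to this thread): each constraint \varphi \approx 0 must be preserved by the time evolution generated by the total Hamiltonian,

\dot{\varphi} \;=\; \{\varphi, H_T\} \;\approx\; 0 .

If this condition does not close on the constraints already present, it generates secondary constraints; if the full set has non-degenerate mutual Poisson brackets, the constraints are second class and the symplectic structure has to be corrected (Dirac bracket) before quantization.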

Assume you have a particle living on a sphere S². Instead of using spherical coordinates you start with Cartesian coordinates in R³ plus a constraint r² = const. Not respecting the constraints would mean that quantum mechanically the particle could fluctuate off the sphere, violating r² = const. This is definitely a different quantum system! It's not necessarily an inconsistent one, but it's different.
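A minimal sketch of this example in Python (sympy), computing the Dirac brackets for the sphere constraints; the constraint pair and the result are the standard textbook ones, the code itself is just my own illustration:

import sympy as sp

# Particle in R^3, confined to the sphere r^2 = R^2.
x = sp.symbols('x1 x2 x3', real=True)
p = sp.symbols('p1 p2 p3', real=True)
R = sp.symbols('R', positive=True)

def pb(f, g):
    # Canonical Poisson bracket {f, g} on the phase space (x, p)
    return sum(sp.diff(f, x[i]) * sp.diff(g, p[i])
               - sp.diff(f, p[i]) * sp.diff(g, x[i]) for i in range(3))

# Second class pair: primary constraint and its stability condition
phi1 = sum(xi**2 for xi in x) - R**2          # r^2 - R^2 = 0
phi2 = sum(x[i] * p[i] for i in range(3))     # x.p = 0  (from d(phi1)/dt = 0)

phis = [phi1, phi2]
C = sp.Matrix(2, 2, lambda a, b: pb(phis[a], phis[b]))   # {phi_a, phi_b}
Cinv = C.inv()                                           # invertible => second class

def dirac(f, g):
    # Dirac bracket {f, g}_D = {f, g} - {f, phi_a} (C^-1)_ab {phi_b, g}
    corr = sum(pb(f, phis[a]) * Cinv[a, b] * pb(phis[b], g)
               for a in range(2) for b in range(2))
    return sp.simplify(pb(f, g) - corr)

# {x_i, p_j}_D = delta_ij - x_i*x_j/r^2  (instead of the naive delta_ij)
for i in range(3):
    print([dirac(x[i], p[j]) for j in range(3)])

The output is {x_i, p_j}_D = δ_ij − x_i x_j / r², which on the constraint surface equals δ_ij − x_i x_j / R²: it is this corrected bracket, not the flat Poisson bracket, that should be promoted to commutators.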

http://en.wikipedia.org/wiki/Second_class_constraints

There is no reason why these problems should be visible in the semiclassical limit. The anomalies could very well be suppressed by some powers of ħ, which means that the different theories have the same semiclassical limit.

There are some differences between first and second class constraints which require different procedures for their implementation. For first class constraints, their Poisson brackets and their time evolution only generate linear combinations of constraints, and therefore the two approaches 'first quantize, then constrain C|phys> = 0' and 'first constrain, then quantize' are equivalent. For second class constraints the first approach does not work, because the Poisson bracket of two constraints is something which cannot be set to zero consistently.

Quantizing first and constraining afterwards would mean that you quantize and then implement C|phys> = 0 for all constraints, i.e. you select a physical Hilbert space from a larger one. But then every expression constructed from the constraints would have to annihilate the physical states as well, in particular their commutators. Looking at the particle on the sphere S² with radius R, the commutator of the two constraints r² - R² and x·p is proportional to r², so one ends up with a condition like r²|phys> = 0, which is nonsense!
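Explicitly, for the sphere (standard argument, my notation): take \hat\varphi_1 = \hat r^{\,2} - R^2 and \hat\varphi_2 = \tfrac{1}{2}(\hat x\cdot\hat p + \hat p\cdot\hat x). Then

[\hat\varphi_1, \hat\varphi_2] \;=\; 2 i \hbar\, \hat r^{\,2},

so demanding \hat\varphi_1|\mathrm{phys}\rangle = \hat\varphi_2|\mathrm{phys}\rangle = 0 forces \hat r^{\,2}|\mathrm{phys}\rangle = 0, which contradicts \hat r^{\,2}|\mathrm{phys}\rangle = R^2|\mathrm{phys}\rangle for R \neq 0. This is the generic obstruction to imposing second class constraints strongly.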
 
  • #9
I would like to summarize some points where Alexandrov and Roche suspect problems

  • the problems of the EPRL and FK models can be traced back to that they quantize the symplectic structure of the unconstrained BF theory [Poisson brackets] and impose (a half of) the second class constraints after quantization.
  • So we conclude that the widely used strategy “first quantize and then constrain” does not work and should be abandoned.
  • As a result, all these constraints are second class and should be quantized via the Dirac bracket.
  • These inconsistencies ... can be summarized by saying that the statistical models defined by the SF amplitudes do not have a consistent canonical quantization picture, where the vertex amplitude appears as a matrix element of an evolution operator determined by a well defined Hamiltonian [obtained in correspondence with Dirac's quantization procedure therefore taking into account second class constraints properly]
  • In fact, the ignorance of the secondary second class constraints ... affects the vertex amplitude [and the measure].
  • Of course, the SF models are still well defined as statistical models. But, in our opinion, this is not enough to consider them as candidates for quantum gravity.

I think the last remark can be seen as a response to genneth's post; within the given context the new SF models seem to be complete and unique, but Alexandrov and Roche question the validity of the approach, not only individual results based on the approach.

The paper is from Sept. 2010, so I guess there should be some response from the community. I will check Thiemann's papers from May 2011 to see whether he addresses these issues, either explicitly or implicitly.
 
  • #10
They are taking it seriously!

Today Benjamin Bahr, Rodolfo Gambini, and Jorge Pullin released a paper on arXiv addressing the issues raised by Alexandrov et al.

http://arxiv.org/abs/1111.1879
Discretisations, constraints and diffeomorphisms in quantum gravity
Authors: Benjamin Bahr, Rodolfo Gambini, Jorge Pullin
(Submitted on 8 Nov 2011)
Abstract: In this review we discuss the interplay between discretization, constraint implementation, and diffeomorphism symmetry in Loop Quantum Gravity and Spin Foam models. To this end we review the Consistent Discretizations approach, which is an application of the master constraint program to construct the physical Hilbert space of the canonical theory, as well as the Perfect Actions approach, which aims at finding a path integral measure with the correct symmetry behavior under diffeomorphisms.

In one section on the canonical approach they say

LQG is a canonical approach, in which the kinematical Hilbert space is well-understood, and the states of which can be written as a generalization of Penrose’s Spin-Network Functions [14]. Although the formalism is inherently continuous, the states carry many discrete features of lattice gauge theories, which rests on the fact that one demands Wilson lines to become observables. The constraints separate into (spatial) diffeomorphism- and Hamiltonian constraints. While the finite action of the spatial diffeomorphisms can be naturally defined as unitary operators, the Hamiltonian constraints exist as quantizations of approximate expressions of the continuum constraints, regularized on an irregular lattice. It is known that the regularized constraints do not close, so that the algebra contains anomalies, whenever the operators are defined on fixed graphs which are not changed by the action of the operators [15]. If defined in a graph-changing way, the commutator is well-defined, even in the limit of the regulator going to zero in a controlled way [16]. However, the choice of operator topologies to choose from in order for the limit to exist is nontrivial [17], and the resulting Hamiltonian operators commute [18]. Since they commute on diffeomorphism-invariant states, the constraint algebra is satisfied in that sense. Furthermore, however, the discretization itself is not unique, and the resulting ambiguities survive the continuum limit [19]. In the light of this, it is non-trivial to check whether the correct physics is encoded in the constraints.


Then they comment on Regge-like discretizations

Another problem arises within any attempt to build a quantum gravity theory based on Regge discretizations: Generically, breaking of gauge symmetries within the path integral measure leads to anomalies, in particular problematic in interacting theories [55]. In quantum theories based on Regge calculus, the path integral will, for a very fine triangulation, contain contributions of lots of almost gauge equivalent discrete metrics, each with nearly the same amplitude. Hence the amplitude will not only contain infinities of the usual field theoretic nature, but also coming from the effective integration over the diffeomorphism gauge orbit. This is in particular a problem for the vertex expansion of the spin foam path integral, as advocated in [56]. The triangulations with many vertices will contribute much more to the sum than the triangulations with few vertices, so that, at every order, the correction terms dominate, rendering the whole sum severely divergent.


Their conclusion is not very positive

On the other side, in chapter 3 we have demonstrated in which sense the diffeomorphism symmetry is broken in the covariant setting, in particular in Regge Calculus. We have described how one might hope to restore diffeo symmetry on the discrete level, by replacing the Regge action with a so-called perfect action, which can be defined by an iterative coarse-graining process. On the quantum side, this process resembles a Wilsonian renormalization group flow, and it has been shown how, with this method, one can construct both the classical discrete action, as well as the quantum mechanical propagator, with the correct implementation of diffeomorphism symmetry, in certain mechanical toy models, as well as for 3d GR. Whether a similar construction works in 4d general relativity is still an open question, being related to issues of locality and renormalizability, which are still largely open in this context.
 

1. What is the spin foam formalism in loop quantum gravity?

The spin foam formalism is a mathematical framework used in loop quantum gravity (LQG) to describe the quantum structure of spacetime. Quantum states of spatial geometry are encoded in spin networks (graphs labeled by group representations), and a spin foam is a two-complex describing a history of such spin networks; the associated amplitudes play the role of a discretized path integral. This formalism provides one route toward the quantization of gravity, which is necessary for a unified theory of quantum mechanics and general relativity.

2. How does the spin foam formalism relate to other approaches in LQG?

The spin foam formalism is one of the main approaches used in LQG, along with the canonical quantization approach. While both approaches aim to quantize gravity, the spin foam formalism is a covariant, path-integral-style formulation that works directly with discretized spacetime histories rather than with a Hamiltonian evolution, and it also allows for the coupling of matter fields. In the best case the two approaches should define equivalent quantum theories; whether they actually do is one of the questions raised in the discussion above.

3. What are the prospects of the spin foam formalism in LQG?

The spin foam formalism has shown promising results in providing a mathematically well-defined framework for computing transition amplitudes in quantum gravity. It has also been successful in recovering key features of general relativity, such as the Regge action in the semiclassical limit and, within the broader LQG framework, the area law for black hole entropy. However, further research and development are still needed to fully understand and test the potential of the spin foam formalism in LQG.

4. What are some challenges in implementing the spin foam formalism?

One of the main challenges in implementing the spin foam formalism is the complexity of the mathematical calculations involved. This can make it difficult to obtain concrete predictions or solutions for specific physical scenarios. Additionally, there are still open questions and debates surrounding the interpretation and physical significance of the spin foam amplitudes, which are essential in this formalism.

5. How can the spin foam formalism be tested or validated?

Currently, there are several proposed ways to test or validate the spin foam formalism, such as using numerical simulations and experiments, and comparing its predictions to observations and data from astrophysical phenomena. Additionally, further theoretical developments and collaborations with other fields, such as quantum information theory, may also help in validating the spin foam formalism and its prospects in LQG.
