
EPRL quantization found to be completely inconsistent.

  1. Apr 16, 2010 #1

    MTd2

    User Avatar
    Gold Member

    http://arxiv.org/abs/1004.2260

    The new vertices and canonical quantization

    Sergei Alexandrov
    (Submitted on 13 Apr 2010)
    We present two results on the recently proposed new spin foam models. First, we show how a (slightly modified) restriction on representations in the EPRL model leads to the appearance of the Ashtekar-Barbero connection, thus bringing this model even closer to LQG. As our second result, we however demonstrate that the quantization leading to the new models is completely inconsistent since it relies on the symplectic structure of the unconstrained BF theory.
     
  3. Apr 16, 2010 #2
    page 12

    "thus the new spin foam models (i.e EPRL) have no chance to describe quantum gravity"


    "the quest for the right model is to be continued"

    Ouch. Guess it's back to the drawing board?

    "the relation between LQG and SF states is not so direct"

    Perhaps it's futile to try to connect LQG to SF; better to develop an SF model with the correct semiclassical limit without regard to LQG kinematical results.
     
  4. Apr 16, 2010 #3

    marcus

    User Avatar
    Science Advisor
    Gold Member
    Dearly Missed

    Rovelli already answered Alexandrov implicitly in his recent paper.
    see page 1

    I doubt you will see any "return to the drawing board" prompted by the Alexandrov result at this point :biggrin:

    (essentially because the new formulation by Bianchi is too interesting.)
     
    Last edited: Apr 16, 2010
  5. Apr 16, 2010 #4

    atyy

    User Avatar
    Science Advisor

    "... with the only exception of the FK model without the Immirzi parameter by reasons to be explained below."
     
  6. Apr 27, 2010 #5

    tom.stoer

    User Avatar
    Science Advisor

    marcus,

    unfortunately this is a different thread, but you mentioned Rovelli's remarks on page 1 of his latest paper.

    Which remarks? I cannot see anything addressing Alexandrov's issues.

    Alexandrov is - as far as I can see - not cooperating with the majority of the LQG guys but working somewhat in isolation. Is this true? He has published over the years a couple of papers, always addressing weaknesses of the mainstream approach. Now he is claiming again that he has identified a serious loophole in the standard SF quantization procedure, namely the wrong treatment of second-class constraints.

    I was working on constraint quantization myself, and my conclusion was always that second-class constraints are a hint to identify other variables in terms of which the constraints become first class (so do not use the variables which generate second-class constraints plus Dirac brackets; instead use different variables with standard Poisson brackets and canonical commutation relations). Is this something the LQG/SF community has already done but we (and Alexandrov) are not aware of? Or did they really miss something?

    If Alexandrov is right this is not simply a "different quantization" or a "non-standard approach" or anything like that. If he is right the SF quantization is simply wrong!

    Can it really be that nobody has recognized that the simplicity constraints - together with other constraints - are second class and must be treated differently? Really nobody? I can't believe it.

    What is strange is that nobody seems to care about his findings. There are no answers from Rovelli, Bianchi, Lewandowski, Thiemann, Ashtekar, ...
     
  7. Apr 27, 2010 #6

    f-h

    User Avatar

    The second-class constraints are treated differently in the EPRL model (via expectation values or Master Constraint type considerations); his complaint, IIRC, is that the symplectic structure is wrong.

    Anyhow, spin foam "quantisation" is not a quantisation in any standard meaning of the word anyway.

    Furthermore, when is a quantisation wrong? Or even inconsistent? What's the criterion? I'm really asking. I'm not an expert on quantisation techniques, but it seems to me that each technique comes with its own "natural" choices. The mathematical rigours of Stone-von Neumann don't apply, nor do the niceties of geometric quantisation; in fact we know there are infinitely many inequivalent quantisations of even the nicest of theories, so what makes this flat out wrong in your opinion?

    And BTW, Alexandrov speaks at conferences frequently, and people do listen and reply. The problem is, he doesn't have a way to do it "better", so in the absence of alternatives we take what we can get.

    Either way, to me the way to motivate spin foams is not from a quantisation procedure at all but from TQFTs, and these often don't have a (known) path integral formulation anyway.
     
  8. Apr 27, 2010 #7

    tom.stoer

    User Avatar
    Science Advisor

    Regarding the quantization a la Dirac:

    Suppose you have an action S and a derived Hamiltonian H which may contain unphysical (gauge) degrees of freedom plus "wrong coordinates". What happens?

    First you have a set of constraints

    C^(1)_n ~ 0 (weakly zero).

    One example is the (missing) conjugate momentum for A0 in electromagnetism.
    You have to require that this constraint is a conserved quantity (that the system stays on the constraint surface in phase space). That means you have to calculate

    [H, C^(1)_n]

    and require ~ 0 again.

    If this commutator is exactly zero everything is fine. If it is a linear combination of constraints, that means

    [H, C^(1)_n] = a_n^k C^(1)_k

    it's fine, too. But sometimes you get something new, a new constraint for which ~ 0 is a non-trivial condition. An example is the Gauss law, which is generated by the above mentioned constraint.

    That means that you have generated a new constraint

    [H, C^(1)_n] = C^(2)_n

    You call this a secondary constraint.

    You can continue until (hopefully) the procedure terminates with either = 0 or ~ 0 automatically. In electromagnetism the Gauss law does not generate a new constraint, so you have essentially two, namely

    "canonical conjugate momentum for A0" ~ 0
    "Gauss law" ~ 0

    These two constraints are "pure gauge"; they essentially kill two polarizations of the photon which gives 4-2 = 2 physical ones.
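
    Written out explicitly (a standard textbook computation, added here for concreteness), the electromagnetic chain is

    \begin{aligned}
    C^{(1)} &= \pi^0 \approx 0 && \text{(primary: no momentum conjugate to } A_0) \\
    \dot C^{(1)} &= [H, C^{(1)}] = \partial_i \pi^i =: G \approx 0 && \text{(secondary: Gauss law)} \\
    \dot G &= [H, G] = 0 && \text{(the chain terminates)}
    \end{aligned}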

    Up to now we talked about so called first class constraints. That means they form a (weakly) closed algebra:

    [C_m, C_n] = a_mn^k C_k ~ 0

    The Gauss law in non-abelian gauge theories is a good example. It is nothing else but a "local" su(N) algebra.
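
    In formulas (the standard result, added for clarity): the Gauss constraints G_a(x) of an su(N) gauge theory satisfy

    [G_a(x), G_b(y)] = f_{ab}{}^{c} \, G_c(x) \, \delta^3(x - y) \approx 0 ,

    i.e. a local copy of the su(N) algebra at every point, closing weakly as a first-class system should.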

    The basic rule is that for first-class constraints
    - the algebra of constraints closes weakly
    - they are implemented as constraints on the physical states C|phys> = 0
    - these constraints are "gauge" and eliminate unphysical degrees of freedom
    - the canonical commutation relations remain unchanged

    Now we have to talk about second-class constraints. Here the problem is that you may have something like

    [C_m, C_n] = a_mn^k D_k

    D is something new. It is not due to the symplectic structure and the time evolution, but mostly due to "wrong coordinates" on phase space. In many cases you must not implement them at all, not even on physical states, because D may be a c-number!!!
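
    A minimal toy illustration of that last point (my example, not one from Alexandrov's paper): take a single pair (q, p) subject to the two constraints

    \phi_1 = q \approx 0 , \qquad \phi_2 = p \approx 0 , \qquad [\phi_1, \phi_2] = [q, p] = 1 .

    The bracket is a c-number and can never vanish weakly, so the pair is second class; imposing \phi_i |phys> = 0 on states would force 1 \cdot |phys> = 0, i.e. no physical states at all.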

    What has happened?

    You have chosen the wrong variables to be quantized - or you have used the wrong commutation relations. The procedure a la Dirac is to use different variables (which you may identify if you are lucky :-) or to use so-called Dirac brackets, which is explained in Alexandrov's paper for his toy model. This allows you to use the same variables as before, but you have to change the commutation relations in such a way that they generate a (weakly) closed algebra again. There is a unique way to construct the Dirac brackets from the Poisson brackets, which has been described by Dirac.
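
    For reference, Dirac's construction reads (the standard formula): given second-class constraints \phi_a with invertible matrix C_{ab} = [\phi_a, \phi_b], define

    [A, B]_D = [A, B] - [A, \phi_a] \, (C^{-1})^{ab} \, [\phi_b, B] .

    On the toy example above, C_{12} = -C_{21} = 1 and one finds [q, p]_D = 0, so both constraints can consistently be imposed strongly.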

    Lessons learned?

    For first-class constraints there is some freedom in how to implement them: "quantize first - solve later" a la LQG (the old approach), or "solve first - quantize later" a la old-fashioned QED with explicit solution of the Gauss constraint, or "work with unphysical degrees of freedom and let them be killed by ghosts" a la Faddeev-Popov / BRST in perturbative QCD.

    For second-class constraints my feeling is that they essentially mean that something went wrong! You can cure that by using Dirac brackets, but I prefer to identify the correct degrees of freedom and use the Poisson brackets instead.

    Now in LQG the problem is that all constraints are treated differently:
    - Gauss law is solved as constraint on physical states
    - Diff. inv. is implemented later
    - for H refer to Nicolai's paper; I doubt that the whole story is well defined

    Now it seems that SFs have a similar problem, namely that you have new constraints and (again) treat them differently. You introduce BF theory + simplicity constraints on top of the "old-fashioned" LQG constraints.

    My conclusion is that - if Alexandrov is right - SFs do not fix the quantization problems we know from old-fashioned LQG, but avoid talking about them, postpone them ...

    I agree with Rovelli that there is no quantization procedure known which is "axiomatic"; it is not possible to derive a quantum theory uniquely from a classical theory (because the quantum theory is "more" than the classical theory). And I agree with you that quantizing a TQFT is again something different because you reduce the degrees of freedom to a finite set. But "new LQG" is not a TQFT because it is not BF but BF + constraints. It is interesting that these additional constraints on top of BF "constrain" the theory in a way that "kills the topological structure" and results in GR with more degrees of freedom than you would naively expect from BF / TQFT. In a sense you do not constrain the degrees of freedom but the constraints :-)
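
    In formulas, this is the Plebanski formulation that underlies the new models (standard in the SF literature; conventions vary):

    S[B, A, \varphi] = \int B^{IJ} \wedge F_{IJ}(A) + \varphi_{IJKL} \, B^{IJ} \wedge B^{KL} ,

    where the Lagrange multiplier \varphi (with suitable symmetry and trace conditions) enforces the simplicity constraints that turn topological BF theory into GR.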

    But: once you choose to work with a constraint algebra and you end up with second-class constraints there is no way out; you have to respect what the maths gives you. There are no excuses. My problem is that I can hardly follow the new BF approach, and it may very well be that I overlook some basic facts that are rather clear to the insiders. What bothers me is that - as you explain - Alexandrov is an insider and - nevertheless - identifies such a basic issue.
     
  9. Apr 27, 2010 #8
    It is my understanding that the "new LQG" is not in any way derived from classical considerations. It is simply a quantum theory which is mathematically well-defined and has dynamics which approximate GR in the semi-classical limit. It is obviously free from UV divergences because it is not a field theory (only countably many degrees of freedom). I'd think that if they can couple matter to it and recover the standard model in the weak-gravity limit, then we'd pretty much be done (in the sense that we have *a* theory; now let's go test it with experiments).

    Caveat: no phase transitions lurking.
     
  10. Apr 27, 2010 #9

    tom.stoer

    User Avatar
    Science Advisor

    You are right; it's not strictly a derivation, but at least it takes the derivation and the results from "old LQG" into account.

    Nevertheless: if Alexandrov is right and this is a system with second-class constraints, then not taking them into account correctly is simply wrong; this does not depend in any way on any other related approach. 2 + 2 = 5 is wrong; you do not have to develop a new formalism (or compare with an old formalism) to show that it's 4.

    But perhaps Alexandrov is not right and all these classifications regarding first- and second-class constraints do not apply in LQG. Then there have to be some arguments showing what is really different.

    btw.: the same discussion applies to the Hamiltonian constraint in "old LQG" which generates an algebra that only closes on-shell; and it applies to SUGRA which relies on on-shell closure of local SUSY.
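
    To spell out what "closes only on-shell" is about: classically the offending bracket is the well-known hypersurface-deformation relation

    [H(N), H(M)] = D\big( q^{ab} (N \partial_b M - M \partial_b N) \big) ,

    whose right-hand side involves the inverse 3-metric q^{ab} - structure functions instead of structure constants - and this is the root of the closure problems.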

    It is not clear to me if Rovelli's intention to start from some quantum theory w/o a strict derivation is viable. I agree that this approach is sufficient in order to understand the whole class of SF models, but in order to select one (and only one) one should insist on mathematical rigour. So I would say that these questions will come back sooner or later.

    (nobody cared about Gribov ambiguities in QCD as long as they were only interested in DIS; but as soon as they wanted to understand confinement they had to solve these issues)
     
  11. Apr 27, 2010 #10
    Actually, thinking about this a bit more, I suspect that's why Rovelli does not give a Hamiltonian formulation, and in fact lists that as one of the priority topics for future work. I think you are completely correct that something is wrong about the treatment of the Hamiltonian constraint, but it might be that the answer is that the problem lies with Hamiltonian mechanics in a covariant setting, rather than with the present theory. After all, there's no fundamental reason to prefer a canonical treatment, apart from the fact that it has worked pretty well so far.

    So in summary: we have a quantum theory which is well defined in the UV, with dynamics --- we can calculate transition amplitudes --- and which approximates GR in the (a?) semi-classical limit. Clearly, there is a lot of work to be done on the question of what exactly this theory is and what exactly its low-energy (whatever that might mean) dynamics are --- the paper points out some good directions that people have looked at in terms of perturbative expansions. I am of the opinion that the failure to understand it as a standard field theory is a good thing, not bad; after all, if it were possible to do it as a field theory, we'd probably have found it a lot quicker. I hope that one day we'll have a better understanding of various quantum theories, such that it becomes obvious why any field-theoretic approach to an almost-topological-but-really-not-quite theory was doomed.
     
  12. Apr 27, 2010 #11

    tom.stoer

    User Avatar
    Science Advisor

    I don't think so; the canonical approach is more involved if one wants to prove covariance (regardless of which theory you are talking about), but it works - iff it is applied correctly.

    I agree; that's remarkable.

    It is already far away from any other "standard field theory".

    One of the biggest problems is that we cannot test it. The only accessible regime is the semiclassical one, where different approaches seem to converge. It is like developing QCD with access restricted to the energy range of atomic spectra.
     
  13. Apr 27, 2010 #12
    What I mean is that true diffeomorphism invariance is going to do serious violence to time variables. It is not obvious that a pure canonical approach is compatible --- which is why Rovelli uses a more direct definition of transition amplitudes, without defining a Hamiltonian, i.e. a generator of time translation.

    As far as accessibility goes, I agree completely. On the other hand, we have only *one* theory currently that works. Of course, now that we do, it probably isn't hard to come up with irrelevant perturbations which make no difference in the semi-classical limit. On the other hand, it seems like that would be special pleading.

    Frankly, I'm excited. If they (since I'm a complete outsider on this) can couple matter in such a way as to get the standard model out at low energies, I'd probably die a happy man. :wink:
     
  14. Apr 28, 2010 #13

    tom.stoer

    User Avatar
    Science Advisor

    Using the canonical approach one has to assume a spacetime foliation like M_4 = R x M_3, where R represents the real line. You can then use the canonical approach, provided diff.-inv. is implemented w/o anomaly. There is no need to identify R with physical time. It's nothing else but a direction perpendicular to something ...
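
    For concreteness, the standard ADM parametrization adapted to such a foliation reads (textbook formula, not specific to LQG)

    ds^2 = -N^2 dt^2 + h_{ij} (dx^i + N^i dt)(dx^j + N^j dt) ,

    where the lapse N and shift N^i merely describe how the leaves M_3 are glued along the R direction; nothing forces one to interpret that direction as physical time.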

    As far as I can see there is no restriction on coupling matter. So if they can put the standard model in, the result will be the standard model coupled to quantum gravity. I do not see how restrictions regarding the matter content / gauge symmetry could emerge from LQG/SF.

    What I would like to see is something like "Sundance Bilson-Thompson ribbons" or "SF preons". But I am afraid this is wishful thinking.
     
  15. Apr 28, 2010 #14
    But that's precisely the point. It seems that there are real obstacles to completing that quantization program rigorously. On the other hand, we clearly have a well-defined theory, one which might not fit into the canonical framework.

    And insofar as matter goes, they can't *just* put a field theory on top --- there is no spacetime on which to do QFT. It might transpire that there's a trivial (in the trivial-bundle sense) way of just slapping gauge degrees of freedom on the network a la lattice QCD, but fermions might still be a bit of a problem. There's then the problem that interactions might cause some phase transition.
     
  16. Apr 28, 2010 #15

    tom.stoer

    User Avatar
    Science Advisor

    I would say that it can be done via the canonical formalism; the only question is how. I see no fundamental reason why the canonical formalism should not work.
    In addition I am convinced that it must be done via the canonical formalism, simply because (strictly speaking) the path integral is derived from the Hamiltonian, not the other way round. So either it works in LQG as well, in which case one should do it in order to secure the results from SF; or it does not work, in which case it's very important to understand why. This last step is missing. Currently it looks like "they are not smart enough to do it" - and this cannot be the end of the story.
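
    To make that statement concrete, recall the standard phase-space path integral (textbook formula):

    \langle q_f | e^{-iHT} | q_i \rangle = \int \mathcal{D}q \, \mathcal{D}p \, \exp\Big( i \int_0^T dt \, \big( p \dot q - H(q, p) \big) \Big) ,

    and the familiar Lagrangian form only appears after integrating out p, which is a Gaussian step for Hamiltonians quadratic in the momenta.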

    Look at QCD: they told us for decades that QCD cannot be quantized in the canonical formalism and one has to use the PI. They managed to quantize QCD via PI - but only because they neglected non-perturbative issues, large gauge transformations, Gribov ambiguities etc. In the late eighties and early nineties I was active in a group focusing on non-perturbative quantization of QCD in the canonical framework, and it worked! But - and this is essential - one had to solve exactly the issues that had been neglected in the PI formalism.

    Now I understand what you mean. You are addressing the problem that quantizing a field theory in the usual sense requires a spacetime manifold in order to define the field operators, propagators etc. If this is missing, and if one expects back-reaction from the fields on the dynamical spacetime manifold (or whatever its replacement will be), then the standard field theory approach is in trouble. In addition I do not believe that SF + fields is similar to lattice (gauge) theory. They are both discrete, but on the lattice there is a well-defined continuum limit, whereas in SF there is not even a lattice spacing. A link has a length defined by the spectrum of an SU(2) operator, and there is no continuum limit. I agree that fermions may cause additional problems.
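
    For example, the well-known LQG area contribution of a single link carrying spin j is

    A_j = 8 \pi \gamma \, \ell_P^2 \, \sqrt{j(j+1)}

    (with Immirzi parameter \gamma): the geometry is quantized directly, with a smallest non-zero eigenvalue, rather than living on a lattice whose spacing is sent to zero.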

    Again the canonical approach (now abandoned) seemed to be a framework to derive LQG + additional fields in one context. The canonical approach required some "discretization" = identification of "chunks of space" represented by diff.-inv. quantities (cylinder functions => spin networks). If you abandon this approach of "deriving" LQG you are no longer able to derive the matter coupling in parallel. You have to put matter on top of the SF w/o any hint where you came from. So again this is a strong hint that the canonical formalism may provide additional insights.

    But I should come back to my main point: if there are second-class constraints, then they have to be taken into account properly, regardless which formalism you are using. I hope there will be some response to Alexandrov's latest paper on arxiv quite soon.
     
  17. Apr 28, 2010 #16
    That's why I can't see any potential in SF for true (non-ad hoc) unification of gravity with matter. That's another good reason to do string theory, which requires extra matter to be present for consistency.
     
  18. Apr 28, 2010 #17

    tom.stoer

    User Avatar
    Science Advisor

    I agree with this conclusion. LQG/SF as of today will definitely not provide any hint towards a true unification. "Sundance Bilson-Thompson ribbons" and "SF preons" are wishful thinking, I am afraid.

    I think you are not surprised that here I don't agree :-)
     
  19. Apr 28, 2010 #18
    So what do you think is most promising -- noncommutative geometry? :)
     
  20. Apr 28, 2010 #19

    tom.stoer

    User Avatar
    Science Advisor

    Believe it or not: LQG! Learn how to quantize gravity consistently, and then check how it can be applied to other theories heading for unification.
     
  21. Apr 28, 2010 #20
    I've wondered whether it's time to "dump" historical LQG, based on Ashtekar's variables, and instead construct a Hamiltonian constraint that gives GR in the semiclassical limit, along with a physical inner product and off-shell closure, with spin networks as its kinematics, which may or may not match the LQG area and volume operators. Ask what kind of Hamiltonian constraint would give rise to GR in the low-energy regime, and then work from there. I call this a top-down approach: start with a Hamiltonian that reproduces GR and work towards its Planck-scale properties, as opposed to Ashtekar's bottom-up approach: start with GR, rewrite the variables, quantize, and then recover GR.

    What do you mean by unification? A larger symmetry group a la SU(5) / SO(10) GUT at the 10^15 GeV scale that is then spontaneously broken to the SM gauge group? If so, why not just go with string theory?
     