tom.stoer said:
But: once you choose to work with a constraint algebra and you end up with second class constraints, there is no way out; you have to respect what the maths gives you. There are no excuses. My problem is that I can hardly follow the new BF approach, and it may very well be that I overlook some basic facts that are rather clear to the insiders. What bothers me is that - as you explain - Alexandrov is an insider and - nevertheless - identifies such a basic issue.
You missed my point. I know all this. First of all, look at the EPRL papers. The second class constraints are NOT implemented as C|phys> = 0 but as <phys|C|phys> = 0. This is another of the standard techniques for dealing with them. Or rather, in the papers from a few years back several options were explored; they turn out to do essentially the above.
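Schematically - and this is just the textbook illustration of weak vs. strong imposition, not the actual EPRL simplicity constraints - the distinction is

$$\hat{C}_a \,|\mathrm{phys}\rangle = 0 \quad \text{(strong)} \qquad \text{vs.} \qquad \langle \mathrm{phys}'|\, \hat{C}_a \,|\mathrm{phys}\rangle = 0 \quad \text{(weak)} .$$

For second class constraints the strong version is inconsistent: if $\hat{C}_1|\psi\rangle = \hat{C}_2|\psi\rangle = 0$ but $[\hat{C}_1,\hat{C}_2]$ has a non-vanishing c-number part, acting with the commutator on $|\psi\rangle$ gives a contradiction. Imposing only matrix elements (or expectation values) avoids that.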
It can be argued that in the BC model they were imposed too strongly. This misimplementation of the constraints was actually a starting point for Rovelli to investigate EPRL.
This is not Alexandrov's objection, which is related to the symplectic structure involved. Otherwise he couldn't argue that FK without \gamma is not affected; as far as implementing the constraints goes, it's doing something extremely similar to EPRL.
Now again, while we can take hints from standard quantisation techniques, spin foam models are NOT one of them. They do NOT follow the Dirac approach or the Faddeev-Popov approach or any other approach. At most they are a sort of lattice path-integral quantisation. So the Dirac algorithm is not really the appropriate tool at all (which is why the BC model never bothered with it). As far as I understand, you might be worried that you obtain the wrong measure in the path integral, but then I again ask: wrong by what standard? It's certainly not wrong by the 2+2=5 standard, that is, inconsistent. You haven't given any consistency requirements, after all. Thus your call for mathematical rigour is empty.*
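For reference - and this is just the standard canonical textbook story, not a claim about what spin foams do - the Dirac treatment of second class constraints $C_a$ trades the Poisson bracket for the Dirac bracket,

$$\{A, B\}_D = \{A, B\} - \{A, C_a\}\,(M^{-1})^{ab}\,\{C_b, B\}, \qquad M_{ab} = \{C_a, C_b\},$$

and the corresponding phase-space path integral measure carries a factor $\prod_a \delta(C_a)\,\sqrt{\det M}$ (the Senjanovic measure). That is the standard against which a measure could be called "wrong" - and it is precisely the standard spin foams are not claiming to meet.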
To me the "quantisation" methods used to arrive at the EPRL model are unconvincing anyway. It is, however, an extremely natural model from the point of view of representation theory and TQFT in the state sum approach. That's why it's interesting for me to work on. The BF+constraints thing is motivational. If it doesn't make a good motivation for you, ignore it.
Now if you are married to Dirac-style quantisation and the canonical formalism, by all means knock yourself out. That's old school LQG. Thomas Thiemann is working on this, as are Kristina Giesel, Hanno Sahlmann and a good group of others. The problem there is how to interpret the theory. (And believe me, I worked on this; it's hard. Deeply, fundamentally hard. Basically due to the introduction of the 3+1 split.) But if you can solve that, they have a mathematically wonderfully rigorous framework to sell you.
But that's NOT what spin foams are, even if there is some overlap with LQG techniques. Take the most successful spin foam/state sum model to date: the Turaev-Viro model. Its relation to a path integral is extremely obscure. In fact there are many state sum TQFTs for which neither a Lagrangian nor a Hamiltonian formulation is known. Furthermore, these theories naturally live on manifolds that don't have \sigma \times R topology.
As an aside: from the spin foam perspective this all doesn't matter. The belief that Dirac quantisation is somehow the only true way is suspect to me anyway. I apply the mathematician's criterion: is the resulting theory interesting and free of contradictions? Dirac quantisation will not solve your conceptual problems for you; in fact it will make them harder. And the most crucial conceptual question of all is the one about the phase transition. Until we know how to ask and answer this question, we don't know what classical theory we are looking at.
*As a matter of fact, even in the cases where "rigorous" quantisation is well defined it is never unique (cf. Haag's theorem, for example, and many other results on quantisation ambiguities as well).