It seems to me that the UV divergences in a quantum theory of gravity could be canceled just by requiring that the distance between any two points be finite. I know loop quantum gravity basically addresses this through the quantization of area and volume, but you don't even need loop quantum gravity to do this. Not to mention, I hear there are a lot of problems when you try to create a perturbative theory of LQG. If you postulate that the distance between two points is finite, I think you can create a quantum theory of gravity devoid of UV divergences. What are the problems with doing this?
I should say the only problem I see is that such a postulate is ad hoc, given the lack of experimental evidence supporting the quantization of spacetime.
ALL postulates are ad hoc when it comes to quantum gravity. The goal is to minimize the number of postulates.

Lorentz invariance? You have a hard cutoff in your phase space, and then you do a Lorentz boost. When you write in a hard momentum cutoff, you lose Lorentz invariance. But I guess it's a matter of what you think is important in a theory of everything. Personally, I think that Lorentz invariance is pretty fundamental; others would disagree. For example, one of the quantum gravity papers I've seen talks about a running number of dimensions---I take this to mean that the number of dimensions of space-time is a function of energy, just like the coupling constants. This should violate Lorentz invariance when the dimension isn't a whole number, I think.

As to the quantization of the area and volume operators in LQG, no one has yet explained to me WHY this doesn't violate Lorentz invariance. marcus will no doubt be along shortly to cut and paste the abstract of a paper whose arguments I don't understand. (I've already seen the paper.) Either way, string theory skirts this issue by having a minimum string length: probing lengths smaller than the Planck scale is impossible because as you add more energy, you just excite more oscillator modes.

The OTHER argument which I have heard (but don't fully understand) is that lattice QED (not QCD) doesn't work so well. QED, like gravity, is defined in the IR at weak coupling. If you try to define QED at strong coupling by putting it on a lattice, you get terrible results when you take the continuum limit---it is not well defined and you can't recover anything that looks like QED in the IR. Again, I don't fully understand this argument---it was told to me by Nima Arkani-Hamed, and I have probably hopelessly mangled it.
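The cutoff-versus-boost point is easy to check by hand. Here is a minimal sketch (my own illustration, not from the thread): put a particle's spatial momentum exactly at a hard cutoff Lambda, boost it, and the boosted momentum lands outside the cutoff, so the cutoff surface is not boost-invariant.

```python
import math

LAMBDA = 1.0   # hard momentum cutoff (arbitrary units, assumed for illustration)
m = 0.1        # particle mass

px = LAMBDA                      # spatial momentum right at the cutoff
E = math.sqrt(px**2 + m**2)      # on-shell energy

# Boost along x with rapidity 0.5: p'_x = px*cosh(y) + E*sinh(y)
rapidity = 0.5
px_boosted = px * math.cosh(rapidity) + E * math.sinh(rapidity)

print(px_boosted > LAMBDA)       # the boosted state now violates the cutoff
```

Any nonzero boost along the momentum direction pushes a state at the cutoff edge past it, which is the sense in which a hard momentum cutoff breaks Lorentz invariance.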
"QED, like gravity, is defined in the IR at weak coupling. If you try to define QED at strong coupling by putting it on a lattice, you get terrible results when you take the continuum limit"

Yea, there is evidence for a chiral phase transition and very heavy monopole condensation. This wreaks all sorts of havoc on the numerical integration and is what causes the technical problems (along with the usual lattice problems people try to avoid, like fermion doubling). There is also the question of whether it even makes sense in principle (due to the Landau ghost). Jacques Distler has a nice blurb about this (and lattice gravity) here: http://golem.ph.utexas.edu/~distler/blog/archives/000713.html
Using my axiomatization in gr-qc/0205035, it is possible to derive the EEP in a theory with a preferred frame.

Sounds like the Landau pole. It does not seem to be important, because the lattice spacing where this happens is much smaller than the Planck length. It should be noted that we have to distinguish two concepts. First, we introduce a lattice as a mathematical method to handle infinities, with the intention of taking the limit h -> 0, in the belief that the true theory is continuous. In this situation, something like what happens in QED is dangerous. In the second case, we think there really is a finite distance where everything is different, and the lattice theory is considered to be a (possibly simplified) model of the real world. In this scenario, the limit h -> 0 is not necessary.

But that does not mean that we do not have to consider renormalization. It is necessary to find out the connection between the parameters of the lattice model and the observable constants, and problems may appear; in particular, we may be forced to postulate some conspiracy among the parameters of the lattice model to explain why some of the observable constants are very small in comparison with others. Typical examples of such problems are the smallness of the cosmological constant, of the neutrino masses, of the CP violation, or of the masses of the W and Z in comparison with the other gauge bosons of GUTs.
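The claim that the QED problem scale lies far beyond the Planck scale can be checked with a back-of-the-envelope estimate (my own, using the standard one-loop running; the numbers are textbook values, not from the post). At one loop the QED coupling runs as alpha(mu) = alpha / (1 - (2*alpha/(3*pi)) * ln(mu/m_e)), which blows up (the Landau pole) at mu = m_e * exp(3*pi/(2*alpha)):

```python
import math

alpha = 1 / 137.036      # low-energy fine-structure constant
m_e = 0.511e-3           # electron mass in GeV
E_planck = 1.22e19       # Planck energy in GeV

# One-loop Landau pole: the scale where 1 - (2*alpha/(3*pi))*ln(mu/m_e) vanishes
mu_landau = m_e * math.exp(3 * math.pi / (2 * alpha))

print(mu_landau > E_planck)   # the pole sits enormously far above the Planck scale
```

The exponent is about 646, so the pole scale is of order 10^277 GeV, versus about 10^19 GeV for the Planck energy. Equivalently, the lattice spacing at which the QED trouble appears (~1/mu_landau) is vastly smaller than the Planck length, which is the point being made above.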
This is true for QED. It is not true for gravity---the lattice spacing where problems occur is at the Planck length, where you would actually hope for NO problems to occur.

Then Lorentz invariance isn't fundamental. Again, this is a matter of taste---you can violate Lorentz invariance at the Planck scale; the experiments allow for that. I feel that Lorentz invariance is a fundamental concept, and would have to see some other (lots of other) successes in a theory to accept that Lorentz invariance is emergent and not fundamental.

Absent some mechanism to naturally keep some parameters small, this is a generic feature of a low-energy effective field theory. We have satisfactory symmetries in all of these cases except the first: seesaw, Peccei-Quinn, SUSY... Are you saying that you have to cancel these things by hand in your approximations?
In lattice simulations one is usually forced to tune the physical parameters against the details of the lattice spacing/model. This allows for tremendous (nonphysical) freedom when model building, which is why people don't take it too seriously unless you can show some sort of progress or control over the continuum limit (e.g. a check that all the extra junk drops out and the residual symmetries are sufficient to give the desired result). In the case of gravity, if you insist on not taking the continuum limit and instead keep the lattice as fundamental, the usual problem (other than LI breaking) is that you end up with crumpled, nonphysical spacetimes. Only in recent years have people managed to get something that looks flat numerically for pure gravity. The problem, of course, is that the solution is unstable to the inclusion of matter. Absent knowing everything all the way down to the Planck scale, you are almost guaranteed that your man-made choices for the details of the lattice are going to return nonsense upon tweaking the matter content slightly. Ergo the need for more fundamental principles from somewhere else.
I prefer to hope for solvable problems. Especially the classical problems of lattice gauge theory have helped me a lot. With species doubling, my model needs 8 times fewer fields on each lattice node. And the regularization problem of chiral gauge theory has helped me to identify the chiral gauge fields on my lattice. (But I don't claim to have solved these problems in their usual understanding.)

For me, the conceptual problems with the unification of relativity and QT are too large. Not even the wave function is relativistic. If Lorentz symmetry is fundamental, you have to give up realism (in the EPR sense). On the other hand, we have nice hidden variable theories for QM (Bohm, Nelson)---with preferred frames. And then there is my approach (ilja-schmelzer.de/ether/ether.pdf). What are you waiting for? I have a simple concept which allows one to derive relativistic symmetry, even in the case of general relativity (the EEP). The main objection against theories with a preferred frame is rejected. I see no reason to consider Lorentz invariance as fundamental, given all the related problems.

Correct. I have tried to answer also the original question, "Why not put gravity on a lattice and be done with it?"

No. I have not done renormalization in my approach yet. I have done only the kinematics, and without symmetry breaking. A Hamiltonian I have only for pure fermions, without gauge fields. Since in my approach there is no room for gauge fields other than the observed ones (except a few diagonal fields which do not lead to particle decays), I don't have to explain a large difference between the masses of observed and yet-unobserved gauge fields. The special role of the right-handed neutrinos (their association with the direction of translation) promises to give some explanation for the small mass of the neutrinos. For strong CP I have the following idea: I will have some symmetry breaking connected with a background lattice.
As far as it is regular, it distinguishes directions in space, but does not violate CP (which is geometric P in my approach). Now, the background lattice may be distorted too; that's natural in a region of transition between two vacuum states. This can lead to a small violation of CP. Of course, that's speculation, not backed up with any math yet. I also obtain some new problems: a few diagonal gauge fields are allowed. Even if they do not give additional particle decays, their interaction constants should be sufficiently low, and I hope that renormalization gives this.
Ok, this makes sense to me. See if I got this right. I'm reading this book on renormalization theory, and it suggests that using a lattice cuts off the high-momentum terms, which I believe it does, since configuration space and momentum space are related by a Fourier transform. If you insist that space is discrete, you do not have the problem of the continuum limit. Well, let me rephrase that last sentence. The problem with the continuum limit is that we are using models, and which model you pick is rather contrived. Depending on which model you pick, taking the continuum limit may result in anomalies due to the model, but, in my opinion, this is because as of yet we do not know a fundamental way of picking the right lattice structure.

To paraphrase Weinberg: nonrenormalizable theories provide useful expansions in powers of energy, but they inevitably lose all predictive power at energies of the order of the common mass scale M that characterizes the various couplings. At this scale, he insists, one of two things can happen: something like asymptotic safety, or new physics. A lattice theory at the Planck scale would definitely be new physics, but in light of M. Reuter's paper, which I think is correct, there is something I don't understand. How does an RG fixed point imply asymptotic safety, and if asymptotic safety does exist, does it imply that there is no need for new physics?
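The "lattice cuts off high momenta" statement can be made concrete with a quick numerical sketch (my own illustration, not from the book being discussed): on a lattice with spacing a, the distinct momenta live in the first Brillouin zone, so |k| never exceeds pi/a. The lattice is a built-in UV cutoff.

```python
import numpy as np

a = 0.1        # lattice spacing (assumed, arbitrary units)
N = 64         # number of lattice sites

# Momenta resolved by a discrete Fourier transform on this lattice
k = 2 * np.pi * np.fft.fftfreq(N, d=a)

k_max = np.max(np.abs(k))
print(k_max, np.pi / a)   # the largest momentum is the zone edge pi/a
```

Making the lattice finer (smaller a) pushes the cutoff pi/a up; the continuum limit a -> 0 removes it, which is exactly where the problems discussed above reappear.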
To answer the second part of your question: it CAN mean that there is no need for new physics, but asymptotic safety is also compatible with there being new physics at the Planck scale. This is discussed clearly and at some length in a new paper by Roberto Percacci called Asymptotic Safety. He talks about this very thing at the end, in a Q/A section prepared by the editor of the book where the article will be published. He can see A.S. being compatible EITHER with some kind of discreteness at small scale or with smoothness. He also talks about ways the A.S. programme can fail. It is a darn good paper IMHO. You may have the link already, but I will give it in case you don't or someone else wants it. I think Reuter's 2007 papers are also worth reading to help get up to date on this, but I will just give Percacci links here:

http://arxiv.org/abs/0709.3851
Asymptotic Safety
R. Percacci
To appear in "Approaches to Quantum Gravity: Towards a New Understanding of Space, Time and Matter", ed. D. Oriti, Cambridge University Press
(Submitted on 24 Sep 2007)
"Asymptotic safety is a set of conditions, based on the existence of a nontrivial fixed point for the renormalization group flow, which would make a quantum field theory consistent up to arbitrarily high energies. After introducing the basic ideas of this approach, I review the present evidence in favor of an asymptotically safe quantum field theory of gravity."

Can't resist tacking on this beautiful one, testing the existence of the fixed point using a 6th-degree polynomial in the Ricci curvature. It is one of a series of papers where they try to make the fixed point fail, and it doesn't. The critical hypersurface where the fixed point is an attractor seems to have dimension 3. That means you specify 3 experimentally determined parameters and the theory is predictive from there onwards. Great if that turns out to be the case.
Anyway, here is that other Percacci one:

http://arxiv.org/abs/0705.1769
Ultraviolet properties of f(R)-Gravity
Alessandro Codello, Roberto Percacci, Christoph Rahmede
4 pages (Submitted on 12 May 2007)
"We discuss the existence and properties of a nontrivial fixed point in f(R)-gravity, where f is a polynomial of order up to six. Within this seven-parameter class of theories, the fixed point has three ultraviolet-attractive and four ultraviolet-repulsive directions; this brings further support to the hypothesis that gravity is nonperturbatively renormalizable."

Glad you asked about this, Jim Kata. Good luck finding out about it!
Jim, now about the first part of your question. That will also be answered in Percacci's paper. He gives clear conditions for A.S. and says what it means, and why it makes the theory predictive. The idea is that you are not using a perturbation series, but you still only have a finite number of parameters to determine experimentally. Rather than paraphrase good source material, I think I will just let you read at the bottom of page 5:

"The existence of such a FP is the first requirement for asymptotic safety..."

and then the middle paragraph of page 7:

"We can now state the second requirement for asymptotic safety... ... ... Thus, a theory with a FP and a finite dimensional UV critical surface has a controllable UV behavior, and is predictive."

It is important to understand the critical surface, or more exactly the critical hypersurface. This is where the RG flow carries you in towards the FP. Once your action functional is on that hypersurface it is smooth sailing; you are safe. You home in on the FP and nothing blows up. In general the space of action functionals is infinite-dimensional! So this second requirement is important, because it says that getting some finite number of parameters right (like three) puts you on the good hypersurface. Then things run by the flow (the beta functions) and there is a correct action for every momentum scale k. As k -> infty the action runs to the FP. So you are in complete control and can use it to predict.

Those pages 5-7 in Percacci should be adequate, but suppose someone wants a more INTUITIVE presentation, with more pictures. Then I would suggest watching the slides and listening to the audio of Martin Reuter's talk from June 2007 at Loops 07. It is a great talk and gives a lot of intuition, so in some sense it is quicker than reading Percacci, just to get the motivation and the basic ideas. I will get the Loops 07 link too.
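The mechanism described above can be seen in a toy model (my own illustration, not Reuter's actual Einstein-Hilbert truncation): take a beta function beta(g) = 2*g - w*g**2, where the 2*g term is the classical (dimensional) running and the -w*g**2 term is a stand-in for quantum corrections. Instead of blowing up, the coupling flows to the nontrivial fixed point g* = 2/w as k -> infinity.

```python
w = 3.0            # strength of the "quantum" term (assumed toy value)
g_star = 2.0 / w   # nontrivial fixed point where beta(g*) = 0

g = 0.01           # start at weak coupling in the IR
dt = 0.001         # step in t = ln(k)

# Euler-integrate dg/dt = 2*g - w*g**2 toward the UV (t increasing)
for _ in range(20000):
    g += dt * (2 * g - w * g * g)

print(abs(g - g_star) < 1e-6)   # the coupling has settled onto g*
```

The fixed point is UV-attractive here (beta'(g*) = -2 < 0), so the flow is under control at arbitrarily high k; in the real theory the analogue of "choosing the starting g" is fixing the finite number of parameters that put you on the critical hypersurface.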
For all the Loops 07 talks: http://www.matmor.unam.mx/eventos/loops07/plen_abs.html
Reuter slides: http://www.matmor.unam.mx/eventos/loops07/talks/PL3/Reuter.pdf
Reuter audio: http://www.matmor.unam.mx/eventos/loops07/talks/PL3/Reuter.mp3
Great stuff! Glad you asked.