friend said:
...Or is it the case that the underlying spacetime is continuous, but any observable that involves the metric is quantized?
You hit the nail on the head! That is how LQG is constructed. It is based on a continuous manifold without any metric specified. So it is initially limp, shapeless, without geometry. Then, instead of metrics, one defines quantum states of geometry, a Hilbert space of them. Observables are operators on that Hilbert space, and some of the geometric observables turn out to have discrete spectrum.
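To make that concrete, here is the standard LQG result for the area operator, written schematically:

A = 8\pi\gamma \ell_P^2 \sum_i \sqrt{j_i(j_i+1)}

where the j_i are half-integer spins labelling the spin-network edges that puncture the surface being measured, \gamma is the Barbero-Immirzi parameter and \ell_P is the Planck length. The spectrum of possible area readings is discrete, even though the manifold underneath is a perfectly ordinary continuum.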
===================
But keep in mind that it is not essential in quantization to divide something up into little bits.
The leading approaches to quantum gravity are LQG and CDT (Loll's causal dynamical triangulations approach) and neither of them divides space up into little bits. In LQG some discreteness comes in at the level of measurement, so you can say that there is a minimal nonzero length that you can get as a read-out from a measurement--or at least a minimal nonzero area. This means that space as an observer measures it appears to have a certain graininess.
But in CDT, the other approach we hear most about, there is not even that kind of apparent graininess! The mathematical construction is based on a limp continuum, like for example topologically it could be the 3-sphere cross the real line: S^3 x R. The continuum is without geometry, it is formless, because no metric distance function is defined on it. Then one specifies a quantum rule by which it can triangulate itself in millions of different ways.
This is analogous to how a particle can get from point A to point B in millions of different ways in a Feynman path integral.
Each path will take the universe from an initial to a final state and it will have an amplitude. A path is a spacetime geometry. The amplitude-weighted average can be taken.
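Written out schematically (paraphrasing the form used in the Ambjorn-Jurkiewicz-Loll papers), the sum over geometries is

Z = \sum_T \frac{1}{C_T} e^{i S_{Regge}(T)}

where T runs over the causal triangulations, C_T is the order of the symmetry group of T, and S_{Regge}(T) is the Regge (piecewise-flat) form of the Einstein action. For the computer work one Wick-rotates so that the weight becomes the real number e^{-S_E(T)} and the sum can be done by Monte Carlo.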
So they can produce millions of sample universes in the computer and study each one's properties (dimensionality, how radii and volumes are related, correlations over time etc) and they can also sum up analogous to a path integral.
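Just to give a feel for the computational strategy, here is a toy sketch. This is not CDT itself, only a one-dimensional stand-in: a Metropolis sampler that generates configurations with weight e^{-S} and measures an observable on each sample, which is the same general Monte Carlo idea applied in the triangulation simulations.

# Toy Metropolis sampler: Euclidean path integral for a 1D harmonic
# oscillator on a periodic time lattice. NOT CDT -- it only illustrates
# the strategy: sample configurations with weight exp(-S), then measure
# observables on the samples.
import random, math

N = 64           # number of lattice time slices
a = 0.5          # lattice spacing (the "triangle size" analogue)
n_sweeps = 2000  # Monte Carlo sweeps
x = [0.0] * N    # initial configuration

def delta_S(x, i, new):
    """Change in the Euclidean action when site i is moved to `new`."""
    old = x[i]
    left, right = x[(i - 1) % N], x[(i + 1) % N]
    kin_new = ((new - left)**2 + (right - new)**2) / (2 * a)
    kin_old = ((old - left)**2 + (right - old)**2) / (2 * a)
    pot_new = a * 0.5 * new**2
    pot_old = a * 0.5 * old**2
    return (kin_new + pot_new) - (kin_old + pot_old)

samples = []
for sweep in range(n_sweeps):
    for i in range(N):
        trial = x[i] + random.uniform(-1.0, 1.0)
        dS = delta_S(x, i, trial)
        if dS < 0 or random.random() < math.exp(-dS):
            x[i] = trial               # accept the move
    # measure an observable on this sample configuration
    samples.append(sum(xi * xi for xi in x) / N)

print("<x^2> =", sum(samples[500:]) / len(samples[500:]))  # discard burn-in

In the real CDT simulations the "configuration" is a whole causal triangulation and the update moves add and remove simplices, but the accept/reject logic is broadly the same.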
Now the CDT theory says: let the size of the triangles go to zero. So you see there is finally no discreteness! There is no minimal length.
And the entire construction is still based on a topological continuum----like the three-sphere cross R that I mentioned earlier.
The triangulations are simply a regularization which allows the quantum path integral to be computed. The method is discrete only in the same sense that the Feynman path integral is discrete: at a certain stage in the calculation it uses polygonal paths, piecewise linear paths, to approximate curved paths. No one pretends that Feynman's particle travels along a path made of straight-line segments. And no one should pretend that the Loll quantum continuum is made of little triangles. The triangles could just as well be squares or any other tile shape; their size is taken to zero and what shape they are doesn't matter.
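For comparison, the time-sliced form of the Feynman propagator, where those polygonal paths appear, is

K(x_b, t_b; x_a, t_a) = \lim_{N \to \infty} \left( \frac{m}{2\pi i \hbar \epsilon} \right)^{N/2} \int dx_1 \cdots dx_{N-1} \exp\left\{ \frac{i}{\hbar} \sum_{k=0}^{N-1} \left[ \frac{m (x_{k+1} - x_k)^2}{2\epsilon} - \epsilon V(x_k) \right] \right\}

with \epsilon = (t_b - t_a)/N, x_0 = x_a and x_N = x_b. The straight segments and the slicing \epsilon are gone once the limit is taken, just as the triangles are gone in CDT's continuum limit.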
Have a look at the Loll SciAm article. It's excellent. There is also a growing technical literature for CDT available on arxiv, but I recommend the SciAm article. It gives a good idea of what is likely to come out of the current multipronged research into quantum gravity. There are a number of approaches and signs that they may have begun to converge. Space doesn't necessarily get broken up into little chunks, but it may reveal a more chaotic, less smooth structure at very small scale. At the micro level it may have the geometric Heisenberg jitters.
Remember too, that whatever continuum we come to define and use will always be merely a mathematical model. Nobody should confuse it with reality. At present almost all physics is done on some sort of differential manifold---a thing invented around 1850 by Riemann. A thing which generalized classical Euclidean space by allowing internally measurable curvature, among other things. Just because that model of space works well and is typically what is used does not mean it corresponds to reality. Most likely it doesn't! Most likely Riemann gives a very bad picture of space at microscopic scale. (And this could be at the heart of physicists' unrenormalizable divergence pains---they use a continuum which is vintage 1850 and totally unrealistic at small scale.)
friend said:
How then can spacetime itself be quantized? Have we developed a new quantization procedure that does not use underlying continuous parameters (a.k.a. spacetime)?
Well I've tried to suggest how geometry is quantized in the two leading approaches, LQG and CDT. They don't actually quantize spacetime itself. They quantize the geometry. Gravity = geometry, so quantizing geometry is the name of the game.
And there is an underlying continuum in both cases. Neither space nor spacetime is broken up into little chunks. So, in your sense, we continue to use continuous parameters. I think the answer to your second question "Have we developed...", if I understand it right, is no.
Because we don't need any revolutionary new procedure: in those two cases the geometry is defined on a continuum.
I would urge you to read the Loll SciAm article on CDT. Here is the link:
http://www.signallake.com/innovation/SelfOrganizingQuantumJul08.pdf
The link is also in my sig.
CDT is easier to grasp than LQG, at intro level, and in certain respects it is currently more complete.