# Two World-theories (neither one especially stringy)

1. Dec 19, 2004

### marcus

The two that look most promising to me are Lorentzian DT and Loop.
To look at the raw numbers---sheer quantity of research papers written per year---you'd say LQG was growing rapidly and DT was flat.

Lorentzian DT was first proposed in 1998, in a paper by Ambjorn and Loll. Here are some preprint numbers:

http://arxiv.org/find/grp_physics/1...gravity+AND+Lorentzian+quantum/0/1/0/1998/0/1

http://arxiv.org/find/grp_physics/1...gravity+AND+Lorentzian+quantum/0/1/0/1999/0/1

http://arxiv.org/find/grp_physics/1...gravity+AND+Lorentzian+quantum/0/1/0/2000/0/1

http://arxiv.org/find/grp_physics/1...gravity+AND+Lorentzian+quantum/0/1/0/2001/0/1

http://arxiv.org/find/grp_physics/1...gravity+AND+Lorentzian+quantum/0/1/0/2002/0/1

http://arxiv.org/find/grp_physics/1...gravity+AND+Lorentzian+quantum/0/1/0/2003/0/1

http://arxiv.org/find/grp_physics/1...gravity+AND+Lorentzian+quantum/0/1/0/2004/0/1

LORENTZIAN DT (etc.) PREPRINTS
Code (Text):

1998   3
1999   3
2000   5
2001   4
2002   6
2003   4
2004   4

Numberwise, DT doesn't look like much is happening.
Loop has been going longer, at least since the early 1990s. Here are output numbers for Loop and allied QG approaches.

Year 1994:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/1994/0/1
Year 1995:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/1995/0/1
Year 1996:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/1996/0/1
Year 1997:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/1997/0/1
Year 1998:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/1998/0/1
Year 1999:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/1999/0/1
Year 2000:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/2000/0/1
Year 2001:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/2001/0/1
Year 2002:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/2002/0/1
Year 2003:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/2003/0/1
Year 2004:
http://arXiv.org/find/nucl-ex,astro...m+AND+OR+triply+doubly+special/0/1/0/2004/0/1

LOOP (etc.) PREPRINTS
Code (Text):

1994    61
1995    83
1996    72
1997    70
1998    67
1999    76
2000    89
2001    98
2002   121
2003   139
2004   178

The 2004 figures are up through 19 December, which is close enough to yearend so one gets an idea.

I have been reading nothing but DT papers this morning. The approach has some unique and impressive advantages working in its favor. I would like to be able to compare these two quantum spacetime theories on an equal footing.
Their most noticeable disagreement is apt to concern the area and volume operators. As yet there is no indication that in DT these will have discrete spectra.

I would like to know why the research output in DT is essentially flat. Given its apparent promise and the recent (2004) success, why aren't more people getting into DT?

Last edited: Dec 19, 2004
2. Dec 20, 2004

### marcus

I will quote some about Lorentzian path integral from
the most pedagogical paper I know----Renate Loll
http://arxiv.org/abs/hep-th/0212340

----quote from "A Discrete History"----
The desire to understand the quantum physics of the gravitational interactions lies at the root of many recent developments in theoretical high-energy physics. By quantum gravity I will mean a consistent fundamental quantum description of space-time geometry (with or without matter) whose classical limit is general relativity. Among the possible ramifications of such a theory are a model for the structure of space-time near the Planck scale, a consistent calculational scheme to compute gravitational effects at all energies, a description of (quantum) geometry near space-time singularities and a non-perturbative quantum description of four-dimensional black holes. It might also help us in understanding cosmological issues about the beginning (and end?) of our universe, although it should be said that some questions (for example, that of the “initial conditions”) are likely to remain outside the scope of any physical theory.
---end quote---

That was the title of Regge's 1961 paper that set things up for Renate Loll and friends------"General Relativity Without Coordinates".

they can consider the space Geom(M) of all spacetime geometries on some manifold-----each geometry is described by listing interconnections between uniform-sized simplexes, some kind of computer data structure.

that is a point in Geom(M), it is real elementary barebones
there is no "gauge" or chaff of arbitrary choice (as when things are presented using coordinates)
and that barebones reality is what the quantum mechanics is about
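
To make "a point in Geom(M)" concrete, here is a minimal sketch of that kind of data structure (a hypothetical encoding for illustration, not the actual code anyone uses): a 4-simplex is just five vertex labels, and the gluing can be read off from shared tetrahedral faces.

```python
from itertools import combinations

# A 4-simplex is five vertex labels; a "geometry" is a collection of
# such simplexes, and the gluing is implicit in which tetrahedral
# (4-vertex) faces two simplexes share.  Hypothetical minimal encoding.

def faces(simplex):
    """All five tetrahedral faces of a 4-simplex."""
    return [frozenset(f) for f in combinations(simplex, 4)]

def shared_faces(s1, s2):
    """Tetrahedra along which two 4-simplexes are glued."""
    return set(faces(s1)) & set(faces(s2))

# Two 4-simplexes glued along the tetrahedron {1, 2, 3, 4}:
a = (0, 1, 2, 3, 4)
b = (1, 2, 3, 4, 5)
print(shared_faces(a, b))  # -> {frozenset({1, 2, 3, 4})}
```

No coordinates anywhere: the whole geometry is carried by which labels appear together, which is the "all geometry is in the gluing" point made later in the thread.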

I have always appreciated the spareness of LQG----it doesn't seem to have anything in it that isn't needed to describe a quantum theory of 4D spacetime. But to get started, LQG does employ a differentiable manifold and connections thereon. That takes in a batch of arbitrary mathematical equipage (physically meaningless "gauge" accessories) which then has to be factored out later. But I thought that LQG kept gauge to a bare minimum. After all, how could one ever get started without an underlying smooth manifold?

Systems of coordinates are an arbitrary physically meaningless choice but how do you get started without them?

Well the framework for Lorentzian path integral, or DT, is even more stripped down and nitty-gritty. No coordinate system. It seems right. Have to go; will try to get back to this later.

Last edited by a moderator: May 1, 2017
3. Dec 20, 2004

### nightcleaner

Code (Text):

n pick k                              pascal

n=0:  k = 0                           1
n=1:  k = 0 1                         1   1
n=2:  k = 0 1 2                       1   2   1
n=3:  k = 0 1 2 3                     1   3   3   1
n=4:  k = 0 1 2 3 4                   1   4   6   4   1
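
The two blocks above are the same numbers twice over: the "n pick k" counts and Pascal's triangle are both the binomial coefficients C(n, k). A minimal sketch:

```python
from math import comb

# Each Pascal row n lists C(n, k) for k = 0..n, i.e. "n pick k".
for n in range(5):
    row = [comb(n, k) for k in range(n + 1)]
    print(n, row)
# row 4 is [1, 4, 6, 4, 1], and each row n sums to 2**n
```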

Last edited by a moderator: Dec 20, 2004
4. Dec 20, 2004

### marcus

bravo Cleaner
I will go over to the other thread and do an example
============
since the essential thing about space is relations (next-to, between, around), so that space is in some sense a compendium of all those spatial relations, it is intuitive to me that a basic piece of space would be a simplex.

and it seems reasonable that global geometry would consist of saying HOW THEY ARE GLUED

but the minimal element of space, I could see, might be a tetrahedron---basically just 4 points

and for a path integral describing the evolution of space one would want to build it of the fivepointer analog (the simplex with 5 points and 5 tetrahedral walls)

OR, failing that, one would get a quantum model of spacetime by TAKING THE LIMIT with smaller and smaller simplices.

I mean that space is not actually to be imagined as diced up into little simplices, because maybe there IS no minimal distance. Maybe we just THINK that the planck length indicates some fundamental minimal length and it really doesn't! But even so it could be that the right approach to quantizing is to divvy up into simplexes and then make the simplexes smaller and smaller. Because the simplex approximation is a good approximation to how space behaves.

Last edited: Dec 21, 2004
5. Dec 21, 2004

### marcus

two kinds of simplexes in 4D

suppose we go along with Loll and Ambjorn and we say OK
simplexes are basic
and we are going to have a "path integral"

then we are going to get to recognize two kinds of fivepoint simplexes
or, if you'd rather, two types of orientation.
that is because CAUSALITY layers spacetime
a simplex can stand like a pyramid with 4 points in one spatial layer and the remaining 5th point upstairs in the next layer
or it can be upside down with the 5th point in the prior layer
(that is really the same kind)

in this case there are 6 spacelike edges and 4 timelike edges (connecting the 5th point to the other 4)

but there is another kind of fivepointer you maybe did not expect that may be thought of as dual to this one----it has 4 spacelike edges and 6 timelike!
this kind has 3 points on the ground and 2 points upstairs in the next layer (or turning it over) downstairs in the prior layer.
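
The spacelike/timelike bookkeeping above can be checked mechanically. In this sketch (an illustration, not Loll's code) each of the 5 vertices gets a layer label; an edge is spacelike when its two endpoints share a layer, timelike when they span adjacent layers:

```python
from itertools import combinations

def edge_counts(layers):
    """Count (spacelike, timelike) edges of a simplex whose vertices
    carry the given time-layer labels."""
    space = time = 0
    for i, j in combinations(range(len(layers)), 2):
        if layers[i] == layers[j]:
            space += 1   # both endpoints in the same spatial layer
        else:
            time += 1    # endpoints in adjacent layers
    return space, time

print(edge_counts([0, 0, 0, 0, 1]))  # (4,1) simplex -> (6, 4)
print(edge_counts([0, 0, 0, 1, 1]))  # (3,2) simplex -> (4, 6)
```

The ten edges of a five-pointer split 6+4 for the (4,1) type and 4+6 for its dual (3,2) type, exactly as stated.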

In Ambjorn and Loll's path integral approach each 4simplex has a piece of MINKOWSKI space in it. What could be a nicer material for them to be made of? All the simplexes are chunks cut out of the familiar 4D flat space of 1905 special relativity.

The two types of 4simplexes----call them (4,1) and (3,2), and remember there are flipped versions (1,4) and (2,3) that are so similar to the first two that we don't make a point of distinguishing them----are two ways that Minkowski space can be oriented so as to sit in the simplex.

when these little lego-bricks are glued together to make a spacetime PATH (from some initial to some final geometry of space)
then the GLUING HAS TO RESPECT the lightcones in each block. The fitting of face to face has to respect the way Minkowski space sits in each simplex.

that is why Renate Loll tells us about the two types of 4simplex. So we won't forget and try to stick two faces together in a way that disrespects the Minkowski causality, or lightcones, in the two neighboring pieces.

the two types are shown in her picture Figure 5 on page 11
of
http://arxiv.org/abs/hep-th/0212340
this paper I esteem more and more because of its
occasional kindergartenness
I just wish it were that way all the time
the simpler the better. amen.

Last edited by a moderator: May 1, 2017
6. Dec 21, 2004

### marcus

All Geometry Is In The Gluing

On page 10:
"... no local curvature degrees of freedom are suppressed by fixing the edge lengths; deficit angles in all directions are still present, although they take on only a discretized set of values. In this sense, in dynamical triangulations all geometry is in the gluing of the fundamental building blocks. This is dual to how quantum Regge calculus is set up, where one usually fixes a triangulation T and then “scans” the space of geometries by letting the li's run continuously over all values compatible with the triangular inequalities..."

This is just a reminder that after all spacetime is nothing but a PATH between two geometries of space---the way it is now and the way it will be later (or was earlier)

In the Feynman path integral spirit, one says that the path a particle follows in getting from here to there DOES NOT EXIST. the path does not exist and there is no unique path that it follows!
It just somehow gets from here to there, and to calculate a quantum mechanics amplitude of it doing so, we make a weighted sum over all the paths. An integral that mooshes together all the paths from here to there, even crazy ones.

OK now SPACETIME DOES NOT EXIST EITHER
there is just the way space was shaped before
and the way it is shaped now
and there are LOTS OF PATHS of geometry to connect from then to now.
And we have to be prepared to average---to take a weighted sum including all the paths even ones that seem quite unlikely
this is the Feynman path integral philosophy (which has a pretty good track record so probably isn't totally out of step with nature)

Now what Ambjorn and Loll need is a machine that will generate a random geometry path from spatial shape A to spatial shape B.
And since all geometry is in the gluing, this means the machine
has to be able to output a random 4D TRIANGULATION that gets from shape A to shape B,
which means it has to find ways of gluing uniform-size fivepointer simplexes
of those two orientation types together, so as to connect from A to B (3D conditions of space before and after)----and do it in an orderly layered way.

the more I read of this explanation by Loll the more I think that this is actually what a quantum theory of gravity ought to look like.

I mean that it ought to provide a path integral in the space of geometries.
Or a measure on the space Geom(M) of 4D geometries.
I DON'T CARE IF IT IS SIMPLICIAL or not. Simplexes and gluing is just one way of describing a point in Geom(M)-----just one way of specifying a 4D geometry-----i.e. a path from A to B.

If someone can find a general way of describing a 4D geometry that is less messy than with simplexes that would be great! however my experience with coordinates is that the minute you try to do it with coordinates and metrix and ten-sores and coneckshuns, in that moment you have opened the closet of the nineteenth century and it is very difficult to close the door back up.

Another nice thing is that it doesn't matter if the path is jagged and zigzag because it's QUANTUM so it gets blurred with other paths.
this is a great thing, and it is reminiscent of the original Feynman path integrals, which were zigzag, piecewise-linear, jagged and thus completely unrealistic paths---- the real particle wouldn't behave like that but it DOESN'T MATTER, you still calculate good results because the jagged things are blurred together in the weighted average. Well the same thing happens here: the Ambjorn Loll approach is intrinsically quantum because when you glue simplexes together, especially these uniform-sized ones, you almost never get anything FLAT, you get something which a PF poster has called a "broken glass" look. But all that averages out and the overall effect can be smooth.

Renate mentions that somewhere. i will try to find the page.

7. Dec 21, 2004

### nightcleaner

Maybe. But consider the Compton wavelength, where for example the Compton wavelength of an electron is calculated from the energy of the electron. If a universe has a measurable amount of energy, then it should have a minimum Compton wavelength. Since more energy means smaller wavelength, the Compton wavelength of a universe should be the smallest length possible in that universe.

You could substitute "measurement" for "universe" in the case of any real world observational system.

If there is a minimal length, then there is a minimal time, given the maximum velocity c, where c = minimum length/minimum time. All the other units, such as energy, power, voltage, current, resistance, etc., can be calculated from these base units; see Wikipedia, Natural units.

http://en.wikipedia.org/wiki/Natural_units
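
The relation c = minimum length / minimum time, and the rest of the natural-units arithmetic, can be sketched in a few lines (standard CODATA-style constant values assumed; a check, not anyone's official derivation):

```python
from math import sqrt

hbar = 1.054571817e-34  # reduced Planck constant, J s
G    = 6.67430e-11      # Newton's constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light, m/s

l_P = sqrt(hbar * G / c**3)   # Planck length, ~1.616e-35 m
t_P = sqrt(hbar * G / c**5)   # Planck time,   ~5.391e-44 s
m_P = sqrt(hbar * c / G)      # Planck mass,   ~2.176e-8 kg

# dividing the base length by the base time recovers c, as argued above
print(l_P / t_P / c)  # close to 1.0 (up to float rounding)
```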

I would like to challenge the assertion found in Kaku (Hyperspace p. 10) and elsewhere that humans cannot visualize in four dimensions. I base my proposition on my personal experience of discovering binocular vision at a delayed age, so that I can remember vividly the experience of "seeing" the world in 3d for the first time, when before I had "seen" only in the flat, monocular view. I can tell you it was a very exciting experience, given to me originally by my ophthalmologist through the careful selection of lenses and mirrors. I was sixteen years old.

Since I have had the distinct pleasure of "popping up" into the 3d vision from the 2d world, I see no reason why a further progression should not be possible from our usual and common 3d vision into a 4d vision. Or five or six or any required number of dimensions. It just takes careful thought and practice. I see regularly in 3d now, using both eyes at once, because once I had seen how beautiful it is, I practiced it until I could do it without mirrors and lenses.

Vision in four dimensions is not much more difficult. Anyone who can catch a baseball should have no trouble with it. A catcher has to know where the object is, where it was a moment ago, and where it will be by the time of interception. Moving across a field to catch a ball on the run is clearly a four dimensional activity. Why should we have difficulty seeing what we are already able to do?

Last edited by a moderator: Dec 21, 2004
8. Dec 21, 2004

### nightcleaner

Ok so by upstairs and downstairs here do you mean the next instant in the future or in the past? Please confirm this so I know I have gotten your point.

9. Dec 21, 2004

### nightcleaner

I have to challenge this view of the Feynman path integral. This interpretation depends on the idea that "really" the particle can only follow one path, no matter what the math says, no matter what the two slit experiment says. One particle, one path.

However, there is another interpretation, one which is fully consistent with the Feynman path integral, as described in Quantum Electrodynamics. That is the many worlds interpretation of Everett, Deutsch and others, or in my idiosyncratic formulation, the many times interpretation. In this paradigm, the particle does indeed follow every path available to it, just as a single photon is shown to reflect off of every part of a mirror in QED, which is an unavoidable result. This is what the photon or the particle actually does. What we see in our 3d 1t vision is only one edge of the simplex which the object occupies.

This view makes all the path integrals real, as opposed to the method you have chosen, which says only one can be real, while all the others are some sort of statistical trick, a mathematical illusion. Why not use Occam? You already have huge evidence that the other dimensions are present, and that they have a definable geometry, and that our usual vision is limited to 3d in space and one in time.

Four pick three, Marcus. There are four dimensions in spacetime, of which we pick three to hold our view of space, leaving one of time. How many ways are there to do this? Four ways, Marcus. Four possible paths of time from any instant. You choose.

When we move through four dimensional spacetime, at any instant there will be three dimensions of choice (space) and immediately beyond them but still in our quantum view a fourth dimension, which is time. The fourth in this case is not unique, but one of four possible time dimensions. The fifth point of the simplex? You are standing upon it. You the observer, three space, one time, five points.

10. Dec 21, 2004

### marcus

yes.
I am picturing spacetime as (they often say) foliated
that is to say "leaved", or layered
like phyllo dough
this is still at an intuitive stage for me and I can't say it more precisely yet

but we do seem to be seeing it similarly
so on a visual level things are ok.

this foliation is a kind of representation of causality or
temporal ordering, as I see it: deeper down is further into the past

11. Dec 21, 2004

### marcus

this is witty and entertaining but...
well and provocative too, but...
I probably am not going to respond because of a deeply engrained
intellectual laziness.
besides, my wife is playing an Elvis Presley greatest-hits collection, which she does whenever she sews (it helps her regress back to the 1950s, when women DID sew, and it all seems to fit), and how can I think philosophy under the circumstances

12. Dec 21, 2004

### marcus

It is a little like the Zen experiment of dropping a drop of ink into
a glass of water and watching it until......

As I recall, the planck mass is 22 micrograms
so if the mass of the universe is a billion planck masses (of course it is really much more, since that is only 22 kilograms!)
then the compton of the universe is one billionth of the planck length.
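
The arithmetic here is exact: since ħ/(m_P c) = l_P, the Compton wavelength of N Planck masses is l_P/N. A quick numerical check (standard constant values assumed):

```python
from math import sqrt

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8
m_P = sqrt(hbar * c / G)      # Planck mass, ~22 micrograms
l_P = sqrt(hbar * G / c**3)   # Planck length

N = 1e9                       # a billion Planck masses (~22 kg)
lam = hbar / (N * m_P * c)    # Compton wavelength of that mass
print(lam / l_P)  # about 1e-09: one billionth of the Planck length
```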

13. Dec 21, 2004

### nightcleaner

This is what I have been trying to give you. Consider the Compton wavelength of the universe. Consider a sphere of radius one universal Compton wavelength (call it a Planck, it is shorter to spell and afaik it is the same thing). Consider a dense stack of these spheres. That is what 4d spacetime looks like.

Now consider the observer as if the observer could occupy a single sphere. I know we are too big to fit into a single sphere of that size, but suspend your disbelief on this point for a moment and I will try to remember to come back to it. For now, just accept that there are larger spheres which we do fit into, and they behave the same way as the Planck size spheres I am describing.

If the observer occupies one sphere, then there are twelve spheres around the observer. Each of these spheres is a next instant. In a sense, they make a layer around the observer, a layer of events that are infalling at the speed of light. The observer in the one sphere must wait until the next instant to know what is happening in the next layer. In a sense, the universe of the observer is growing one layer per instant.

But the observer is always moving. I can tell you why the observer has to move in a moment, but stay with me here. Because the observer is moving, there are some spheres which are left behind. No information from those spheres can catch up to the moving observer. Although all twelve spheres are next to any spacetime instant, the observer only "sees" the ones directly in the path of the movement. The others are left behind.

Hence, the observer seems to occupy a four dimensional spacetime universe in which there are three visible spatial dimensions and one time dimension. The observer follows a path, and sees all other objects following paths. The real higher dimensional structure is not seen, but only the path edges of the simplices. But it exists and we know it exists because 1. The math requires it (eg string theory); and 2. Observations in the laboratory confirm it (eg 2 slit wave particle duality experiments); and 3. Cosmological observations confirm it (eg, GR and dark energy/ dark matter).

Now for gluing simplices. Yes, you can build spaces by gluing simplices. Not all spaces that are possible in geometry are possible in our universal conditions. AJL seem to want to use the octet space formed by building entirely with tetrahedrons. I have explained elsewhere why this is not the optimal space for modeling our universe. One must ask where the tetrahedrons come from? I have given a derivation from Planck lengths. It does not involve only tetrahedrons, but also includes cubes. It can be seen to hold both triangular simplices and tetrahedral ones, as well as cubic forms.

The isomatrix. The cubeoctahedron. Face Centered Cubic. Please look at the link.

nc

14. Dec 21, 2004

### nightcleaner

Hi Marcus. I just now saw that you were online and replying. By all means be with your family. We have to keep our priorities straight, and this stuff here is all starlight on a distant sea. I trust you will return again, refreshed and ready to trade points with me? Be well, on this longest of nights.

nc

15. Dec 21, 2004

### nightcleaner

The Planck mass is derived in a different way from the Planck length, but I value your attempt to quantify this, since I work easier with words and images than with numbers. I hope we can work through this so I can see if the ideas match up with the observations. At first glance you have provided a challenge. Thanks!

I have been studying this for a while now and let's see if I can get it right in one go.

The Planck length and Planck time were given in pretty much their current form by Max Planck about a hundred years ago. No one seems to know how he came on the right numbers, but the Planck length is about 1.6x10^-33 cm, and the Planck time is about 5.4x10^-44 seconds, which when divided give the speed of light at about c = 3x10^10 cm/s. I'll have to check and see if I got those numbers right.

The Planck mass is derived by considering how much mass can be crammed into a small space before it collapses into a Schwarzschild singularity. I think if memory serves that the small space is a proton diameter. The mass that can be crammed into a proton diameter is about the mass of a small flea.

So, your estimate may not be based on first principles.

I am going to go look up the numbers and derivations. Maybe I'll even find out how to calculate the Compton Wavelength while I am at it.

Thanks,

nc

Last edited by a moderator: Dec 21, 2004
16. Dec 21, 2004

### nightcleaner

just have to answer this first. Jumping to the end of my line of reasoning without going over the middle ground, when we get to our current spacetime, the universe we find ourselves in is very nearly flat, and we are very nearly large. The past is in, the future is out, and we exist in a thin layer. The layer is very nearly regular, but has some flaws in it which come from the difference between the close pack face centered cubic form and the curvature of spacetime at our distance, quite a large distance, from the origin.

This is a spacetime distance and is reflected in our observations of the cosmos, eg CMBE, but is not to be thought of as distant from us in space. The origin now and always is within.

The flaws are matter and energy, the very regular areas are "vacuum" or empty space, and the vacuum fluctuations come about because of the uncertainty of position of the flaws. You could think of the Planck spheres as being very nearly perfectly densely packed, but not quite perfectly. The little bit of slop in the fit accounts for all the phenomena we observe. We never observe the spacetime directly.

nc

17. Dec 21, 2004

### nightcleaner

From this source:

http://en.wikipedia.org/wiki/Natural_units

I get this derivation:

<tr><td>'''[[Planck mass]]'''</td>
<td>[[Mass]] (M)</td>
<td>$m_P = \sqrt{\frac{\hbar c}{G}}$</td>
<td>[[1 E-8 kg|2.17645 × 10<sup>-8</sup>]] [[kilogram|kg]]</td>
</tr>

but I see it does not show the formula in this forum. I'll have to review my latex skills.

$$m_p=\sqrt{\frac{\hbar c}{G}}$$

That was easy. Just replace math with tex and <> with []

so

$$m_p=\sqrt{\frac{\hbar c}{G}}$$ = 2.17645 x 10^-8 kilogram

I guess that's about 2.2x10^-5 grams, or 2.2x10^-2 micrograms, or .02 micrograms? Anyway within a couple orders of magnitude.

Now to find out how to calculate Compton wavelength.

I find the Compton wavelength of the electron to be listed at :

http://en.wikipedia.org/wiki/Physical_constant

as

$$\lambda_e=h/m_e c$$

This is not helpful. I see that the Compton wavelength already assumes the value of the Planck length, h, is known. If we write

$$\lambda_U=h/m_U c$$

then lambda_U=h when m_U c =1. Since c is 1 in base units, then m_U is just 1 also. But what if we go back to CGS units? When is the mass of the universe times the speed of light equal one? When the mass of the universe is the inverse of the speed of light. c = 3x10^8 meters/second so 1/c = 3x10^-9 seconds per meter. What kind of a mass is that?

Let's try a different tack. I interpret Lambda_e as the radius of the region in which an electron is most probably found. (Since an electron is a point particle, this radius is really the radius of the area in which the electron is most likely to interact with photons via quantum fluctuations and virtual particles.) So Lambda_U would be the likely radius of the universe. We can get that from cosmological data. But should we use the inflation value of 78 billion light years, or the age of the universe data of 13 billion light years? Well they are only an order of magnitude or so apart.
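
For reference, the quoted formula lambda_e = h/(m_e c) does evaluate directly to a length, about 2.4 picometers (standard CODATA-style values assumed):

```python
# h is Planck's constant (J s), not a length; dividing it by
# momentum m_e * c yields a length.
h   = 6.62607015e-34   # J s
m_e = 9.1093837e-31    # electron mass, kg
c   = 2.99792458e8     # m/s

lam_e = h / (m_e * c)  # electron Compton wavelength
print(lam_e)  # about 2.43e-12 m
```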

Anyway the radius of the universe is equal to the smallest possible length in the universe divided by the mass of the universe times the speed of light. Sounds easy enough. Solving for h, h equals the radius of the universe times the mass of the universe times the speed of light. Ooops. That looks like a big number.

But the extremely small is the inverse of the extremely large. Can we be justified to say then that the formula should be inverted on one side? Then we would have radius universe = mass universe times speed of light, divided by smallest possible length. So smallest possible length equals mass of universe times speed of light, divided by radius of universe.

$$\lambda_U=1/\lambda_p$$

$$\lambda_U=m_U c/h$$

$$h=m_U c/\lambda_U$$

Last edited by a moderator: Dec 22, 2004
18. Dec 22, 2004

### nightcleaner

ok I think I see it better now. A Compton wavelength is not a length at all, but a frequency, T^-1. So to recover a length from the Compton wavelength, it is necessary to divide a velocity by that frequency.

working on it. Have to sleep. nc

Last edited by a moderator: Dec 22, 2004
19. Dec 22, 2004

### marcus

that is the same as 22 x 10^-6 grams

that is the same as 22 micrograms

because a microgram is a millionth of a gram----that is, 10^-6 gram

I am glad to see you using Latex

in many discussions you will find h standing for a version of planck's constant (not planck length) and hbar standing for a reduced version of planck's constant, namely h divided by 2 pi.

the formulas defining the planck quantities customarily use hbar, as you did when you wrote:
$$m_p=\sqrt{\frac{\hbar c}{G}}$$
but the formula for the Compton sometimes uses h and sometimes hbar, so what people call the compton can vary by a factor of 2 pi.
humans, imperfect as they are, sometimes waffle a bit in their notation
requiring tolerance and goodwill on everyone's part
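
The factor-of-2-pi waffle is easy to see numerically (electron values used only as an example), along with the kilogram-to-microgram conversion above:

```python
from math import pi

h    = 6.62607015e-34        # Planck's constant, J s
hbar = h / (2 * pi)          # reduced Planck's constant
m, c = 9.1093837e-31, 2.99792458e8   # electron mass, speed of light

lam_h    = h / (m * c)       # "h" version of the Compton wavelength
lam_hbar = hbar / (m * c)    # "hbar" version
print(lam_h / lam_hbar)      # about 6.283, i.e. exactly 2 pi

print(2.17645e-8 * 1e9)      # kg -> micrograms: about 21.8, i.e. "22 micrograms"
```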

Last edited: Dec 22, 2004
20. Dec 22, 2004

### marcus

Monte Carlo method: find the answer by random wandering

in the path integral approach to quantum spacetime
one has an integral which is an average over all geometries

(a weighted sum of 4D geometries that get you from one 3D condition of space to another one later on, but we have quantum uncertainty about what went on in between)

now Geom(M) the vast warehouse of all possible 4D geometries is a huge place to wander around in
and actually summing or integrating over all those possibilities (as a very dutiful conscientious person would do when asked to find the average) is next to impossible
(there are more degrees of freedom in the geometry of a whole spacetime than, for instance, in the mere path of a single particle going from here to there----so the vastness of the possibilities is vaster)

nevertheless that is the idea of quantizing, you have a bigspace of all possibilities and you define wavefunctions on the bigspace and you define quantum states and you integrate and so on----you have to be able to describe a blur of possibilities and an indefiniteness about how you got from one situation to another

so what to do? the Monty approach says to define a small set of MOVES which allow you to do a RANDOM WALK in the vast warehouse of geometries------and go for a walk, and get lost, and wander about AVERAGING AS YOU GO

this is very zen because it uses the vastness in order to overcome the vastness----because it is very huge you can wander randomly and be sure of not coming back or getting caught in a loop---and therefore you can make a RANDOM SAMPLE and the average of a random sample is a good estimate of the real average.

So Renate "the Fox" Loll defines what she calls the Monte Carlo moves, which are rearrangements of the simplex-gluings which get you from one geometry to another "nearby" geometry

these moves are so-called "ergodic", which means that if you do them enough you will eventually pass through every configuration in the warehouse.
ergodic is an idea about mixing which means THOROUGH
the moves are very little but they thoroughly stir the geometry
so if you do enough of these little moves you will completely stir things up.

Jan Ambjorn should be praised for this too. And it goes back to the 1980s and 1990s, when people applied it to "euclidean" (not lorentzian) path integrals and dynamical triangulations. But even though the Monty method did not originate with Ambjorn and Loll, I find it admirable, and they are the ones who finally applied this approach to the Lorentzian setup and finally, in 2004, made it work right.
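
The "wander and average as you go" idea can be sketched on a toy state space (an illustration of ergodic-move averaging, not AJL's actual moves): a random walk on a small ring of configurations, accumulating a running average of an observable, converges to the true average without ever enumerating the whole space.

```python
import random

random.seed(1)  # fixed seed so the wander is reproducible

n_states = 20                     # a tiny "warehouse" of configurations
f = lambda s: s * s               # some observable defined on the states
exact = sum(f(s) for s in range(n_states)) / n_states   # true average: 123.5

state, total, steps = 0, 0.0, 500_000
for _ in range(steps):
    # one small ergodic move: step to a neighboring configuration
    state = (state + random.choice((-1, 1))) % n_states
    total += f(state)             # averaging as you go

print(total / steps, exact)       # the running average lands close to 123.5
```

The moves are tiny but ergodic (every state is eventually reachable), so the time average over the walk is a good estimate of the average over the whole space, which is the zen trick described above.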

Last edited: Dec 22, 2004