# Can you explain "Emergence of a 4D World"?

1. May 10, 2004

### marcus

Can you explain "Emergence of a 4D World"?

Can anyone help explain the new paper
"Emergence of a 4D World from Causal Quantum Gravity"
by Ambjorn, Jurkiewicz, Loll?

http://arxiv.org/abs/hep-th/0404156

ten pages
involves Monte Carlo simulations
(computer experiments)

Last edited by a moderator: May 1, 2017
2. May 10, 2004

### marcus

just to get some of the gruntwork out of the way
we need the idea of the Hausdorff dimension so
here is "mathworld" on the Hausdorff measure of a
subset of a metric space

"Let X be a metric space, A be a subset of X, and d a number. The d-dimensional Hausdorff measure of A, H^d(A), is the infimum of positive numbers y such that for every r > 0, A can be covered by a countable family of closed sets, each of diameter less than r, such that the sum of the dth powers of their diameters is less than y. Note that H^d(A) may be infinite, and d need not be an integer."
http://mathworld.wolfram.com/HausdorffMeasure.html

and then mathworld defines Hausdorff dimension like this:

"Formally, let A be a subset of a metric space X. Then the Hausdorff dimension D(A) of A is the infimum of d such that the d-dimensional Hausdorff measure of A is 0 (and D(A) need not be an integer)."
http://mathworld.wolfram.com/HausdorffDimension.html

wikipedia has essentially the same definition
http://en.wikipedia.org/wiki/Hausdorff_dimension

this is what Ambjorn et al mean by "effective dimension"
and they say in the introduction

"... a particular case of the more general truth, not always appreciated,
that in any nonperturbative theory of quantum gravity “dimension” will become a dynamical quantity, along with other aspects of geometry. (By “dimension” we mean an effective dimension observed at macroscopic scales.)..."

Last edited: May 10, 2004
3. May 10, 2004

### marcus

wikipedia's definition is well written and includes examples so I'll quote

----quote wikipedia---
The Hausdorff dimension (also: Hausdorff-Besicovitch dimension, capacity dimension and fractal dimension), introduced by Felix Hausdorff, gives a way to accurately measure the dimension of complicated sets such as fractals. The Hausdorff dimension agrees with the ordinary (topological) dimension on "well-behaved sets", but it is applicable to many more sets and is not always a natural number. The Hausdorff dimension should not be confused with the (similar) box-counting dimension.

If M is a metric space, and d > 0 is a real number, then the d-dimensional Hausdorff measure H^d(M) is defined to be the infimum of all m > 0 such that for all r > 0, M can be covered by countably many closed sets of diameter < r and the sum of the d-th powers of these diameters is less than or equal to m.

It turns out that for most values of d, this measure H^d(M) is either 0 or ∞. If d is smaller than the "true dimension" of M, then H^d(M) = ∞; if it is bigger then H^d(M) = 0.

The Hausdorff dimension d(M) is then defined to be the "cutoff point", i.e. the infimum of all d > 0 such that H^d(M) = 0. The Hausdorff dimension is a well-defined real number for any metric space M and we always have 0 ≤ d(M) ≤ ∞.

Examples

The Euclidean space R^n has Hausdorff dimension n.

The circle S^1 has Hausdorff dimension 1.

Countable sets have Hausdorff dimension 0.

Fractals are defined to be sets whose Hausdorff dimension strictly exceeds the topological dimension. For example, the Cantor set (a zero-dimensional topological space) is a union of two copies of itself, each copy shrunk by a factor 1/3; this fact can be used to prove that its Hausdorff dimension is ln(2)/ln(3) (see natural logarithm). The Sierpinski triangle is a union of three copies of itself, each copy shrunk by a factor of 1/2; this yields a Hausdorff dimension of ln(3)/ln(2).
-----end quote----
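The self-similarity argument in those examples can be checked numerically with a box-counting estimate, a close relative of the Hausdorff dimension that agrees with it for sets like these. Here is a minimal sketch in Python; the integer representation of the Cantor endpoints is my own device to avoid floating-point trouble at box boundaries:

```python
import math

def cantor_points(depth):
    """Integer left endpoints (in units of 3**-depth) of the 2**depth
    intervals in the depth-th stage of the Cantor construction."""
    pts = [0]
    for level in range(depth):
        width = 3 ** (depth - level - 1)  # interval width at the next stage
        pts = [p for x in pts for p in (x, x + 2 * width)]
    return pts

def box_count(points, box_size):
    """How many boxes of (integer) size box_size cover the points."""
    return len({p // box_size for p in points})

depth = 12
pts = cantor_points(depth)
# Box sizes 3**8 and 3**4 (in units of 3**-12) correspond to eps = 3**-4
# and eps = 3**-8; the dimension is the slope of log N(eps) vs log(1/eps).
n_coarse, n_fine = box_count(pts, 3 ** 8), box_count(pts, 3 ** 4)
dim = math.log(n_fine / n_coarse) / math.log(3 ** 4)
print(dim)  # close to ln(2)/ln(3) ≈ 0.6309
```

Refining the boxes by a factor 3^4 doubles the count four times over (16 → 256 boxes), which is exactly the 2-copies-at-scale-1/3 self-similarity the quote describes.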

Ambjorn et al say

"Note that the dynamical nature of “dimensionality” implies that the Hausdorff dimension of the quantum geometry is not a priori determined by the dimensionality at the cut-off scale a, which is simply the fixed dimensionality d
of the building blocks..."

they give examples where microscopic building blocks of some dimension d were used but the effective macroscopic dimension came out different from d---even infinite.

I don't yet understand why they do this but they seem to be postulating a 4D world down at Planck scale and letting an effective macroscopic dimension emerge---geometric observables, including large-scale dimension, arise from a state sum

the state sums are computed using Monte Carlo techniques

(resorting to Monte Carlo is not unusual when people face giant integrals which they can't see how to compute explicitly)

Ambjorn et al make the point that the macroscopic dimension didn't have to come out 4, but it did.
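To make the parenthetical remark above concrete, here is the standard Monte Carlo trick on a toy example: estimating a 4-dimensional volume by random sampling instead of explicit integration. This is a sketch of the general technique only, not of AJL's actual state sum; the function name is mine.

```python
import math
import random

def mc_volume_of_ball(dim, n_samples=200_000, seed=1):
    """Monte Carlo estimate of the volume of the unit ball in `dim`
    dimensions: sample uniformly in the cube [-1, 1]**dim, count hits,
    and scale the hit fraction by the cube's volume 2**dim."""
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(n_samples)
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(dim)) <= 1.0
    )
    return (2.0 ** dim) * hits / n_samples

# Compare the estimate against the exact 4D value, pi**2 / 2.
print(mc_volume_of_ball(4), math.pi ** 2 / 2)
```

The accuracy improves only like 1/sqrt(n_samples), but the method is indifferent to the dimension of the integral, which is why it is the tool of choice for sums over geometries.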

at the end of the introduction they also say:
"...In what follows we will report on the outcome of the first ever Monte Carlo
simulations of four-dimensional causal dynamical triangulations. It differs radically from what was found in previous simulations of four-dimensional Euclidean dynamical triangulations. We will present strong evidence that the Lorentzian framework produces a quantum geometry which is both extended and effectively four-dimensional. This is to our knowledge the first example of a theory of quantum gravity that generates a quantum spacetime with such properties dynamically."

4. May 10, 2004

### Mike2

Yes, read the opening paragraph. The title seems to assume a valid version of quantum gravity to derive its results. I'm not aware that one version of quantum gravity has been validated.

Then the abstract reads "sum of geometries of a non-perturbative quantum gravity". I thought that sum of geometries (or paths) was exactly the definition of perturbation theory.

5. May 10, 2004

### sol2

Monte Carlo Effect

"We use the word membrane to mean a sheet-like 2 dimensional object, an object with area but very little or no thickness. Good examples are sheets of paper or a piece of plastic food wrap. Just like surfaces, membranes can be flat or curved; rough or smooth."

"What then are space and time like on a very very tiny scale? Nobody knows for sure, but it seems fairly certain that a theory describing the nature of space-time on a small scale must have some of the important aspects of quantum mechanics. It might be that the exact shape of space and time is rather indefinite on some small scale. Just like we sometimes have to think of an electron as a cloud-like entity that exists everywhere that an electron particle might exist, perhaps we have to think of space and time on some small scale as existing in a fuzzy state that is a combination of all the possible ways that might be curved. Such a theory is called a theory of quantum gravity."

You might want to check out Cubist Art and the Monte Carlo Effect in Michio's forum at one's leisure and on a rainy day with nothing else to do.

And that's all I got to say about that

Last edited by a moderator: Apr 20, 2017
6. May 11, 2004

### arivero

I am not sure if they really use the Hausdorff dimension or the Minkowski dimension, which is also an old measure of fractality in a broad sense (not to be confused with Minkowskian space-time, of course).

Following Mike2, I am also in doubt about which model of quantum gravity they validate.

7. May 12, 2004

### marcus

there was some reaction on SPR to the AJL paper today (or to Baez's report of it) from Thomas Larsson. Here's an excerpt from Larsson's post:

----quote----
> This trio of researchers have revitalized an approach called "dynamical
> triangulations" where we calculate path integrals in quantum gravity by
> summing over different ways of building spacetime out of little 4-simplices.
> They showed that if we restrict this sum to spacetimes with a well-behaved
> concept of causality, we get good results. This is a bit startling,
> because after decades of work, most researchers had despaired of getting
> general relativity to emerge at large distances starting from the dynamical
> triangulations approach. But, these people hadn't noticed a certain flaw
> in the approach... a flaw which Loll and collaborators noticed and fixed!

This is pretty exciting. It is sort of obvious that you can formally
put gravity on a lattice, but I always thought that there wasn't a
continuum limit. If the numerical evidence in this paper is true, and
it seems quite strong, then we see a new field open up here, perhaps
like when Wilson invented lattice gauge theory in 1974. A lot of
interesting things can be done, e.g. to apply standard techniques in
lattice models, introduce gauge and fermion fields, and try to find
different continuum formulations. I would not be surprised if this is
the next bandwagon and a lot of smart people will jump onto it.
--------end quote----

8. May 12, 2004

### marcus

On page 7 for example they say:
"A best fit for the spatial Hausdorff dimension....yields d_h = 3.10 +/- 0.15..."

I supposed that their d_h stands for d_Hausdorff
so I am curious as to why you were uncertain which measure of dimensionality they were using. Did you find some ambiguity at some other point in the paper?

Or are they perhaps in a situation where several possible ways to measure the dimension actually agree? The longer paper they say is in preparation [reference 25] might clarify this, I imagine.

9. May 12, 2004

### marcus

the way I'm currently picturing the AJL model of spacetime
is like this
(in the computer) they start with a heap of little blocks
from Figure 3 you can see that they played the game over and over again
starting with different numbers of blocks
either 46,000 blocks
or 91,000, or 184,000, or 371,000 blocks

and the blocks start out loose, not sticking to each other, and then
they throw a switch making all the blocks magnetic!

so the blocks all jump together and snap together (a bit like a third of a million identical "Lego" toy bricks assembling themselves spontaneously)
and make a spacetime

and that spacetime has an amplitude or a relative probability (versus the other possible ways the blocks could have jumped together and assembled themselves)

this weight function of each possible spacetime (related to the Einstein-Hilbert action) is described in their equations (2) and (3) on page 4

observables can be defined on the spacetimes, like an "average distance between points" which they call <r>. Then one can find the expected value of any observable by a (weighted) sum over all possible geometries.

Any such sum can be estimated by Monte Carlo methods-----choose a few geometries at random and evaluate the observable on each of them and average up.

Among other observables, the dimension of a spatial slice can be found by comparing <r>, the average distance between points in the slice, with the cube root of the volume of the slice.
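That last observable can be illustrated with a toy calculation. This is not AJL's simplicial setup, just random points in an ordinary d-dimensional box, but it shows how comparing <r> with the volume reads off the dimension: <r> grows like V^(1/d), so the slope of log <r> against log V is 1/d.

```python
import math
import random

def mean_distance(dim, side, n_pairs=20_000, seed=0):
    """Average Euclidean distance <r> between random point pairs in a
    dim-dimensional box of linear size `side` (so volume side**dim)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        p = [rng.uniform(0.0, side) for _ in range(dim)]
        q = [rng.uniform(0.0, side) for _ in range(dim)]
        total += math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return total / n_pairs

# For a d-dimensional region, <r> grows like V**(1/d), so the slope of
# log <r> against log V reads off 1/d.
dim = 3
v1, v2 = 8.0 ** dim, 16.0 ** dim   # volumes of side-8 and side-16 boxes
r1, r2 = mean_distance(dim, 8.0), mean_distance(dim, 16.0)
slope = math.log(r2 / r1) / math.log(v2 / v1)
print(1.0 / slope)  # close to 3, the dimension we put in
```

AJL do the analogous comparison on their spatial slices, which is how the d_h = 3.10 +/- 0.15 fit quoted earlier comes about.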

There is something new about the AJL approach (as I understand what Baez says) which is that AJL made a special rule about how the blocks
could jump together. The blocks are all identical 4-simplices and they had to
jump together in a spatially foliated way. They had to assemble themselves
into a series of spacelike slices
each slice joined to the next by 4-simplices
so the spacelike slices are ordered by a sort of time parameter
(the possibility of reparametrizing the "proper time" in the model is an issue)

anyway I picture the game AJL play like this: they start with a bag of identical 4-simplex blocks which are loose, not sticking to each other.
there are a third of a million of these little "Lego-like" blocks
they play a game which is to flip the sticky switch or turn on the magnetism so that all the blocks jump together and assemble into a spacetime
(but they must form one with spatial slices, stacked like a deck of cards, foliated in other words, which lets us talk about causality, past future etc)
and then they score or weight these things

they score them in a way that, interestingly enough, seems to correspond both to their relative likelihood of happening (probability, amplitude) AND to
some expression derived from the Einstein-Hilbert action applied to this discrete situation.

and then they continue the game by calculating observables about this spacetime
and they can add up weighted sums of observable-values
and one set of observables tells you how the spacetime looks large-scale
and one of their discoveries is
that the spatial slices are extended and three dimensional

or that the spacetime is extended and four dimensional

then they say, on page 7 right before "Discussion"

"...As should be clear from the Introduction, reproducing this “classical” dimensionality dynamically from a fully nonperturbative formulation
of quantum gravity constitutes a highly nontrivial result."

the reason being that people have been trying this for years and it hasn't worked: the blocks would assemble themselves into curled up or fractalish
messes with the wrong dimensionality. so they are saying that they finally figured out how to make it work

it seems like a natural game to play and a very straightforward way to get a completely background-independent and nonperturbative model of spacetime.
you start with a bag of planckscale dust and it assembles itself into a geometry---what could be more natural

Last edited: May 12, 2004
10. May 12, 2004

### MathematicalPhysicist

so this paper has a connection to something in fractal geometry (I conclude it from the Hausdorff dimension)?

11. May 12, 2004

### marcus

Hi loop! I do not see any connection to fractal geometry, but there might be.
...there may be fractal stuff in this discussion that I just am not aware of
so please keep on the look out for it and tell me.
-------------------
I am still sorting this Dynamical Triangulation stuff out, getting the research areas tagged and mapped.

Curiously enough the best map I've found so far is in a 1997 survey by Rovelli
http://arxiv.org/abs/gr-qc/9803024
It was a survey talk at a 1997 General Relativity conference.

He sketched out all the lines of quantum gravity research starting with the two most popular String and Loop
...
Code (Text):

III Main Directions
A String
B Loop
C Discrete Approaches
1. Regge calculus
2. DYNAMICAL TRIANGULATION
3. Ponzano-Regge

...

he describes all these things and compares and contrasts
and reports the weak points as well as the achievements

And as a source on Dynamical Triangulation he cites a 1996 set of lecture notes by Ambjorn.
http://arxiv.org/abs/hep-th/9612069

Simplicial Quantum Gravity (specifically the DT approach) goes way back
and seems very sensible and looked like it would give the right largescale limit
but had for a long time shown intractable problems.
It is to Ambjorn's great credit that he stuck with it and
then had the luck to team up with Loll in 1998, while they were both
at MPI-Potsdam (AEI).

Last edited by a moderator: May 1, 2017
12. May 13, 2004

### marcus

John Baez has a brief explanatory essay about the AJL paper
at his website, in his Week 206
and I want to extract out and put here just what will help me (and anyone else in a similar situation) understand what Ambjorn, Jurkiewicz and Loll are saying.

----excerpt from Baez Week 206---------

...Given all this, I'm delighted to see some real progress on getting 4d
spacetime to emerge from nonperturbative quantum gravity:

3) Jan Ambjorn, Jerzy Jurkiewicz and Renate Loll, Emergence of a 4d world
from causal quantum gravity, available as http://www.arxiv.org/abs/hep-th/0404156.

This trio of researchers have revitalized an approach called "dynamical
triangulations" where we calculate path integrals in quantum gravity by
summing over different ways of building spacetime out of little 4-simplices.
They showed that if we restrict this sum to spacetimes with a well-behaved
concept of causality, we get good results. This is a bit startling,
because after decades of work, most researchers had despaired of getting
general relativity to emerge at large distances starting from the dynamical
triangulations approach. But, these people hadn't noticed a certain flaw
in the approach... a flaw which Loll and collaborators noticed and fixed!

If you don't know what a path integral is, don't worry: it's pretty
simple. Basically, in quantum physics we can calculate the expected value
of any physical quantity by doing an average over all possible histories
of the system in question, with each history weighted by a complex number
called its "amplitude". For a particle, a history is just a path in
space; to average over all histories is to integrate over all paths -
hence the term "path integral". But in quantum gravity, a history is
nothing other than a SPACETIME.

Mathematically, a "spacetime" is something like a 4-dimensional manifold
equipped with a Lorentzian metric. But it's hard to integrate over all
of these - there are just too darn many. So, sometimes people instead
treat spacetime as made of little discrete building blocks, turning
the path integral into a sum. You can either take this seriously or treat
it as a kind of approximation. Luckily, the calculations work the same
either way!

If you're looking to build spacetime out of some sort of discrete building
block, a handy candidate is the "4-simplex": the 4-dimensional analogue
of a tetrahedron. This shape is rigid once you fix the lengths of its 10
edges, which correspond to the 10 components of the metric tensor in
general relativity.

There are lots of approaches to the path integrals in quantum gravity
that start by chopping spacetime into 4-simplices. The weird special
thing about dynamical triangulations is that here we usually assume
every 4-simplex in spacetime has the same shape. The different spacetimes
arise solely from different ways of sticking the 4-simplices together.

Why such a drastic simplifying assumption? To make calculations quick
and easy! The goal is to get models where you can simulate quantum geometry
on your laptop - or at least a supercomputer. The hope is that simplifying
assumptions about physics at the Planck scale will wash out and not make
much difference on large length scales.

Computations using the so-called "renormalization group flow" suggest
that this hope is true *IF* the path integral is dominated by spacetimes
that look, when viewed from afar, almost like 4d manifolds with smooth
metrics. Given this, it seems we're bound to get general relativity at
large distance scales - perhaps with a nonzero cosmological constant, and
perhaps including various forms of matter.

Unfortunately, in all previous dynamical triangulation models, the path
integral was *NOT* dominated by spacetimes that look like nice 4d manifolds
from afar! Depending on the details, one either got a "crumpled phase"
dominated by spacetimes where almost all the 4-simplices touch each other,
or a "branched polymer phase" dominated by spacetimes where the 4-simplices
form treelike structures. There's a transition between these two phases,
but unfortunately it seems to be a 1st-order phase transition - not the
sort we can get anything useful out of. For a nice review of these
calculations, see:

4) Renate Loll, Discrete approaches to quantum gravity in four dimensions,
available as http://www.arxiv.org/abs/gr-qc/9805049 or as a website at Living Reviews in Relativity,
http://www.livingreviews.org/Articl...e1/1998-13loll/

Luckily, all these calculations shared a common flaw!

[at this point Baez provides some technical background on complex amplitudes versus relative probabilities, and the "Wick rotation" procedure,
replacing "t" by "it", available where there is a time parameter, and
the "Metropolis algorithm" for spotting highly probable cases and
thus speeding up the path integral summation. I omit this part of the essay.]

...People use Wick rotation in all work on dynamical triangulations.
Unfortunately, this is not a context where you can justify this trick... The problem is that there's no good notion of a time coordinate "t" on your typical
spacetime built by sticking together a bunch of 4-simplices!

The new work by Ambjorn, Jurkiewicz and Loll deals with this by
restricting to spacetimes that *do* have a time coordinate. More
precisely, they fix a 3-dimensional manifold and consider all possible
triangulations of this manifold by regular tetrahedra. These are the
allowed "slices" of spacetime - they represent different possible
geometries of space at a given time. They then consider spacetimes
having slices of this form joined together by 4-simplices in a few
simple ways.

The slicing gives a preferred time parameter "t". On the one hand this
goes against our desire in general relativity to avoid a preferred time
coordinate - but on the other hand, it allows Wick rotation. So, they
can use the Metropolis algorithm to compute things to their hearts'
content and then replace "it" by "t" at the end.

When they do this, they get convincing good evidence that the spacetimes
which dominate the path integral look approximately like nice smooth
4-dimensional manifolds at large distances! Take a look at their graphs
and pictures - a picture is worth a thousand words....

----end quote----
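To unpack the omitted Metropolis part a little: the algorithm can be sketched in a few lines. The toy "action" below (a harmonic well, so exp(-S) is a Gaussian) is my own stand-in for the real simplicial action; the rule is to propose a random move, always accept it if it lowers the action, and otherwise accept it with probability exp(-ΔS).

```python
import math
import random

def metropolis(action, x0=0.0, step=0.5, n_steps=100_000, seed=7):
    """Draw samples of x weighted by exp(-action(x)) with the Metropolis
    rule: propose a move, always accept downhill, accept uphill moves
    with probability exp(-delta_s)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        delta_s = action(x_new) - action(x)
        if delta_s <= 0.0 or rng.random() < math.exp(-delta_s):
            x = x_new
        samples.append(x)
    return samples

# Toy "action" S(x) = x**2, so exp(-S) is a Gaussian with variance 1/2.
samples = metropolis(lambda x: x * x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
print(mean, var)  # mean near 0, variance near 0.5
```

The payoff Baez describes is that the random walk spends its time in the high-probability configurations, so averages converge without ever visiting the astronomically many negligible ones; AJL apply the same idea with "configurations" being whole triangulated spacetimes.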

Last edited: May 13, 2004