Self-organizing quantum universe explained in July SciAm feature

  • #1
marcus
Science Advisor
Gold Member
Dearly Missed
This is a good accessible introduction to one of the foremost quantum gravity approaches.
http://www.scribd.com/doc/3366486/SelfOrganizing-Quantum-Universe-SCIAM-June-08

It is a feature article in the July 2008 print issue of SciAm----pages 42-49.
But SciAm put it out online already in June, so in this version it is dated June.

This "scribd" version is set up so you cannot print it. You can only read it on the screen.
But they make it easy. Click the "full screen" button, and the "+" enlarger button, to make it easy to read. The graphics help. The writing is for a general audience and communicates effectively.

This is the most efficient brief explanation of Renate Loll's causal dynamical triangulations approach that I have seen so far.
 
Last edited by a moderator:
  • #2
the article mentions the physics of flocking
in what I think is an enlightening analogy
some readers may remember the cover of this October 2007 issue of Physics Today
 

Attachments

  • flocking_cover.jpg
Last edited:
  • #3
Here is more on flocking, with lots more pictures
http://www.smc.infm.it/index.php?option=com_content&view=category&layout=blog&id=45&Itemid=103

http://www.smc.infm.it/index.php?view=article&catid=44%3APress+Release&id=63%3Astarflag-in-the-press&option=com_content&Itemid=98

The Physics Today article (that inspired the October 2007 cover) is free online
http://ptonline.aip.org/journals/doc/PHTOAD-ft/vol_60/iss_10/28_1.shtml
 
Last edited by a moderator:
  • #4
Marcus, use your imagination, please.

Using Loll's formalism, do you think it is possible to define a general event horizon? For example, since the dimensions are free to vary, an external observer sees the horizon as a 2-sphere surface made of triangles, right?
 
  • #5
MTd2 said:
...an external observer sees the horizon as a 2-sphere surface made of triangles, right?

How so? What size would the triangles be? In the Loll picture, the triangles do not exist in nature and they have no minimum size. They explain this fairly well in the SciAm article, for a general audience. I thought you had read several of their papers.
 
  • #6
marcus said:
How so? What size would the triangles be? In the Loll picture, the triangles do not exist in nature and they have no minimum size.

I was not talking about the sizes of any triangles or anything. I was thinking about the spectral dimension, on page 49. An outside observer sees the horizon of a black hole as a 2-sphere. Would that make the spectral dimension fall to 2 exactly on the horizon?

I am speculating on new ideas.
 
  • #7
Oh, I reread my 1st post... I didn't mean to be aggressive, I just wanted to make an invitation for reflection... I'm sorry.
 
  • #8
My understanding of this is that the overall idea is to construct the optimum measure of expectations over possibilities, based on the given information. Choosing the right or the wrong construction means utility or loss for the observer.

So we need

1) a general rating scheme (a logic for how to construct a predictive measure on possible new observations)
2) a way to construct the set of possibilities in the first place
3) a way to assign some kind of weight to each possibility in relation to (1)

The Feynman superposition path integral is supposedly a possible answer to (1) + (3). Whether it's the fundamentally correct one is still open as I see it, but this is not questioned in the CDT approach as I see it.

They do attack (2), though: they suggest that, unlike the Euclidean-style possibilities, the correct set of possibilities is the one that follows their construction, the limit of the sets of (as per their reasoning) "causally glued triangulations". This construction of theirs generates, in their argument, the correct set of possibilities to apply the Feynman sum and the EH action logic to.

While I think it's interesting, I think all three points above need fundamental addressing. The logic implicit in the Feynman path integrals can't possibly be unquestionable.

Maybe, in the spirit of their attempt to construct the set of valid possibilities (the integration space), one could generalize this logic and also find a way to construct from first principles a natural selection scheme and weighting, to answer (1) and (3)?

Similarly to the idea that there are constraints on the generation of validly possible geometries, there may be constraints on the generation of validly possible actions and weighting rules?

/Fredrik
 
  • #9
MTd2 said:
Oh, I reread my 1st post...I just wanted to make an invitation for reflection...

No problem! I appreciate your input. Constructive comment, including criticism, is essential. We need more dialog, not less!
 
  • #10
MTd2 said:
... I was thinking about the spectral dimension, on page 49. An outside observer sees the horizon of a black hole as a 2-sphere. Would that make the spectral dimension fall to 2 exactly on the horizon? ...

I know only one CDT black hole paper (Loll and Dittrich) and it does not get very far. Black holes are an area where CDT progress is slow compared to some others.

You are raising interesting questions.

I think there is a possible research paper to write here. The CDT team has gotten deSitter spacetime to emerge as an average out of quantum confusion. Why should they not also be able to get Schwarzschild spacetime to emerge, by changing some conditions?----by putting central matter into the picture for example.

So far they are doing their computer simulations of universes with only dark energy but no matter. They need to start doing computer runs with matter included in the picture. (Or maybe they have already started doing this but do not yet have results to report.)

You asked about the event horizon. I do not remember if the Loll/Dittrich paper had a CDT representation of the event horizon. I think it may have. But the paper was several years back and I don't remember clearly.
 
  • #11
I may have a proposal for a line of research to tackle this problem ... But I am afraid of developing it because I was called a crackpot somewhere else... I promise that I am not trying to indulge in preposterous ideas... I merely want to check the premises... :(

So, instead of showing the idea, I must ask something before:

Do you people, Marcus, et al., know a rigorous way to solve the Frozen Star paradox?

http://www.mathpages.com/rr/s7-02/7-02.htm

There is a thread here on this forum, https://www.physicsforums.com/showthread.php?t=132207&page=2 , but I am not convinced by anything posted there.

EDIT: Another thread here.
 
Last edited:
  • #12
MTd2 said:
I may have a proposal for a line of research to tackle this problem ... But I am afraid of developing it because I was called a crackpot somewhere else... I promise that I am not trying to indulge in preposterous ideas... I merely want to check the premises... :(

How about if you start by formulating a question where most of us can find a common reference, then argue what your view of the question is and what your scientific strategy for solving it would be?

As I understand the rules here, there are no rules against discussing open questions if the reasoning is scientifically sound. What's banned, as I understand it, is publishing or elaborating full-blown solutions to problems (right or wrong) if those aren't already published where professional researchers usually publish.

So if you keep your full-blown theories to yourself (or publish them elsewhere) and just discuss the parts of your reflections that do connect to commonly acknowledged issues, I don't see how that could not be allowed, as it's part of the scientific, creative and educational process.

I didn't quite understand the premises of your question: are you reasoning within classical GR, or what is your starting point, and how can you formulate the question relative to that starting point?

I think your probe for the event horizon is interesting too, and I have some personal reflections on this, similarly immature, but I am not sure if it's related to what you're thinking about. My starting point is a relative information concept that's always centered around an observer that is responding and acting to survive.

As I see it, this associates to the general question of accumulation and formation of mass. What is the logic behind an observer increasing its mass? How is mass formed? I choose to ask: how is confidence formed? How can certainty spontaneously form out of uncertainty? How does money grow in the bank? :) Somehow a black hole is an intuitive association here, an observer who doesn't need to compromise. It doesn't need to throw something out to consume new info. It just eats it all and grows. The logic behind that is interesting, but I think it's difficult to find a consistent consensus.

Do you see a connection between this and CDT?

/Fredrik
 
  • #13
MTd2, you were asking about the CDT picture of a black hole and I want to get back to that. There is a paper by Loll and Dittrich:
http://arxiv.org/abs/gr-qc/0506035
Counting a black hole in Lorentzian product triangulations
B. Dittrich (AEI, Golm), R. Loll (U. Utrecht)
42 pages, 11 figures
(Submitted on 6 Jun 2005)

"We take a step toward a nonperturbative gravitational path integral for black-hole geometries by deriving an expression for the expansion rate of null geodesic congruences in the approach of causal dynamical triangulations. We propose to use the integrated expansion rate in building a quantum horizon finder in the sum over spacetime geometries. It takes the form of a counting formula for various types of discrete building blocks which differ in how they focus and defocus light rays. In the course of the derivation, we introduce the concept of a Lorentzian dynamical triangulation of product type, whose applicability goes beyond that of describing black-hole configurations."

CDT is still a fairly new approach---the first CDT paper was in 1998. They have just scratched the surface in a lot of areas. Only the first steps have been taken in studying black holes, as far as I can see. Regretfully, there is not much I can say in response to your question.
=========================

Something that MAY be of interest is their beginning to include matter in CDT models. A paper was just posted today on arxiv about that.
http://arxiv.org/abs/0806.3506
Shaken, but not stirred – Potts model coupled to quantum gravity

"We investigate the critical behaviour of both matter and geometry of the threestate
Potts model coupled to two-dimensional Lorentzian quantum gravity in
the framework of causal dynamical triangulations. Contrary to what general
arguments on the effects of disorder suggest, we find strong numerical evidence
that the critical exponents of the matter are not changed under the influence of
quantum fluctuations in the geometry
, compared to their values on fixed, regular
lattices. This lends further support to previous findings that quantum gravity
models based on causal dynamical triangulations are in many ways better behaved
than their Euclidean counterparts."
====================

I think the point here is that the paper serves to gauge progress in the CDT approach. The big recent news was December 2007 when they got deSitter universe to emerge at large scale out of a microscopic chaos. No geometry (no smooth metric manifold) is put in at the beginning, just a swarm of microscopic components each interacting locally with its neighbors. And an overall smooth classic spacetime emerges as a quantum average. This is one of the goals of any background independent approach to QG. And its achievement in the CDT context was something of a first. That's covered in the SciAm article we have a link for.

But notice that the work reported December 2007 and described in the SciAm has no matter in it. It is pure geometry, pure gravity. The deSitter universe is an ideal empty universe with nothing but "dark energy" in it, or in other words a positive cosmological constant. So the natural question was how are they going to follow it up by including matter?

Indications are that there are several more papers in preparation. This one is a clue to how things are going.
=====================
Before 1998, one of the prominent approaches to QG was something promoted by Stephen Hawking, among other people, called Euclidean QG. And this was linked to work with non-causal dynamical triangulations. The CDT researchers see what they are doing as Lorentzian QG-----the earlier Euclidean approach but with a Lorentzian causal structure.
The SciAm article goes into the history some, and explains this.

It can be confusing that Lorentzian QG has the same initials as Loop QG. Have to watch out for that. At one point in this paper they abbreviate Lorentzian QG as LQG.

Anyway the CDT group traces their history back to the approach used by Hawking and others in the 1980s and 1990s. They just found a way to make the earlier approach work better, in a sense.

So part of this paper is making that point. Matter in a CDT context behaves right, more like on a regular lattice (even though the geometry can be highly irregular), and better than it behaves in the earlier non-causal Euclidean dynamical triangulations attempts.
 
  • #14
She talks about 2d topology, but I think that is just a toy model to show the stability of the proposed spacetime, even without considering fixing a lattice. Above, I just meant that the Hausdorff dimension would give you 2 dimensions, a 2-sphere, at the horizon, in a full 4d theory.
 
  • #15
I have been following the CDT work and have some questions:

1) It seems that they use on the order of 100-300K elements (simplexes) in their simulations. If the scale of the simplexes were very short, they wouldn't see much volume - maybe not as much as a Planck volume - and may not see emergent effects; on the other hand, if the scale is larger, then it may be too coarse (too few elements over a larger volume) to see interesting emergent effects.

2) It doesn't seem like there is a thermodynamic element to the simulations that would represent the presumed cooling in the very earliest times of the universe that could exhibit any phase transitions. How would you associate a rate of interaction in an intrinsic manner with the CDTs?

3) The CDT papers are essentially computational experiments. Is there enough information in the published papers to allow one to replicate the experiments?

4) It has been mentioned on occasion that there is no matter in the CDT picture yet. This is somewhat puzzling. Isn't the idea that a picture of an emergent background of necessity includes the emergence of matter and the known forces and so on? In other words that the background and "stuff in the background" are just different facets of a single emerging phenomenon.

Hopefully these are not totally idiotic questions. I was intrigued by the previous claims of emergence of large-scale 4d structure and have been trying to understand what the claims amount to.

X
 
  • #16
In the SciAm article they mention that they need to include a cosmological constant in their simulation. Does anyone have a simple way to explain how a cosmological constant is implemented in CDT?
 
  • #17
xristy said:
Hopefully these are not totally idiotic questions.

Far from it, these are good questions!
In nonstring QG the term "background" often just refers to a geometric background consisting of a smooth manifold with a metric (a distance function). Some approaches are perturbative in the sense that one puts in a standard geometric background at the start (like flat spacetime) and studies small variations of geometry----slight ripples on that background.
So the answer to #4 is that in this, and most of the QG research I follow, "background" just refers to the geometric setup, it doesn't include specifying particles.

#3 is about replication. I don't see why Loll's group wouldn't be willing to share their computer code.

Loll's collaborators are spread out geographically----Athens, Tokyo, Reykjavik (Iceland), Krakow (Poland), Copenhagen,...
Several of the people who are credited with computer work---doing the Monte Carlo runs---are at other universities besides Utrecht. So my guess would be that it would be natural for the CDT codes to be running at a number of different places. Not just at Utrecht.

I don't know the answer for sure, though. Maybe someone knows and will tell us.

#2 is about cooling. This is hard to answer because the published work does not yet include matter. Or does so only in a preliminary fashion in lower dimension models.
The way the CDT researchers have proceeded in the past is to try every new result first in 2D and then work up in dimension, from 2D to 3D, from 3D to 4D.

I assume that the inclusion of matter will follow the same pattern. They will study it in 2D for a couple of years and then extend to higher dimensions. (The first CDT papers, in 1998, dealt with the simplest 2D case----3D came in 2001 if I remember correctly.)

Perhaps I'm wrong but meaningful results about cooling would seem to require the inclusion of matter.
(A 2D+matter paper came out yesterday, by the way. Anagnostopoulos et al. He's the CDT guy in Athens.)

#1 is about scale. You can read about that yourself in Planckian Birth
http://arxiv.org/abs/0712.2485
They say the volume of their largest spacetimes is up to 173,000 Planck volumes.

The linear size would be up to about 28 Planck lengths.

Interestingly, they find evidence of semiclassical behavior already at linear scales of a few Planck lengths. If their models are right, then space continues to act in a somewhat conventional way---as we expect it to act----at least in the quantum average----even down to scales of a few (on the order of ten) Planck lengths!

I have to go and do some other things, so can't respond completely to this. But this scale issue is really interesting. The problem seems to be how to push the simulation down to sub-Planck scales where highly unclassical stuff might be revealed. Some discussion of this in their recent papers.
 
  • #18
marcus said:
In nonstring QG the term "background" often just refers to a geometric background consisting of a smooth manifold with a metric (a distance function). Some approaches are perturbative in the sense that one puts in a standard geometric background at the start (like flat spacetime) and studies small variations of geometry----slight ripples on that background. So the answer to #4 is that in this, and most of the QG research I follow, "background" just refers to the geometric setup, it doesn't include specifying particles.

I'm somewhat puzzled here. It seems there is more at stake than just the geometric setup without particles. Per Smolin (hep-th/0507235), background dependent is:
A theory that begins with the choice of a background geometry, among many equally consistent choices.

He goes on to say that in the independent view:
R2 The fundamental properties of the elementary entities consist entirely in relationships between those elementary entities. ... R3 The relationships are not fixed, but evolve according to law. Time is nothing but changes in the relationships, and consists of nothing but their ordering. ... Thus, we often take background independent and relational as synonymous.

In short, I thought that the overarching hypothesis is that there is effectively no distinction between the background and the particles emerging in the background. That is, particles and such are local configurations of the background. This is how I took the comment by Smolin and Wan (0710.1548):

There is an old dream that matter is topological excitations of the geometry of spacetime. Recently it was discovered that this is realized in the context of models of quantum geometry based on spin networks, such as those used in loop quantum gravity and spin foam models

And in Ashtekar & Lewandowski (gr-qc/0404018):

In this approach, one takes the central lesson of general relativity seriously: gravity is geometry whence, in a fundamental theory, there should be no background metric. In quantum gravity, geometry and matter should both be ‘born quantum mechanically’.

So this is the source of my wondering about the "absence" of matter in the CDT approach. As I understand the background independent quantum gravity program there should be an expectation that the CDT approach generates matter along with the spacetime itself. It seems that in the LQG approach and off-spring involving braids, ribbons and such there's some a priori content that isn't as minimal as CDT, but I fail to see how the CDT will give rise to anything more than a non-physical background unless the simplexes are endowed with a bit more content than just the causal primitive.
 
  • #19
Two very important accomplishments pointed out in the SciAm article are that they derive

- 4.02 spacetime dimensions under the Hausdorff–Besicovitch [fractal] definition
- a stable DeSitter space by inserting causality [arrow of time] and a cosmological constant

These results are not only astounding, but robust. No 'background' is required [it is nonperturbative], few assumptions are made, and the assumptions are consistent with observational evidence.
 
Last edited:
  • #20
xristy said:
So this is the source of my wondering about the "absence" of matter in the CDT approach. As I understand the background independent quantum gravity program there should be an expectation that the CDT approach generates matter along with the spacetime itself. It seems that in the LQG approach and off-spring involving braids, ribbons and such there's some a priori content that isn't as minimal as CDT, but I fail to see how the CDT will give rise to anything more than a non-physical background unless the simplexes are endowed with a bit more content than just the causal primitive.

I personally associate the issues of matter with dynamical actions. If the action is put in as a "background action" then it seems hard to see where relational actions will come from. If we characterize particles by the way they interact, a particle might be seen as a kind of partly localized, quasi-stable action formation that responds as per a particular logic encoded in the action. This gives it its properties.

Perhaps the CDT people can find a way not only to do random walks as per a given selection rule, but a random walk where the selection rules are also emergent in the same spirit, from random walks in the space of selection rules, up to the point where further choices can't be distinguished.

I.e., they construct a sample space, they take the path integral with the EH action, and they find emergent spacetimes that make sense.

Wouldn't the same logic apply to the path integral and the EH action itself? Could the EH action itself be found by a similar principle?

If we ask, how is spacetime constructed?
Why not also ask, how are the "construction rules" constructed?

Perhaps some construction rules will imply emergent spacetimes, and by the same logic some construction rules will also imply the formation of particles? And if the construction of the construction rules follows a common logic, so should spacetime and matter. And perhaps we could derive GR from even deeper first principles rather than putting in the EH action manually.

Perhaps such a construction will generate natural corrections or complements to the classical EH action, and these corrections will be found to be identified with the expected particle phenomenology? Something like how I would expect the extension of their program to incorporate matter. If that can be done I think it will be really beautiful.

/Fredrik
 
  • #21
Fra said:
Perhaps such a construction will generate natural corrections or complements to the classical EH action, and these corrections will be found to be identified with the expected particle phenomenology? Something like how I would expect the extension of their program to incorporate matter. If that can be done I think it will be really beautiful.

I don't know what others think, but I think such a route would also possibly provide a route to unification of programs - with a better understood "string theory" - one where the string action is not assumed fundamental, but rather seen as emergent from an even more fundamental theory. Where, say, dynamical states of a string provide the effective indexing of another manifold, and where dynamics on one manifold can be seen as another dynamics on another manifold. As I see it, such logic is latent in the mentioned construction.

After all, it's not a far stretch to associate "a string" with a distribution over an index space. Then the set of strings may form a statistical manifold - which can also be seen as a set of measures/actions, where string excitations might be seen as transitions between different action measures, and transition probabilities might be induced from the statistical manifold.

But if the index field manifold is a statistical manifold, then we have distributions indexed by states of other distributions. And if there is a logic of formation and evolution in this picture, there might be a logic to the spectrum of simplest observables.

And the formation of the manifold could maybe be related to the formation of mass, in the sense that once the dynamics of the manifold becomes sufficiently predictable, it "condenses" into a new "index" or a larger, more complex manifold. If you see this as a gaming strategy, the observer can conquer control from its environment and grow. There is a very appealing logic here, but it is as yet only hinted at. I am far from clear on how this is going to work out explicitly. But the hints already seen are enough to motivate me, at least.

So it would treat states and processes on a more uniform basis. Smolin argued in his book that processes are more fundamental. But if one considers states of a process, or processes of processes, one should expect some kind of uniform formalism.

In that view, I find it disturbing to argue for relational states but still put the processing rules in by hand.

/Fredrik
 
  • #22
I'm sorry, I just realized that I - without thinking of it myself - conceptually mixed the current context with the context of the statistical manifold thread, and not everyone reading this thread may have read the other one. If someone wonders why I suddenly talk about the statistical manifold, it was because of this thread https://www.physicsforums.com/showthread.php?t=241825.

Sorry about the confusion.

/Fredrik
 
  • #23
nrqed said:
In the SciAm article they mention that they need to include a cosmological constant in their simulation. Does anyone have a simple way to explain how a cosmological constant is implemented in CDT?

Didn't notice your question till now, so much going on.
Yes, the explanation is simple. Lambda appears in the classical Einstein-Hilbert action.
IIRC S_EH integrates the term (R - 2 Lambda).

Loll's method uses the same action, reduced to counting simplices,
so if you look at Loll's S_EH you will see it corresponds term-by-term to the classical action.

So there is a term in the action where a constant Lambda is multiplied by the volume, which is just the number of 4-simplices. In fact it is extremely simple how it is implemented!
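To spell that out in symbols (my own schematic shorthand, from memory, not a quote from their papers): the continuum action is

\[
S_{EH} \;=\; \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,\bigl(R - 2\Lambda\bigr),
\]

and after the Regge-style discretization it collapses to a counting formula roughly of the form

\[
S_{CDT} \;\approx\; -\kappa_0\,N_0 \;+\; \kappa_4\,N_4 \;+\; \Delta \times (\text{terms distinguishing the two kinds of 4-simplices}),
\]

where N_0 is the number of vertices, N_4 is the total number of 4-simplices (the discrete volume), and the bare cosmological constant is absorbed into the coefficient kappa_4 that multiplies N_4.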

Once the EH action is expressed combinatorially, in terms of counts of simplices of various dimensions, then they proceed to set up the path integral.

This is a sum over all possible spacetimes, each weighted by a factor

exp(iS_EH)

So the setup including the cosmological constant is pretty straightforward. Now the complicated, ingenious part comes---EVALUATING the path integral, using a Monte Carlo approach:
a Wick rotation so that the weights are exp(-S_EH),
and then a systematic way of picking RANDOM spacetimes with probability proportional to the exponential action weight! This involves shuffling moves which shuffle the simplices and are selected according to a set of probabilities which favor low-action and disfavor high-action paths (or in other words spacetimes).
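To make that accept/reject step concrete, here is a tiny toy sketch in Python (my own illustration, not their code; the couplings, the counts and the candidate "move" are stand-ins, only meant to show the logic of weighting by exp(-S)):

import math, random

def S(N0, N4, kappa0=2.2, kappa4=0.9):
    # Stand-in combinatorial action: N0 counts vertices, N4 counts 4-simplices.
    # The kappa4*N4 term is where the cosmological constant enters:
    # a constant multiplied by the volume, i.e. by the number of 4-simplices.
    return -kappa0 * N0 + kappa4 * N4

def try_move(N0, N4, dN0, dN4):
    # Metropolis accept/reject for one proposed shuffling move: accept with
    # probability min(1, exp(-dS)), so geometries end up being sampled with
    # probability proportional to exp(-S).
    dS = S(N0 + dN0, N4 + dN4) - S(N0, N4)
    if dS <= 0 or random.random() < math.exp(-dS):
        return (N0 + dN0, N4 + dN4)   # accepted: this is the next "spacetime"
    return (N0, N4)                   # rejected: keep the old one

# one step of the random walk through the space of (toy) spacetimes
print(try_move(50, 200, +1, +4))

Repeating moves like this millions of times, with the real CDT action and real triangulation moves, is in caricature what the Monte Carlo runs do.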

bottom line:
a path (thru geometry space) is a spacetime
a path integral is a weighted sum of spacetimes
and the weights must include a positive cosmological constant for it to work
 
  • #24
xristy said:
I'm somewhat puzzled here. It seems there is more at stake than just the geometric setup without particles. Per Smolin (hep-th/0507235), background dependent is ...
He goes on to say that in the independent view: ...
In short, I thought that the overarching hypothesis is that there is effectively no distinction between the background and the particles emerging in the background. That is, particles and such are local configurations of the background. This is how I took the comment by Smolin and Wan (0710.1548):...
And in Ashtekar & Lewandowski (gr-qc/0404018):..

So this is the source of my wondering about the "absence" of matter in the CDT approach. As I understand the background independent quantum gravity program there should be an expectation that the CDT approach generates matter along with the spacetime itself. It seems that in the LQG approach and off-spring involving braids, ribbons and such there's some a priori content that isn't as minimal as CDT, but I fail to see how the CDT will give rise to anything more than a non-physical background unless the simplexes are endowed with a bit more content than just the causal primitive.

I see what you are saying Xristy and all your quotes are right to the point.
There may be a misunderstanding about background independence, however.

Some quantum gravity programs attempt more and are more inclusive than others, as regards putting matter into the picture.

But the difference is not described by saying that one is background independent and one is not.

What people in the community mean by B.I. is that no prior geometric background needs to be specified----the setup does not depend on specifying a manifold with some particular metric.

So you can have a B.I. approach that is very ambitious, with the stated intention to include matter, now or eventually.
And you can have another B.I. approach that is more modest and proceeding by smaller steps, setting small goals----and not even talking about matter yet.
 
  • #25
Chronos said:
Two very important accomplishments pointed out in the SciAm article are that they derive

- 4.02 spacetime dimensions under the Hausdorff–Besicovitch [fractal] definition
- a stable DeSitter space by inserting causality [arrow of time] and a cosmological constant

These results are not only astounding, but robust. No 'background' is required [it is nonperturbative], few assumptions are made, and the assumptions are consistent with observational evidence.

I think you said it Chronos. Maybe more than any other post in this thread.
 
  • #26
The Sci. Am. article is somewhat inaccessible here --- can't be printed or saved, it seems, and is very slow to repeatedly download on a GPRS link. So I've been reading the article "The Universe From Scratch" (arxiv hep-th/0509010 v3) by Ambjorn et al. instead -- perhaps a bit dated, but an introduction to the CDT.

As I see it, physics often deals with the problem of how to predict the End from a knowledge of the Beginning, as it were --- to follow the dynamics of a developing physical system. In quantum mechanics this is done with a wave equation or, equivalently, by Feynman's path-integral method. In trying to appreciate the goal of the CDT method I find myself confused by what the method accomplishes. I need some help here. I'm sure I'm missing some serious points.

I find myself imagining the Beginning treated by Renate Loll et al. as a quantum chaos --- a situation in which space curvature fluctuations become ever more pronounced as the Planck scale is approached, as mandated by the uncertainty principle. It seems to me that the CDT method then models this rough (fractal?) chaos as a heaving and flexing Buckminster Fuller-like raft of flat-space triangles riding on a sea of spacetime, with the only topology considered being that which enables causality.

I then get stupidly confused as to what purpose the model serves. Their development of the path integral approach doesn't seem to predict an End --- the usual purpose of a wave equation or path integral calculation. Instead it seems to show that on a larger scale the quantum chaos (reality) will behave like our smooth (near-de Sitter) spacetime with 4 dimensions and a cosmological constant (another reality), albeit without matter (yet). It's clear that Marcus and Chronos fully recognise the importance of this result, but I'm left grovelling in the dust, so to speak.

I need picking up and dusting off.
 
  • #27
oldman said:
... Instead it seems to show that on a larger scale the quantum chaos (reality) will behave like our smooth (near-de Sitter) spacetime with 4 dimensions and a cosmological constant (another reality), albeit without matter (yet)...

That sounds right, oldman. You seem to be doing all right. I wouldn't say I am a lot better off, though I may have a faster internet connection making the link to the SciAm article usable.

I've read the 2005 paper "Universe from Scratch". It is fine, just a little old. I would also suggest looking at these three if you haven't already:
http://arxiv.org/abs/0711.0273 (emergence of spacetime---short paper presenting argument for the approach)
http://arxiv.org/abs/hep-th/0505154 (reconstructing the universe---long informative paper many figures & charts)
http://arxiv.org/abs/0712.2485 (planckian birth of quantum desitter universe---short paper presenting recent result)

Universe from Scratch may be perfect and meet your requirements, but just to know the alternatives I would say to glance at some of these others; you may find they provide a supplement.

======================

you asked about the meaning of path-integral in this context. here is my take on it. In their computer work, they typically run the path integral to go from zero (or minimal) spatial state at the beginning back to zero at the end. That is because the computer is finite and only contains a finite number of simplexes. So the little simulated universe has to have a finite life. So what they get is a universe that pops into existence, swells up, then shrinks down, and pops out of existence.

that is an oversimplification. for technical reasons which they explain but I don't understand, they use periodic time with the period much longer than the lifespan of the little universe-----so it is as if they had infinite time and somewhere along there the thing popped into and out of existence. Also for technical reasons the zero spatial state is the minimal number of simplexes that you can glue together so all the faces are covered. As I recall it takes a dozen or less. You want the minimum number of tets required to make something that is topologically a three-sphere S3.

the evolution equation allows a minimal spatial state either to just sullenly persist as minimal, or to abruptly take off and grow

they wouldn't have to always run the path integral from minimal state Initial to minimal state Final. They could presumably run it from Initial space geometry A to final space geometry B, and have A and B be extensive interesting shapes. But as I understand it they always run essentially from zero to zero, or rather from minimal to minimal.
======================

now what is in the computer at any given moment is a history of how spatial geometry could evolve from initial to final
and it is typically a kinky unsmooth history----a 4D story without much symmetry to it.
and this is in effect randomly chosen.
in a given computer run they may go through a million such randomly chosen 4D histories----paths in geometry-space so to speak, paths thru the plethora of possible spatial geometries which lead from Initial to Final.
they get this sample of a million possible paths---a million 4D histories---by a process of randomly modifying one of them to get the next, and modifying that to get the next.
=======================

so how should we think about this. well think about Feynman talking about the path that a PARTICLE takes to get from point A to point B. for him the classical trajectory doesn't exist. all that exists is the realm of all possible paths, which are mostly unsmooth and nowhere differentiable and rather kinkylooking----and each one has an amplitude----and nature makes the weighted average of all the paths and that is how she gets the particle from A to B

well Ambjorn and Loll could say that in the same way SPACETIME does not exist. it is something we imagine like the smooth classical path of a particle. what exists is this realm of possible spatial geometries----and all the possible kinky wacko paths thru this realm, that begin with geometry A and end with geometry B-----and each of these unsmooth 4D histories has an amplitude

and the spacetime which we think we observe is really nature averaging all these fluctuating 4D histories up in a weighted sum.
and so the sum over histories smooths out and looks classical to us. it is the average path

I think it may have been Hawking who popularized the phrase "sum over histories". It is a synonym for the spacetime Feynman path integral. But Hawking's Euclidean quantum gravity didn't work. they tried to regularize using simplexes and for 10 years it messed up. then in 1998 Ambjorn and Loll got the idea how to fix it.
==================

Another thing is, when you do a Feynman path integral for a particle going from A to B, the particle goes along line segments---it is a polygonal path-----all zigzaggy. And then you let the size of a segment go to zero. That is not because Feynman claimed nature's paths were zigzag polygonal. It is a regularization. which means that to make the problem finite you restrict down to some representatives.

So by analogy, Ambjorn and Loll are not saying that nature is playing with simplexes and tetrahedra! That is just a regularization.
The set of all possible paths is too big. We take a skeleton version of just representative paths (4D histories that pass only thru simplicial geometries). Averaging with a measure on all paths would be too much. We average using a representative sample---a regularization.

And in principle we could let the size of the building blocks go to zero.
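If it helps, here is that particle analogy in miniature, as a toy Python sketch (my own illustration, nothing from their papers): chop time into N segments so a "path" is just a zigzag list of positions, weight each path by exp(-S) with a simple Euclidean oscillator action, and let a Metropolis random walk wander through path space. No single sampled path looks classical; the familiar answer only shows up in the weighted average.

import math, random

N, dt = 64, 0.25        # number of time slices and the lattice spacing (the regularization)
m, omega = 1.0, 1.0     # toy particle: mass and oscillator frequency

def delta_S(path, i, new_x):
    # Change in the Euclidean action when site i of the zigzag path moves to new_x.
    ip, im = (i + 1) % N, (i - 1) % N                 # periodic time, as in their runs
    kin = lambda x: 0.5 * m * ((path[ip] - x) ** 2 + (x - path[im]) ** 2) / dt
    pot = lambda x: 0.5 * m * omega ** 2 * x ** 2 * dt
    return (kin(new_x) + pot(new_x)) - (kin(path[i]) + pot(path[i]))

path = [0.0] * N
samples = []
for sweep in range(20000):
    for i in range(N):                                # propose a local "kink" at each site
        new_x = path[i] + random.uniform(-0.5, 0.5)
        dS = delta_S(path, i, new_x)
        if dS <= 0 or random.random() < math.exp(-dS):
            path[i] = new_x                           # accept: the path gets a bit more zigzag
    if sweep > 2000:                                  # after equilibration, measure on each path
        samples.append(sum(x * x for x in path) / N)

print("<x^2> over the ensemble of paths:", sum(samples) / len(samples))   # comes out close to 0.5

The measured <x^2> is a property of the whole weighted ensemble of kinky paths, which is the same moral as "spacetime is the average of many unsmooth 4D histories".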
 
Last edited:
  • #28
EDIT: I thought of some more things, to continue the previous post. Here is what I was saying earlier...continued after the double line

So by analogy, Ambjorn and Loll are not saying that nature is playing with simplexes and tetrahedra! That is just a regularization.
The set of all possible paths is too big. We take a skeleton version of just representative paths (4D histories that pass only thru simplicial geometries). Averaging with a measure on all paths would be too much. We average using a representative sample---a regularization.

And in principle we could let the size of the building blocks go to zero.
================
================
So again, how should we think of this path-integral approach?

close analog of the Feynman path of a particle

following Hawking idea of the 1980s of a sum-over-histories or path-integral where the 4D spacetime is itself the path from an initial to a final spatial shape.

an approach with a long history which has seemed reasonable to a lot of people to pursue----Ambjorn was one of many people who worked on the original Euclidean or pre-causal dynamical triangulations from 1990 to 1998, which didn't work. And then in 1998 they got the idea to organize it into layers
==================

what I especially like is that while they have one of these 4D spacetimes in the computer---one of the many unsmooth random ones they are averaging together----they can freeze it for a moment and go in and explore it. let something diffuse. measure volumes. measure dimensionality around a point. do a random walk through it. I really like it that they have this possibility
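For anyone curious what "let something diffuse and read off a dimension" can look like, here is a toy Python sketch (mine, run on a plain periodic square grid, not on one of their triangulations): release random walkers, estimate the probability of being back at the start after sigma steps, and use the scaling P(sigma) ~ sigma^(-d_s/2), so d_s is roughly -2 dlnP/dln(sigma). On a flat 2D grid the estimate should come out close to 2.

import math, random

L = 64    # periodic 2D grid, a stand-in geometry (their runs do this inside a frozen triangulation)

def return_probability(steps, walkers=100000):
    returns = 0
    for _ in range(walkers):
        x = y = 0
        for _ in range(steps):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % L, (y + dy) % L
        if x == 0 and y == 0:          # walker diffused back to its starting point
            returns += 1
    return returns / walkers

s1, s2 = 20, 40                        # two (even) diffusion times
p1, p2 = return_probability(s1), return_probability(s2)
d_s = -2 * (math.log(p2) - math.log(p1)) / (math.log(s2) - math.log(s1))
print("estimated spectral dimension:", d_s)   # close to 2 for this flat lattice

The CDT spectral-dimension results come from the analogous diffusion measurement carried out inside the simulated 4D geometries, over a range of diffusion times.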

so when other path-integral approaches to quantum spacetime arise, I am looking forward to their doing that with them and seeing if and how the experience of being inside the thing is different

don't know if this qualifies as helping up and dusting off---or if it is more like sitting down in the dust with---but either way meant to be companionable
 
  • #29
marcus said:
So by analogy, Ambjorn and Loll are not saying that nature is playing with simplexes and tetrahedra! That is just a regularization.
The set of all possible paths is too big. We take a skeleton version of just representative paths (4D histories that pass only thru simplicial geometries). Averaging with a measure on all paths would be too much. We average using a representative sample---a regularization.

So you are suggesting that the 4-simplices are the "regularization" in question-- i.e. infinitesimal chunks of space of simplex shape are representative of chunks of space of arbitrary shape?

Are there mathematical grounds for stating that this is true (i.e., that 4-simplexes are indeed "representative")? Or is it an assumption?
 
  • #30
So, I read the SciAm article, it clarified a lot of things about CDT for me. Thanks for posting this!

I liked this little dig at the landscape:

When we vary the details in our simulations, the result hardly changes. This robustness gives reason to believe we are on the right track. If the outcome were sensitive to where we put down each piece of this enormous ensemble, we could generate an enormous number of baroque shapes, each a priori equally likely to occur -- so we would lose all explanatory power for why the universe turned out as it did.

I do have a question:

One of the most interesting things, it seems, about the recent CDT work is the claim that they were able to provide a from-first-principles justification for the universe being 4D, something which no other theory I'm aware of can claim (string theory is sometimes described as claiming to derive the number of dimensions, but it seems to me that it doesn't exactly-- rather what string theory actually does is say "this theory is only consistent in N dimensions, therefore the universe must have N dimensions" without providing any causative reasoning as to how N dimensions were 'picked'*-- and of course, tragically, N>4...). Anyway the CDT justification for this claim is that when they run their simulations with randomly selected arrangements of space-bits and allow the random arrangements to smooth themselves out, they get a dimension (I assume this is a Hausdorff dimension?) of about 4. That is quite impressive, and when you first hear it, it seems like a great surprise.

However reading this article they outline something I hadn't realized before. They note that the space-bits they use in this simulation are, specifically, 4-simplices-- in other words a structure that if treated as a regular euclidean polygon would produce 4-space. In fact they imply that the way CDT came about was from looking at the fact that 4-simplex-based attempts to simulate 4D quantum gravity in the 80s failed, and in fact failed by producing universes that weren't even 4D; CDT's big accomplishment is thus that they were able to return these simulations to 4D by adding causal structure, something which is desirable anyway. Looking at this though it seems like there is some sense in which they didn't exactly calculate 4D from first principles after all; rather, they calculated that 4D is a consequence of some other parameter they picked, specifically the structure of the 4-simplex with causal structure. And the picking of the causal 4-simplex in this case was not a coincidence, it was something they picked specifically because it arose as the natural fit for describing the geometry we live in. It is still surprising and impressive that this works at all, but is somewhat different than if the prediction "we live in 4D space!" had popped out of the calculations ex nihilo.

Anyway what I'm trying to figure out is this-- let's say that they didn't pick 4-simplexes with causal structure, let's say they picked 2-simplexes or 3-simplexes or 5-simplexes with causal structure and then ran their simulations. Have they tried this? If they did, how many dimensions did these simulations produce-- would it be ~2D, ~3D, ~5D?

If there's anything I'm missing please let me know, thanks! :)

* (Although one interesting thing about string theory is that M-theory is able to provide a sort of causative explanation for why our apparent spacetime is a Minkowski metric space-- I.E. why spacetime is 3+1d and not 4d or 2+2d or something. All these different metrics can arise from the dynamics of branes. It seems like in CDT though that the Minkowski metric/"one timelike dimension" element was something effectively put in by hand, by adding the structure of the "time arrow". Does this sound about right?)
 
  • #31
Coin said:
Anyway what I'm trying to figure out is this-- let's say that they didn't pick 4-simplexes with causal structure, let's say they picked 2-simplexes or 3-simplexes or 5-simplexes with causal structure and then ran their simulations. Have they tried this? If they did, how many dimensions did these simulations produce-- would it be ~2D, ~3D, ~5D?

Yes, they did try it with 2D and 3D simplices. they did that first. before 1998, if they used 2D simplices they would not get a 2D result. and with 3D simplices they would not get a 3D result. it could branch out feathery or it could clump, so the dimensionality could be too small or too large.

the initial success was getting a 2D result and then, by 2001 as I recall, a 3D result.

then in 2004 they found using 4D simplices they could get a 4D result.

So I don't think there is anything here that chooses the dimension of the universe. the universe could be any dimension it wants. and then in modeling it they would use that dimension simplex.

the success is more about getting the path integral method to work, by having a reasonable regularization that samples the possible geometries, and that you can express the Einstein Hilbert action combinatorially, by counting simplices of different orders----something resembling the Regge (simplicial) version of the E-H action

the idea is very simple and minimal, just do the most straightforward path integral you can.

what was hard was getting it to work.

BTW there are papers where they try different polygons besides triangles, different building blocks, including even mixtures of building blocks. it doesn't seem to make much difference. the approach doesn't depend essentially on using simplices.

you can even consider each simplex as a point and just formulate a set of rules for how that point should be allowed to connect with neighbors,

also there is a set of "moves" where you shuffle the points around and reconnect them differently. this is how things are randomized. there is a very helpful 2001 paper that shows pictures of these moves in both the 3D and the 4D cases.

it is the only paper I know that actually covers the nitty-gritty basics of the method
Here is that paper
http://arxiv.org/abs/hep-th/0105267
It has 14 pictures. I felt I understood how the randomization really works much better after reading that paper
Using millions of these "moves" they can take one 4D spacetime geometry and totally scramble it to get another 4D geometry
and so in a way they are doing a random walk in the realm of 4D geometries. like when you walk in the city and at each intersection you toss a coin to decide which way

except with them at each point in the spacetime they toss a coin to decide how to reconnect (or add or subtract) simplexes, and then they do that at many many points and finally they have a completely new spacetime

shuffle the deck, deal out a hand, shuffle the deck again, deal out another hand.

and so, in a Monte Carlo sense, one gets a measure on the set of all possible 4D geometries (within the limits of the computer, which can only deal with a finite number of building blocks)

have to go. glad you are interested!
 
Last edited:
  • #32
This is so interesting. They must be onto something to get the results they claim. What I'm wondering is, what kind of computing power is needed to do their simulations?
 
  • #33
First of all, thanks very much for your detailed reply and links. I'm a bit less dusty now.

Second, after the logjam of fundamental theory that has persisted for more than thirty years now, it is difficult not to get excited over the Utrecht group's work. It just might be the dynamite that the logjam so badly needs. I hope it is.

Third, two minor points.

1) In a recent note http://arxiv.org/PS_cache/arxiv/pdf/0806/0806.0397v1.pdf it is enthusiastically concluded that
Ambjorn said:
Borrowing a terminology from statistical and complex systems, we are dealing with a typical case of “self-organization”, a process where a system of a large number of microscopic constituents with certain properties and mutual interactions exhibits a collective behaviour, which gives rise to a new, coherent structure on a macroscopic scale. What is particularly striking in our case is the recovery of a de Sitter universe, a maximally symmetric space, despite the fact that no symmetry assumptions were ever put into the path integral and we are employing a proper-time slicing [11], which naively might have broken spacetime covariance. There clearly is much to be learned from this novel way of looking at quantum gravity!

What "microscopic components" do they mean? Not their simplices, I hope, which are just a calculationional tool, as I read you:

marcus said:
... And then you let the size of a segment go to zero. That is not because Feynman claimed nature's paths were zigzag polygonal. It is a regularization. which means that to make the problem finite you restrict down to some representatives.

So by analogy, Ambjorn and Loll are not saying that nature is playing with simplexes and tetrahedra! That is just a regularization...
... let the size of the building blocks go to zero.



2) In 1995 they said that they planned to tackle the problem of how to include mass in the scenario, and wrote:

"...we are in the process of developing new and more refined methods to probe the geometry of the model further, and which eventuallyshould allow us to test aspects related to its local “transverse” degrees offreedom, the gravitons. We invite and challenge our readers to find such tests ina truly background-independent formalism of quantum gravity."


Do you know if there has been progress in this direction yet?
 
Last edited by a moderator:
  • #34
oldman said:
...What "microscopic components" do they mean?

I think they pretty clearly mean simplices. At each stage in the limit process that describes the continuum they consider a large swarm of microscopic building blocks. The swarm self-organizes.

Then they reduce the size or increase the number of blocks N, and repeat. And they compare results at finer resolution (more blocks) with results at coarser (fewer blocks). In their figures you often see overlays which exhibit consistency as N is increased, numerical evidence of convergence as the size goes to zero.

So you get the conventional idea of self-organizing at each stage. And they are also explicitly saying that they have found no indication of a minimal length and that the size in their model is not bounded away from zero.

===================================

I suppose one historical analogy would be the infinitesimals dx and dy in the differential and integral calculus. Leibniz notation.

You make statements, you manipulate expressions, derive stuff, and you let the resolution go to the limit.

Mathematics has a lot of things that are only defined through a limiting process---even the usual numbers. The vast majority of the so-called real numbers are rigorously defined only as classes of a certain type of sequence of rationals (fractions). Adding two real numbers actually means going back to the original representative sequences and adding successive terms to get a new sequence, which defines the sum (again by approximation).
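To put that remark in symbols (standard textbook material, nothing specific to CDT): a real number is an equivalence class of Cauchy sequences of rationals, and the sum is defined term by term on representatives,

\[
x = [(a_n)], \quad y = [(b_n)], \qquad x + y := [(a_n + b_n)],
\]

where two sequences are equivalent, \((a_n) \sim (a'_n)\), whenever \(a_n - a'_n \to 0\). The number exists only through its approximating sequences, which is the same general kind of limiting construction as the CDT continuum.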

The English language has not yet entirely assimilated this. In English and probably other natural languages, something is either discrete or continuous. The idea of being both discrete and continuous is dizzying. The mind reels.

But as I say math is full of stuff that is both.

All that Ambjorn and Loll are doing is defining a new kind of continuum, essentially. The old type was defined by Riemann around 1850, in a talk that Gauss asked him to give. He defined the smooth manifold. Differential geometry still uses this primarily.
The manifold that Riemann defined has a fixed dimensionality that is the same at every point and at every scale around that point. If you zoom in and look at finer and finer scale the dimensionality doesn't change.

Ambjorn and Loll are introducing a fundamentally different sort of continuum which is the limit of a series of discrete buildingblock approximations. It turns out that the dimensionality can change with scale.

===========================

There is a problem of how to talk about it in English or I would guess in any other common spoken language.
The continuum is approximated arbitrarily finely by a self-organizing finite discrete swarm of building blocks (like a flock of birds or a school of fish).
So it is indeed self-organizing. And approximable arbitrarily finely by a discrete swarm.

But on the other hand there is no minimal size. You can keep reducing the size, and increasing the number, of the birds---and the flock still looks the same and behaves the same.
So the continuum is indeed continuous.

How do you get this apparently contradictory message across to a general audience of SciAm readers?

Maybe fall back on the analogy of old Leibniz infinitesimal dx. :smile:
It may not matter though, what the spoken language description is, as long as the mathematics is sound.
===================

About 1995. What was the 1995 paper? As far as I know, Loll wasn't doing CDT in 1995. The first CDT paper (Ambjorn and Loll) that I know of was 1998.

About matter, progress appears slow but they came out with a paper earlier this year, treating a toy model: 2D rather than 4D.
http://arxiv.org/abs/0806.3506
 
Last edited:
  • #35
marcus said:
About 1995. What was the 1995 paper? As far as I know, Loll wasn't doing CDT in 1995. The first CDT paper (Ambjorn and Loll) that I know of was 1998.

About matter, progress appears slow but they came out with a paper earlier this year, treating a toy model: 2D rather than 4D.
http://arxiv.org/abs/0806.3506

I'm sorry to have written 1995 when I meant 2005. The paper was http://arxiv.org/abs/hep-th/0505154 "Reconstructing the Universe", top of page 14, which you had kindly told me about. Their toy model paper (your ref. above) is quite opaque to me at the moment, but I'll chew on it. Thanks.

I had feared that the "fundamental 'atoms' or excitations of spacetime geometry (whose) interaction gives rise to the macroscopic spacetime we see around us and which serves as a backdrop for all known physical phenomena", referred to in their "The Universe from Scratch", might be the simplices they use to render spacetime discrete for the purposes of evaluating path integrals, which you seem to confirm is indeed the case:

I think they pretty clearly mean simplices...

If this is so it looks to me as if it lets a lot of steam out of their approach. When one talks of atoms collectively giving rise to emergent phenomena --- unexpected stuff like self-reproducing molecules (DNA) and all that jazz (you and I included!) ---- the atoms, the DNA and ourselves are all part of the physical world. Not so with mysteriously 'real' space and time, and simplices that are merely convenient figments of the imagination.

Your comments on mathematics and in particular the 'real' numbers:

...The vast majority of the so-called real numbers are rigorously defined only as classes of a certain type of sequence of rationals (fractions). Adding two real numbers actually means going back to the original representative sequences and adding successive terms to get a new sequence, which defines the sum (again by approximation).

I hadn't appreciated this at all. As you say, a natural language like English is woefully inadequate when it comes to careful quantitative description. Is that why mathematics is so effective a language in physics?

Finally, if
Ambjorn and Loll are introducing a fundamentally different sort of continuum which is the limit of a series of discrete buildingblock approximations. It turns out that the dimensionality can change with scale.

Is this the real importance of their approach?
 
