
Triangulations study group, Spring 2008

  1. Dec 31, 2007 #1



    Triangulations is a leading nonstring QG approach which is among the least known, and the one where we have the most catching up to do to follow it. The name is short for CAUSAL DYNAMICAL TRIANGULATIONS (CDT).

    If you want to join me in studying up on CDT in early 2008 then the articles to print off so you have hardcopy to scribble are:

    http://arxiv.org/abs/0711.0273 (21 pages)
    The Emergence of Spacetime, or, Quantum Gravity on Your Desktop

    "Is there an approach to quantum gravity which is conceptually simple, relies on very few fundamental physical principles and ingredients, emphasizes geometric (as opposed to algebraic) properties, comes with a definite numerical approximation scheme, and produces robust results, which go beyond showing mere internal consistency of the formalism? The answer is a resounding yes: it is the attempt to construct a nonperturbative theory of quantum gravity, valid on all scales, with the technique of so-called Causal Dynamical Triangulations. Despite its conceptual simplicity, the results obtained up to now are far from trivial. Most remarkable at this stage is perhaps the fully dynamical emergence of a classical background (and solution to the Einstein equations) from a nonperturbative sum over geometries, without putting in any preferred geometric background at the outset. In addition, there is concrete evidence for the presence of a fractal spacetime foam on Planckian distance scales. The availability of a computational framework provides built-in reality checks of the approach, whose importance can hardly be overestimated."

    http://arxiv.org/abs/0712.2485 (10 pages)
    Planckian Birth of the Quantum de Sitter Universe

    "We show that the quantum universe emerging from a nonperturbative, Lorentzian sum-over-geometries can be described with high accuracy by a four-dimensional de Sitter spacetime. By a scaling analysis involving Newton's constant, we establish that the linear size of the quantum universes under study is in between 17 and 28 Planck lengths. Somewhat surprisingly, the measured quantum fluctuations around the de Sitter universe in this regime are to good approximation still describable semiclassically. The numerical evidence presented comes from a regularization of quantum gravity in terms of causal dynamical triangulations."

    There are three or so other, earlier papers that explain the actual setup and the computer runs; they are from 2001, 2004, and 2005. I will get the links later. I have to go out at the moment but will get started on this soon.
  3. Dec 31, 2007 #2



    An early paper delving into elementary detail, which I found useful, is

    Here's one summarizing the field as of May 2005: clear exposition alternates with some gnarly condensation.

    I think those are the main ones needed to learn the subject,
    but for an overview there are also a couple of review papers written for a wider audience

    and a couple of short papers of historical interest----the 2004 breakthrough paper where, for the first time, a spacetime emerged that was 4D at large scale,
    and the 2005 paper where they explored quantum dimensionality at small scale and discovered a fractal-like microstructure with reduced fractional dimensionality

    We can pick and choose, as needed, among this stuff. The main thing is to focus on the two recent ones (November and December 2007) and only go back to the earlier papers when something isn't clear and needs explaining.

    My impression is that CDT is ready to take on massive point particles and that 2008 could be the year it recovers Newton's law.
    Last edited: Jan 1, 2008
  4. Jan 2, 2008 #3



    There's a wide-audience article from March 2007 that looks pretty good, as a non-technical presentation, but it's in German


    Hello Fredrik, I was glad to see your reflections on the CDT approach. I think the questions you raise are entirely appropriate. I will try to respond tomorrow (it is already late here.)
    Last edited: Jan 3, 2008
  5. Jan 3, 2008 #4



    initial amateur reflection

    Hello Marcus, I'd be interested to hear how those who work on this add to this thread, but to start off in a philosophical style, without trying to derail the thread, here are some personal thoughts.

    If I get this right, in this idea (Loll's CDT), one more or less assumes that

    (1) nothing is wrong with QM or its predictions
    (2) nothing is wrong with GR or its predictions

    And one tries to find a way to compute, and make sense out of, the path integral by *choosing* a specific definition and partitioning of the space of spacetimes, assuming it to be complete; this also somehow implicitly contains an ergodic hypothesis at the instant you choose the space of spacetimes.

    My personal reflection is that I would expect there to be a scientific or physical basis for such a choice or hypothesis. Isn't such a choice effectively an expectation, and how is it induced? If there were a compelling induction for it I would be content, but I don't see one.

    I can appreciate the idea of trying the simple thing first - finding a way (by hand) to interpret/define the space of spacetimes so that the path integral makes sense and hopefully reproduces GR in the classical limit - but even given a success, I would still be able to formulate questions that leave me unsatisfied (*).

    (*) To speak for myself, I think it's because I look not only for a working model of a particular thing (without caring HOW the model was found), but rather for a model of models, one that would be rated with a high expected survival rate even when taken into a new environment. I know there are those who reject this outright by choosing not to classify it as "physics".

    Anyway, to discuss the suggested approach, leaving my reservations aside: what does that mean?

    Either we work out the implications and try to find empirical evidence for it, or we try to rate its plausibility theoretically, by logical examination. Are there any other ways?

    To try to examine its plausibility, I immediately reach the above reasoning about modelling the model, scientific processes, etc. And then I personally come to the conclusion that there is a high degree of speculation that actually originates in premises (1) and (2). I would intuitively try to rate and examine the premises before investing further speculation in working out their implications. My gut feeling tells me to invest my speculations in analysing and questioning the premises rather than adding more speculation on top of an already "speculative" premise.

    Before I can defend further speculation, I feel that I have to question my current position; anything else drives me nuts.

    Does anyone share this, or is it just my twisted perspective that causes this?

    I should add that I think the original idea is interesting, I hope the pros here will elaborate on it, and I think it's an excellent idea to have a thread on this; maybe the discussion can help people like me gain motivation for this idea.

    (I hope that sensitive readers don't consider this post offensive. It may look like philosophical trash, but FWIW it has a sincere intent to provoke ideas about the methodology.)

  6. Jan 3, 2008 #5




    When you first posted this thread, I did some searching here on PF, thinking that a couple of years ago there was a member here who was posting his own ideas about this, if I'm remembering correctly. Do you remember anybody here who had their own theory? If so, are they still around?

  7. Jan 3, 2008 #6



    Don, you may be remembering some threads I started about Ambjorn and Loll's CDT.
    But if it was somebody discussing their OWN theory then it wouldn't have been me, and I actually can't think of anyone with their own theory that resembled CDT.
  8. Jan 4, 2008 #7



    more reflections

    Mmm, the more I think about this, the more my thinking takes me away from what I suspect(?) was Marcus's intended style of discussion...

    What strikes me first is that different ergodic hypotheses should give different results, so what is the discriminator between different ergodic hypotheses?

    Choosing an ergodic hypothesis is IMO pretty much the same as choosing the microstructure, since redefining or transforming the microstructure seems to imply choosing another ergodic hypothesis. And in this case it is not only the microstructure of spacetime, but rather the microstructure of the space of spacetimes, whatever the physical representation of that is :) Would the fact that someone is led to ask a question about the space of spacetimes suggest that a natural prior should follow? I like to think so at least.

    Could this be rooted in a missing constraint on the path integral formalism? I can't help thinking that these difficulties are rooted in the premises of the foundations of QM and GR.

    Unless there is some interest in this line of thinking I'll stop here, as I have no intention of turning this thread into something that, no matter how interesting, only I am interested in reflecting on :shy:

  9. Jan 4, 2008 #8



    I'm not dedicated to one or another topic or style in this case. As long as you relate your discussion to the Triangulations path integral in a way I can understand, I'm happy :smile:

    Well I'm interested, so there is no obstacle to your continuing.

    I think you are probing the question of the REGULARIZATION which the Utrecht people use to realize their path integral.

    Before being regularized a path integral is merely formal. The space of all spatial geometries is large, and the space of all paths through that large space is even larger.
    So one devises a way to SAMPLE. Like deciding to draw all human beings as cartoon stick figures----this reduces the range of possible drawings of people down to something manageable.
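    To put the sampling idea in symbols (schematically, in my own loose notation rather than their exact conventions): the formal path integral over geometries

        Z = \int \mathcal{D}[g_{\mu\nu}] \; e^{i S_{EH}[g_{\mu\nu}]}

    is replaced by a sum over causal triangulations T built from a few types of identical flat simplices whose edge lengths are proportional to a cutoff a,

        Z(a) = \sum_{T} \frac{1}{C_T} \; e^{i S_{Regge}(T)},

    where C_T counts the symmetries (automorphisms) of the triangulation and S_Regge is the Regge form of the Einstein action evaluated on the piecewise flat geometry. The continuum limit is then taken by letting a go to zero while tuning the couplings.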

    For me, the natural way to validate a regularization is to apply the old saying: The proof of the pudding is in the eating! You pick a plausible regularization and you see how it works.

    You, Fredrik, are approaching the choice of regularization, as I see it, in a different, more abstract way than I do. So as long as I can relate it to what I know of the Utrecht Triangulations path integral I am happy to listen. I like to hear a different approach from what I am used to.

    BTW you are using the concept of ERGODICITY and you might want to pedagogically explain, in case other people are reading.
    Ambjorn and Loll, in several of the papers (IIRC the 2001 methodology paper I linked) talk about ergodicity in the context of their Monte Carlo.

    Basically a transformation, or a method of shuffling the cards, is ergodic if it thoroughly mixes. Correct me if I am wrong, or if you have a different idea. So if a transformation is ergodic then if you apply it over and over eventually it will explore all the possible paths or configurations or arrangements----or come arbitrarily close to every possible arrangement.

    When the Utrecht people do the Monte Carlo, they put the spacetime geometry through a million shuffles each time they want to examine a new geometry. I forget the details. But there is some kind of scrambling they do to get a new geometry, and they do it a million times. And then they look and calculate observables (like a time-volume correlation function, or a relation between distance and volume, or a diffusion dimension). Then they again shuffle the spacetime geometry a million times (before making the next observations). This kind of obsessive thoroughness is highly commendable, but it means that it can take WEEKS to do a computer run and discover something. I wish someone would give them a Beowulf cluster.
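    Just to picture the shape of such a computation, here is a toy Metropolis loop in Python. It is emphatically NOT their algorithm (their elementary "shuffles" are local re-gluings of simplices, and their observables are things like the volume profile and spectral dimension); the toy action and all the names here are made up. It only illustrates the pattern of doing a long shuffling phase between successive measurements.

    Code:
    import math, random

    # Toy illustration only -- NOT the CDT moves or action. It just shows the
    # pattern: many elementary Metropolis 'shuffles' between measurements.

    def toy_action(N, N0=1000.0, k=0.001):
        # made-up action that prefers the 'volume' N to stay near N0
        return k * (N - N0) ** 2

    def metropolis_step(N):
        # propose a small random change and accept it with the Metropolis rule
        proposal = N + random.choice([-1, +1])
        dS = toy_action(proposal) - toy_action(N)
        if dS <= 0 or random.random() < math.exp(-dS):
            return proposal
        return N

    def run(n_measurements=10, steps_between=100000):
        N = 1000                  # start from some initial configuration
        samples = []
        for _ in range(n_measurements):
            for _ in range(steps_between):   # the long 'shuffling' phase
                N = metropolis_step(N)
            samples.append(N)                # then measure an observable
        return samples

    print(run())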

    One thing that delights me enormously is how pragmatic the work is.

    Another is that the little universes pop into existence (from the minimal geometry state) and take on a life of their own for a while and then shrink back down.

    Another is that they actually GET the microscopic quantum spacetime foam that people have speculated for 50 years is down at that level. And they don't just get the foam, at a larger scale they get the familiar macro picture of approximately smooth 4D.

    Another is the presumed universality. Different regularizations involving other figures besides triangles have been tried at various times, and the results don't seem to depend on choosing to work with triangles----which seems to be confirmed by the fact that in any case one lets the size go to zero. I don't think this has been proven rigorously, but I have seen several papers where they use a different mix of building blocks. And it makes sense. If you are going to take the size to zero it should not matter very much what the shape of the blocks is.

    At this point I think Ambjorn and Loll are looking pretty good. They have a lot going for them, supporting the way they tell the story. That's why I thought it would be timely to have a study thread.
    Last edited: Jan 4, 2008
  10. Jan 4, 2008 #9




    Just a quick note. I'll comment more later; it's getting to be Friday evening here and I won't be on any more tonight.

    I sense some differences in our ways of analysis, and probably in point of view as well, and I find that interesting too. I get the feeling that you have somewhat of a cosmology background in your thinking, or?

    As I see it there are several parallel issues here; each of them is interesting, but what is worse, I see them as entangled too from an abstract point of view.

    Your pudding idea is really a good point. I guess my point is that the question of whether to make and eat this pudding or not cannot be answered by doing it; that decision needs to be made on incomplete information, and actually doing it is what I consider the feedback. This seems silly, but it even relates to the regularization thing! I think I may be too abstract to communicate this at this point, but if the proof is in the pudding, the problem is that there are probably more potential puddings around than I could possibly eat, and at some point I think evolving an intelligent selection is favourable.

    Anyway, I am trying to skim through some of the other papers as well, and to analyse the causality constraint from my point of view. The biggest problem I have is that there is also another problem, which I have not solved, and neither did they, that has to do with the validity and logic of Feynman's action weighting. The problem is that, as I see it, we are making one speculation on top of another, which makes the analysis even more cumbersome.

    I guess one way is to not think so much, take the chance, and instead just try it... (like you suggest)... but that's not something I planned to do here, although it would be fun to implement their numerical routines... I like that kind of project, but my current comments are mainly a subjective "plausibility analysis" of mine. If the result of that is positive I would not hesitate to actually try to implement some simulations on my own PC. But that is, in my world, the "next phase" so to speak. I'm not there yet.

    Anyway I'll be back.

  11. Jan 6, 2008 #10



    elaborating the perspective of my comments

    Yes I am, since this seems to be the key of their approach.

    How come we are asked to "make up" these regularizations in the first place? Does this, or does it not, indicate that something is wrong?

    The other question is the logic and motivation of the particular scheme of regularization proposed by CDT - given that "regularizing" is the way to go in the first place.

    I personally expect that the correct "regularization" should follow from a good strategy - or, probably, then the notion of "regularization" wouldn't appear at all.

    In my thinking, not only is the space "BIG" and the path integral formal, I see it as a bit ambiguous, because how do we even measure the size of this space?

    At first glance I find it hard to disagree with this.

    But consider the "regularization" that the scientist needs to make here: how many possible regularizations are there? If there are a handful of possibilities and the testing time for their viability is small, we can afford to say, let's test them all, starting with a random or arbitrary possibility, and then take the best one. And I often try to express myself briefly, because if it's too long I don't think anyone bothers reading it.

    But if the number of possibilities gets very large and/or the testing time increases, then it is clear that to survive this scientist CANNOT test all the options even if he wanted to, so he needs to rate the possibilities and start to test them one by one in some order of plausibility. So the ability to construct a rating system seems to be a very important trait. To construct such a rating system, he must respect the constraints at hand. He has limited memory and limited processing power. Don't these constraints themselves in effect impose a "natural" regularization?

    This is closely related to the problem we are dealing with in the path integral regularization. These analogies at completely different complexity scales inspire me a lot.

    The idea that I rate highest on my list at this point is that the observer's complexity is exactly what imposes the constraints and implies the effective regularization we are seeking. Basically, the information capacity of the observer limits the sum in the path integral. It does not forbid anything, but it limits how many virtual paths the observer can relate to.

    (*) What I am most interested in is to see if the CDT procedure can POSSIBLY be interpreted as the environmentally selected microstructure of virtual paths. If this is so, it would be very interesting indeed! But I would have to read in a lot more detail to have an opinion about that. As you note, I implicitly always relate to an observer, because anything else makes little sense.

    What if MAYBE the nonsensical path integral is the result of the missing constraint - the observer's complexity? This is exactly what I am currently trying to investigate.

    In a certain sense, their fundamental approach is not directly plausible to me, but I still find it interesting if one considers it to be an approximate approach, where they at least can make explicit calculations at an early stage.

    Loosely speaking I share your definition of an ergodic process as "perfect mixing"; however, I think these things can be represented and attacked in different ways, and the same conceptual thing can be given different abstractions in different contexts. I'm afraid that, due to my twisted perspective, I'm most probably not the one to give this the best pedagogic presentation, but I can try to elaborate a little bit at least.

    On one hand there is ergodic theory as a part of mathematics that also relates to chaos theory. I do not, however, have the mathematician's perspective, and mathematical texts often have a different purpose than, say, a physicist's, and thus they ask different questions even though they often stumble upon the same structures.

    A mathematical definition of an ergodic transformation T, relative to a measure, is that the only T-invariant sets have measure 0 or 1. (The transformation here is what generates the "shuffling process" you refer to.)

    A connection to classical dynamical systems is where the transformation T is time evolution and the measure is the classical phase-space volume (in q and p). Hamiltonian time evolution preserves the phase-space volume (Liouville's theorem), which is what makes it a measure-preserving transformation about which the ergodicity question can even be asked.
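    Written out (these are the standard definitions, nothing specific to CDT): a measure-preserving transformation T on a probability space (X, \Sigma, \mu), meaning \mu(T^{-1}A) = \mu(A) for all measurable A, is called ergodic if

        T^{-1}A = A \;\Rightarrow\; \mu(A) \in \{0, 1\}.

    For a Hamiltonian system the relevant measure is the Liouville measure d\mu = d^n q \, d^n p, which the time-evolution flow preserves by Liouville's theorem; ergodicity is then the additional, much stronger statement that the only invariant regions of phase space are the trivial ones.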

    Another connection to fundamental physics is that these things relate to the foundations of statistical mechanics, but also to the probability theory in QM.

    Note here a major point: ergodicity is defined _relative to a measure_, which in turn implicitly relates to measurable sets, and the point I raised regards the fact that ergodicity is relative.

    But I don't find this abstraction the most useful for my purpose. Instead of thinking of phase space in the classical sense, I consider, slightly more abstractly, the microstates of a microstructure.

    Classically the microstructure is defined by what makes up the system, say a certain number of particles with internal structure etc. Given this microstructure, the possible microstates typically follow.

    And in statistical mechanics the usual stance is that the natural prior probability of finding the system in any particular microstate is equal to that of finding it in any other microstate. I.e. the microstates are assumed to be equiprobable, typically by appealing to the principle of indifference and arguing that we lack discriminating information. This also provides the connection between Shannon and Boltzmann entropies.
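    To spell out that last connection (a standard textbook relation): the Shannon/Gibbs entropy

        S = -k_B \sum_i p_i \ln p_i

    reduces, for the equiprobable assignment p_i = 1/\Omega over \Omega microstates, to the Boltzmann form

        S = k_B \ln \Omega,

    so the familiar Boltzmann entropy already has the "indifferent" uniform distribution built into it.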

    I think that such logic is self-deceptive. My argument is that the indifference is typically already implicit in the microstructure - and where did this come from?

    It *seems* very innocent to appeal to the principle of indifference; however, if we have no information at all, then where did the microstructure itself come from? I think there is a sort of "dynamical" principle by which the microstructure is emergent, and this would replace the ergodic hypothesis and the appeal to the somewhat self-deceptive principle of indifference.

    What I'm trying to communicate are the conceptual issues we need to deal with, and these IMHO lie at the heart of foundational physics and IMO also at the heart of science and its method. IMO the same issues still haunt us, but in classical physics things were "simple enough" to still make sense in spite of this issue, and I suspect that as our level of sophistication increases in QM, GR and QG, these issues are magnified.

    I also see a possible connection between the ergodic hypothesis and the ideas of decoherence.

    These things are what I see; I personally sense frustration when they are ignored, which I think they commonly are. My apparent abstraction is because I at least try to resolve them, and I personally see a lot of hints that these things may also be at the heart of the problem of QG.

    My point is that, despite being at the root of many things, the ergodic hypothesis is not proved. I want these pillars to be rated, and seen in a dynamical context.

    This was just to explain my point of view, because without that any of my further comments would probably make no sense. I will try to read more of the papers that you put up, Marcus, and respond _more targeted to the CDT procedure_ when I've had the chance to read more. These were just my first impressions, and (*) is what motivates me to study this further.

  12. Jan 6, 2008 #11



    To respond to a partial sampling of your interesting comment!
    The papers are not necessarily for us all to read or to read thoroughly but can in some cases simply serve as a reality CHECK that we are summarizing accurately. I don't want to burden you with things you are not already curious about.

    For example there is a 2005 paper where the Utrecht team is very proud that they derived the same wave function for the (size of) the universe that people like Hawking and Vilenkin arrived at earlier and had been using in the 1980s and 1990s. Hawking still refers to his "Euclidean Path Integral" as the only sensible approach to quantum cosmology. In a sense the Utrecht Triangulations approach is an OUTGROWTH of the Feynman (1948) path integral or sum-over-histories, as further processed and applied to cosmology by Hawking. There was a period in the 1990s where many people were trying to make sense of Hawking's original idea by using dynamical triangulations. It didn't work until 1998, when Loll and Ambjorn had the idea to use CAUSAL dynamical triangulations.
    So there is this organic historical development that roughly goes Feynman 1950, Hawking 1985, Utrecht group 2000, and then the breakthrough in 2004 where they got 4D spacetime to emerge. Since that is the family tree it is somehow NOT SURPRISING that in 2005 they recovered Hawking's "wave function for the universe" (as he and Vilenkin apparently liked to call it). It is really just a time-evolution of the wavefunction of the size of the universe.

    The only reason I would give a link to that paper is so you could, if you want, check to see that my summary is all right. I don't recommend reading it---this is just another part of the picture to be aware of.

    The gist of what you say in the above about motivation has, I recognize, to do with plausibility.

    I think that is what you refer to in (*) when you say "to see if the CDT procedure POSSIBLY can be interpreted as the environmentally selected microstructure of virtual paths?"

    You could be more specific about what you mean by environmentally selected---perhaps one could say NATURAL. And you could specify what aspect(s) of their procedure you would like to decide about.

    Perhaps triangles (if that is part of it) are not such a big issue. I have seen several papers by Loll and others where they don't use triangles. It is the same approach; they just use different-shaped cells. But triangles (simplices) are more convenient. This is the reason for a branch of mathematics which is like differential geometry except that it is simplicial geometry, piecewise linear geometry. It is tractable.

    Loll often makes the point about UNIVERSALITY. Since they let the size go to zero, it ultimately doesn't matter what the shape of the cells is.

    The piecewise linearity is part of the path integral tradition going back to 1948. Feynman used piecewise linear paths, made of straight segments, and then let the size of the linear segments go to zero. A simplex is basically the simplest analog of a line segment
    (a segment is determined by two points, a triangle by three, a tet by four...).

    Nobody says that these piecewise linear paths actually exist in nature. One is interested in taking the limit of observables and transition amplitudes as the size goes to zero.
    In the limit, piecewise linearity evaporates----the process is universal in the sense of not depending on the details of the scaffolding.
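    For reference, this is the standard textbook time-slicing for a single particle in one dimension, where each "path" is exactly such a piecewise linear scaffold with segments of duration \epsilon = T/N:

        \langle x_f | e^{-iHT/\hbar} | x_i \rangle = \lim_{N\to\infty} \left(\frac{m}{2\pi i\hbar\epsilon}\right)^{N/2} \int \prod_{j=1}^{N-1} dx_j \; \exp\!\left\{ \frac{i}{\hbar} \sum_{j=0}^{N-1} \epsilon \left[ \frac{m}{2}\left(\frac{x_{j+1}-x_j}{\epsilon}\right)^{2} - V(x_j) \right] \right\},

    with x_0 = x_i and x_N = x_f. The Triangulations setup plays the analogous game one level up: the piecewise flat geometries are the scaffolding, and only the continuum limit is meant to be physical.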
    Last edited: Jan 6, 2008
  13. Jan 6, 2008 #12



    In case anyone is following the discussion and is curious, here is a Wikipedia article on the path integral formulation.
    Fredrik is, I expect, familiar with what is covered there. I find it interesting, and it helps give an idea of where the Triangulations approach to QG came from.
  14. Jan 7, 2008 #13



    Thanks for your feedback Marcus!

    Here are some more brief comments, but I don't want to expand too much on my personal ideas that don't directly relate to CDT. First, it's not the point of the thread (only to put my comments in perspective), and I haven't matured my own thinking yet either.

    From my point of view, what we are currently discussing IMHO unavoidably overlaps with the foundations of QM at a deeper level, beyond effective theories. The fact that the quest for QG, at least in my thinking, traces back to the foundations of QM is very interesting!

    Yes, and as might be visible from my style of ramblings - though I'm aware it's something I have yet to satisfactorily explain in detail - I seek something deeper than just "plausibility" in the everyday sense. The ultimate quantification of plausibility is actually something like a conditional probability. The logic of reasoning is closely related to subjective probabilities, and this is where I closely connect to physics, action principles and regularization.

    (There are several people, though I think not too many, researching in this spirit.
    Ariel Caticha, http://arxiv.org/abs/gr-qc/0301061, is one example. That particular paper does not relate to QG, but it presents some of the "spirit" I advocate. So the most interesting thing in that paper is IMO the guiding spirit - Ariel Caticha is relating the logic of reasoning to GR - and on this _general point_, though it is "in progress", I share his visions.)

    Edit: See also http://arxiv.org/PS_cache/physics/pdf/0311/0311093v1.pdf and http://arxiv.org/PS_cache/math-ph/pdf/0008/0008018v1.pdf. Note - I don't share Ariel's way of arguing towards the "entropy measure", but the spirit of intent is still to my liking.

    To gain intuition, I'd say that I associate the path integral - summing over virtual transformations/paths, weighted as per some "action" - with a microscopic version of rating different options. Now, that's not to jump to the conclusion that particles are conscious; the abstraction lies at a learning and evolutionary level.

    Whether there is, at some level of abstraction, a correspondence between the probability formalism and physical microstructures is a question that I know people disagree on. My view is that subjective probabilities correspond to microstructures, but the probability of a certain event is not universal - it depends on who is evaluating the probability. In the same sense, I think the path integral construction is relative.

    The same "flaw" is apparent in standard formulation of QM - we consider measurements in absurdum, but where is the measurement results retained? This is a problem indeed. The ideas of coherence suggest that the information is partly retained in the environment, this is part sensible, but then care should be taken when an observer not having access to the entire environment formulates questions.

    This, IMO, relates in the Feynman formulation to the normalisation and convergence of the integral and the ambiguous way of interpreting it. It seems innocent to picture a "sum over geometries", but how to _count_ the geometries is non-trivial, at least as far as I understand, though I could be wrong.

    Natural might be another word, yes, but what I mean is somewhat analogous to Zurek's quantum darwinism - http://en.wikipedia.org/wiki/Quantum_darwinism - but still not the same thing.

    In my way of phrasing things, I'd say that the microstructure that constitutes the observer is clearly formed by interaction with the environment. And in this microstructure is also encoded (IMO) the logic of reasoning and the "correspondence" to the rating system used in the path integral. Now, in one sense ANY rating system could be imagined, but it is equally plausible to imagine that the environment will favour a particular rating system. So the "selected" microstructure is in equilibrium with the environment at a very abstract level, though it may still be far from equilibrium at other levels.

    I'm not sure that makes sense. But it's a hint, and let's not go more into that. It's my thinking, and I could certainly be way off the chart.

    I have reservations about this. I will try to get back to these points, and maybe try to come up with an example/analogy to illustrate my point.

    Last edited: Jan 7, 2008
  15. Jan 7, 2008 #14



    equilibrium assumption

    This brings to my mind an association from biology: as evolution in biology is pictured, organisms have evolved in a particular environment and are selected basically for their fitness for survival and reproduction in that particular environment. The same highly developed organisms may not be nearly as "fit" when put in a completely different environment. Then survival boils down to the skill of readapting to the new environment.

    But evolution is still ongoing; the organisms we see may not be perfectly evolved, and how is the measure "perfect" defined anyway? This is not easy.

    Some years ago I studied simulations of cellular metabolic networks, where one tried to simulate the behaviour of a cell culture. Instead of trying to simulate the cell from molecular dynamics, which would clearly be too complex, the attempt was to formulate the measure that the organism tries to optimise, relating to growth and survival, then find the gene expression that would yield that behaviour, and then compare this to a real bacterial culture. From what I recall, they found that initially the model and the real culture disagreed, but after a few generations of growth the gene expression in the real culture converged quite closely to the one found by the computer simulation.

    Then, in a sense, the selected microstructure is the environmentally selected "behaviour" encoded in the microstructure of the system. But this is then only valid under equilibrium assumptions at that level.

  16. Jan 8, 2008 #15



    relative universality?

    Wouldn't the choice of a uniform size for the cells matter - and they have chosen a uniform size? For example, why not optionally choose smaller blocks where the curvature is higher, so as to increase the measure of highly curved geometries even in the continuum limit? Of course, don't ask me why one would do this; all I'm thinking is that the construction doesn't seem as universal or as innocent to me as they seem to suggest.

    The first impression is that, appealing to the principle of indifference, the uniform choice is the given natural choice. This is what I think is deceptive, because uniform is a relative concept, and in my view this is related to the missing condition/observer to which this construct relates. Or am I missing something?

    This type of reasoning is related to the previous discussion; it's very common in statistical mechanics. For example, the Shannon entropy measure has an implicit condition of a background uniform prior, and usually this background is just put in by hand. Ultimately it means that even the concept of disorder is relative, and finding the perfect measure of disorder isn't easy.
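    To make the hidden prior explicit (again just the standard formulas, nothing specific to CDT): the prior-dependent, relative form of the entropy is

        S[p|q] = -\sum_i p_i \ln \frac{p_i}{q_i},

    and choosing the uniform prior q_i = 1/n over n alternatives gives back the plain Shannon expression up to an additive constant,

        S[p|q_{\mathrm{uniform}}] = -\sum_i p_i \ln p_i - \ln n,

    so whenever one writes down -\sum p \ln p one has already, silently, committed to a particular (uniform) background measure.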

    Even going back to probability itself, the concept of absolute probability is quite fuzzy.
    In my thinking these are unmeasurable quantities; only relative probabilities are measurable, and if we are getting into what is measurable and what isn't, I think we also touch the foundations of science?

    So my comment originates from this perspective, and it boils down to the philosophy of probability theory as well. I think reflections on these things unavoidably lead down there, and I see the essence of the difficulties as deeply rooted.

    I don't know if Loll and her group ever reflected on this. It seems unlikely that they didn't, but maybe they never wrote about it. But if they at least commented on it, it might be that they have an idea of it; maybe they have some good arguments.

    But as I read some of their papers, they point out that it's not at all obvious that their approach works; yet with "the proof is in the pudding", like you said, I guess their main point is that with the computational handle they have, eating the pudding is not such a massive investment after all (compared with making more formal progress)? I can buy that reasoning, and I will curiously await their further simulation results.

    Indeed, formally resolving the issues I try to point out would take some effort too; I just happen to rate it as less speculative, though I think even that judgement is relative.

    In my previous attempt to explain my position, the logic of reasoning, put abstractly and applied to physical systems like a particle, is the same thing as the logic of interaction. The different choice of words is just to make the abstract associations clearer. Thus my main personal, non-professional point of view is not that CDT is wrong, but that there seems to be a missing line of reasoning, and this translates into missing interaction terms.

    Another point is that they start by assuming the Lorentzian signature. But if I remember right, in one of their papers they pointed out themselves that there _might_ be a more fundamental principle from which this is emergent. I don't remember which one; I skimmed several of their papers with some days in between. Their comparison to the Euclidean approach and the problem of weighting is interesting. This is the more interesting point to go deeper into, but I have to think more to comment on that... This, I think, may relate back to foundational QM, to what the path integral really means and what the action really means in terms of deeper abstractions.

    I'm sorry if I got too philosophical here. I hope not. If I missed something and jumped to conclusions, please correct me; these are admittedly my initial impressions only.

  17. Jan 8, 2008 #16



    Of course I don't know what Ambjorn or Loll would say in reply. But I think one consideration is diffeomorphism invariance and the desire to avoid overcounting.

    To the extent practical, one wants two different gluings---two different ways of sticking the triangles together---to correspond to two different geometries (not the same one with a reparametrization of coordinates or a smooth morphing).

    Curvature, of course, is calculated combinatorially at the edges or faces where the simplices fit together (Regge's idea). It is easiest to imagine in 2D, when one reflects on the fact that one can have more than 6 equilateral triangles joined around a given point, or fewer than 6----curvature is related to the deficit angle.
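    To spell out the 2D picture (standard Regge calculus, with my own notation): the curvature concentrated at a vertex v is the deficit angle

        \varepsilon_v = 2\pi - \sum_{t \ni v} \theta_t(v),

    the sum running over the angles of the triangles meeting at v. For equilateral triangles each angle is \pi/3, so six triangles around a vertex give \varepsilon_v = 0 (flat), five give +\pi/3 (positive curvature), and seven give -\pi/3 (negative curvature). In higher dimensions the deficit angles sit on the (d-2)-dimensional hinges, and Regge's version of the Einstein-Hilbert action is, schematically, S_{Regge} \propto \sum_{\mathrm{hinges}\ h} A_h \, \varepsilon_h with A_h the volume of the hinge.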

    If two triangulations are essentially the same except that one has a patch of smaller triangles, then one is in danger of overcounting.

    So that is something to remember---to be aware of. I am not saying it is a flaw in your idea. I am not sure how your idea would be implemented. But it is something to keep in mind.

    One wants to avoid duplicating----avoid counting the same (or diffeo-equivalent) geometry twice.

    Now, in addition, let us notice that we are not APPROXIMATING any given smooth geometry.

    So it is not clear how your idea would be implemented. Where is there a place of more than usual curvature that should deserve temporarily smaller triangles?

    Eventually, in the limit, the triangles (really simplices) get infinitely small anyway, so what possibility is being missed? It is hard to say how significant it is. Eventually one gets to the same thing as one would have had (just picture unifying some of the small ones and leaving others of the small ones separate---to recover an earlier triangulation which you, Fredrik, may have wished for).


    Remember that in the Feynman path integral the piecewise linear paths are very jagged. They have unphysically swift segments. They are NOT REALISTIC at all. No particle would take such paths. They are almost everywhere unsmooth and nondifferentiable. In the path integral method the objective is not to APPROXIMATE anything, but to sample infinite variability in a finite way. In the end, using a small sample, one wants to find the amplitude of getting from here to there. It is amazing to me that it works.

    And similarly in the Triangulations gravity path integral, the finite triangulation geometries are VERY jagged----they are like no familiar smooth surfaces at all. They are crazily bent and crumpled. It is amazing that they ever average out to a smooth space with a well-defined, uniform integer dimensionality.
  18. Jan 8, 2008 #17



    Fredrik, I have been looking back over your recent posts, in particular #13 which talks about the microstructure of spacetime.

    I wonder if you read German,
    because there is an interesting nontechnical article which Loll wrote for Spektrum der Wissenschaft that explains very well in words what her thinking is about this.
    I would really like to know your reaction to this article, if German is not an obstacle.

    I will get the link.

    From it one can better understand the philosophical basis of their approach. At least it helps me understand; I hope it will help you too.
    Last edited: Jan 8, 2008
  19. Jan 9, 2008 #18



    I understand this. My suggestion, relative to their suggestion, would indeed result in "overcounting", to use their term. Loosely speaking, it seems we agree that this is most certainly the case?

    But my point is that determining what is overcounting and what is not is highly non-trivial. The conclusion that my suggestion would result in overcounting is based on the assumption that their uniform choice is the obviously correct one. Of course, if there were a non-uniform size that was correct, their ideas would result in _undercounting_.

    So which view is the obviously right one? Am I overcounting, or are they undercounting? :) In the question as posed by them, the appearance is that their choice is more natural, but I don't think it would be too hard to reformulate the question so that my choice would be more natural. In a certain way, I claim that the microstructure of the space of spacetimes is implicit in their construction.

    This was my point. The analogous flaw exists in stat mech. You make up a microstructure, and most books would argue, in the standard way, that given no further information the equiprobable assignment to each microstate is the correct counting. This is directly related to the choice of the measure of disorder. And with different prior probability distributions or prior measures over the set of microstates, different measures of disorder typically result.

  20. Jan 9, 2008 #19



    Thanks for that link, Marcus. Swedish and German may be related, but unfortunately my German is very poor. I only know a few phrases, just enough to order beer and food where they don't speak English :)

  21. Jan 9, 2008 #20



    Actually that was not what I meant, Fredrik. I did not mean that your suggestion would result in overcounting relative to theirs; I was suggesting that it might (I am still not sure how it would work) result in overcounting in an absolute sense.

    To review the basic general covariance idea of GR: the gravitational field is not a metric, but rather an equivalence class of metrics under diffeomorphism. Two metrics are equivalent if one can be morphed into the other (along with their accompanying matter, if they have any). So imagine we are integrating over a sample of spacetime geometries----if two are equivalent we do not want them both.
    If our regularization is forever bringing in equivalent versions of the same thing, it is to that extent absolutely wrong.
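    Schematically (loose notation, just to fix ideas): the integration is meant to run over geometries, i.e. over the quotient of metrics by diffeomorphisms,

        Z = \int_{\mathrm{Metrics}(M)/\mathrm{Diff}(M)} \mathcal{D}[g] \; e^{i S_{EH}[g]},

    so counting two diffeomorphic copies of the same metric as distinct contributions is genuine overcounting, not just a matter of convention.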

    I do not really understand your idea, so I cannot say. But I suspect that if I did understand it, then even if it were the only method in the world (and Ambjorn-Loll had not been invented and was not there to compare), I could look at it and say "this absolutely overcounts", because it would be counting as different many geometries that could be morphed into each other and are not really different. Again I must stress that I cannot say for sure, not having a clear idea yet.

    This suspicion of mine is vague at best! If you want, please explain your idea in a little more detail. When and where would you start using smaller simplices?
    What you said earlier was to do that where there is more curvature. But there is no underlying metric manifold. We start the process with no curvature defined at all, anywhere. No curvature in the usual differential-geometry sense is ever defined. In piecewise linear geometry the curvature is zero almost everywhere ("except on a set of measure zero", as one says in measure theory).

    That is because the interiors of all the simplices are FLAT---everywhere and throughout the process. So in your scheme, where do you start using smaller simplices?

    The trouble is, I do not understand you to have a well-defined regularization algorithm. Although if you could somehow define an algorithm then my suspicion is that it would tend to overcount in an absolute sense.

    Probably, as you said, it would overcount relative to the Ambjorn-Loll procedure as well.

    But this is all just my hunch since I don't understand in actual practice what you mean by using smaller simplices where the curvature is greater.

    BTW your command of English is great (I'm sure you don't need me to say so :smile:), but it is a pity you don't read German, because this Loll piece in Spektrum der Wissenschaft (SdW) gives clear, easy explanations and a lot of motivation for what they do.

    I wonder if I should try to translate? What do you think?
    Last edited: Jan 9, 2008