# Steve Carlip on dimensional reduction (Loll, Reuter, Horava; small-scale fractality)

marcus
Naty1

Re: the Scientific American article:
What happened to mass in their output? Were Loll and collaborators disappointed none popped out? Or did I miss something?

Sounds like their inputs resulted in only spacetime and gravity. I wonder what they would put into their model to get some emergent mass out of it, and whether that would suggest origins of mass. Because only de Sitter spacetime was an output, could this model suggest that the overlay of spontaneous symmetry breaking, the Higgs mechanism (an ad hoc add-on to the Standard Model), really is an inappropriate "plug-in" by theorists?

Gold Member
Dearly Missed

The significance of the triangles would be, I presume, that it gives an easy way to average over local curvatures. The angles of a triangle add to pi in flat space, to less than pi in hyperbolic space, and to more than pi in hyperspherical space.
...

You give the basic insight: a combinatorial or "counting" grasp of the basic feature of geometry (curvature). It probably goes back to Tullio Regge, who in 1961 showed how to do "General Relativity without coordinates".

If the action is simply the integral of curvature, and all the triangles are identical so that one finds the curvature at a point by counting the number that meet there, then to find the average one needs only to compare the number of triangles with the number of points.

Now to extend this idea up one dimension to D = 3 one will be looking at the curvature around a D = 1 edge, and one will count the number of tetrahedra that meet around that edge. (The edge is sometimes called the "bone" or the "hinge" in this context. The curvature lives on the bone.)

So the overall average can be found, in the D = 3 case, simply by comparing the total number of 3-simps with the total number of 1-simps. If there are fewer 3-simps overall than you would otherwise expect, then it is the positively curved "hyperspherical" case, as you said earlier.

Since you know Greek (as a fan of Anaximander of Miletus and his primal unbounded indefiniteness, apeiron) you may know that the analog of a tetrahedron is a pentachoron. Loll has sometimes used this term for the 4-simplex block that builds spacetime. A hedron is a flat side and a choron is a 3D "room". A 3-simplex is bounded by 4 flat sides (hedra) and a 4-simplex is bounded by 5 rooms (chora). I think pentachoron is a nice word and a good brother to the tetrahedron.

Anyway, in the D = 4 case the D - 2 = 2 simplices are the "bones" or the "hinges" around which curvature is measured. One counts how many pentachors join around one triangle.

And to get the integral, for the Einstein-Hilbert-Regge action, one just has to count the total number of pentachors and compare to the total number of triangles. If there are not as many pentachors as you expected, then some positive curvature must have crept into the geometry. Seeped in, infiltrated.
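The counting recipe can be made concrete in a few lines of Python. This is a toy illustration, not Loll's actual code: it uses the fact that the interior dihedral angle of an equilateral d-simplex around one of its (d-2)-dimensional bones is arccos(1/d), so the curvature concentrated on a bone is the "deficit angle" left over after the simplices meeting there are counted.

```python
import math

def dihedral_angle(d):
    """Interior angle of an equilateral d-simplex around one of its
    (d-2)-dimensional faces: arccos(1/d).  For d = 2 this is the
    pi/3 corner angle of an equilateral triangle."""
    return math.acos(1.0 / d)

def deficit_angle(d, n):
    """Deficit angle at a bone where n equilateral d-simplices meet.
    Positive deficit means positive curvature concentrated on the bone,
    zero means locally flat, negative means negative curvature."""
    return 2.0 * math.pi - n * dihedral_angle(d)

# d = 2: six triangles around a vertex tile the flat plane (zero deficit).
print(abs(deficit_angle(2, 6)) < 1e-12)

# d = 4: the pentachoron's dihedral angle is acos(1/4), about 75.5 degrees,
# so four of them around a triangle leave a positive (spherical) deficit,
# while five overshoot into negative (hyperbolic) curvature.
print(deficit_angle(4, 4) > 0, deficit_angle(4, 5) < 0)
```

The Regge action is then just the sum over bones of (bone volume) times (deficit angle), which for identical building blocks reduces to the comparison of counts described above.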

It is nice to be able to do geometry with simple counting and without coordinates because, as Tullio Regge realized, in Einstein's 1915 approach you make extra work. First you have to set up coordinates. Then when you are done you have to get rid of them! By diffeomorphism invariance, or general covariance, any two solutions which are the same by a diffeo are "equivalent" and represent the same physical reality. So you have to take "equivalence classes". The real physics is what is left after you squeeze out the huge redundancy which has been introduced by using coordinates.

Coordinates are "gauge", meaning physically meaningless redundant trash, and Regge found a shortcut that avoids using coordinates while still letting you calculate overall curvature, conduct business, and have a dynamic geometry.

He shares a name with Cicero (Marcus Tullius), the essayist. Italians still pay respect to Cicero, apparently, by naming children after him.

So in the general D dimension case, the D-2 simplices are the "bones" and the curvature is hanging on the bones or riding on the bones or is imagined to be concentrated on the bones. And you count how many D-simplices meet at a particular D-2 simplex.

With Loll's approach the simplices are almost but not quite equilateral, there is an extra parameter that can elongate simplices in the time direction. But to begin understanding, it is good to imagine equilateral simplices. All the same size.

I have been enjoying everyone's posts, which are full of ideas. Right now I have no ideas myself about why there is this curious coincidence among the Loll, Reuter, Horava, Modesto, etc. methods. Can it be an elaborate practical joke or artifact, or can it actually be something that nature has been secretly saving to surprise us with, when we are ready? I think it was not something that Loll and Reuter were originally looking for. It just turned up, like a "who ordered this?" We'll see.


Re: the Scientific American article:
What happened to mass in their output? ...

Take any opinion of mine about this with a grain of salt, Naty. I'm a non-expert non-authority. I think Loll's approach is a springboard to something better. I don't think she CAN introduce matter in a satisfactory way. But it is a beautiful springboard---it has inspired and will continue to inspire the development of ideas.

I don't know. Maybe you should not merely glue the blocks together, maybe you should twist them as you are gluing them.
or maybe you should allow a few of them to be glued to others which they are not adjacent to
or paint the building blocks different colors.
Nature is teasing and playing with us. The Loll approach is like a smile she flashed at us, but you don't know yet what that particular smile means. Yes. Somehow matter must be included in geometry.

Finbar

Hey, so coming back to the question of how this reduction from d=4 to d=2 is achieved, I'd like to give a heuristic argument from an RG/particle-physics point of view. So it's not such a geometric point of view, but possibly it can give insights into geometric approaches.

So we consider a single particle whose gravitational field we are measuring from a far distance. At that scale the force law is 1/r^2, and we can conclude that d=4 and Newton's constant is G = 6.673(10) × 10^-11 m^3 kg^-1 s^-2.

Now imagine we get closer to the particle, so that we are measuring its field on a smaller scale. Our certainty of the particle's position is increased; Δx is smaller, so by the uncertainty principle we are less certain of its momentum. At a small enough scale this uncertainty can mean that even the number of particles becomes uncertain. Hence on such scales we may "see" more than one particle (vacuum fluctuations, if you like).

But we must remember something important: we are still looking at the same physical system that was just one particle. So the force we measure from these multiple particles must be the same force measured from the single one at a large distance. Now if we measure the field from only one of this ensemble of particles on a small scale, we must find that the field strength is not as large as we expected. In this way we say gravity is "anti-screening": it gets weaker on smaller scales as we take quantum fluctuations into account.

In an RG setting we would then let Newton's constant run to account for this. The strength of gravity from a single particle is G(r)/r^2, but now G(r) is not constant. If on small scales r → 0 we have G(r)/r^2 → infinity, we would say that the theory breaks down in the UV and QG is nonrenormalisable. If however G(r) ~ r^2, we find something quite different: the field is constant!

Now how is this related to d=2? Well, it's just Gauss' law: d=4 implies 1/r^2 behavior, d=3 implies 1/r behavior, and d=2 implies 1/r^0 = constant behavior.
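A tiny numerical version of this Gauss-law argument, purely for illustration: the crossover scale `r0` and the specific form of G(r) below are my own assumptions for the sketch, not the actual asymptotic-safety flow.

```python
def field_strength(r, d=4, G=6.673e-11):
    """Newtonian field of a unit mass when the spacetime dimension is d:
    Gauss' law spreads the flux over a (d-2)-sphere, giving 1/r**(d-2)."""
    return G / r ** (d - 2)

def running_G(r, G_ir=6.673e-11, r0=1.0):
    """Toy 'anti-screening' ansatz: G grows like r**2 below the scale r0
    and matches the constant IR value above it.  (An assumption made for
    illustration, not a result of any RG computation.)"""
    return G_ir * (r / r0) ** 2 if r < r0 else G_ir

# d = 2 behavior: the field no longer falls off with distance...
print(field_strength(3.0, d=2) == field_strength(30.0, d=2))

# ...and with G(r) ~ r^2 the measured field G(r)/r^2 is scale independent:
print(running_G(1e-3) / (1e-3) ** 2, running_G(1e-9) / (1e-9) ** 2)
```

The last two printed values coincide: once G(r) ~ r^2, the combination G(r)/r^2 stops depending on r, which is exactly the d=2 (constant-force) behavior described above.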

It's best to visualize this as field lines coming out of the particle. On large scales these field lines appear to spread out over a sphere, but as we go to smaller scales they seem more like they're spreading out over a circle than a sphere (or better, some fractal that has a spatial dimension less than 3). And as we get to yet smaller scales the field lines seem not to spread out over any surface at all; instead there is just a single field line. At this scale you might want to "look around" a bit and measure the field from other particles. Indeed you see this same d=2-like behavior, but now you're "looking" in another direction. The idea you could then conclude is that spacetime is some kind of 2-dimensional foam whose form is dictated by the distribution of particles you see around you (quantum gravity?!?).

Orbb

Just one question poppin' in: I wonder if holography was ever considered to be connected to the dimensional reduction of the discussed approaches. Of course, holography also goes along with e.g. strings, which is an entirely different direction, especially concerning spacetime dimensionality. Plus, my understanding of holography is rather a reduction from D=3+1 to D=2+1, rather than D=1+1 as in CDT and others. So I don't know if this even makes sense. Still, the concept of holography and the phenomenon of dimensional reduction have in common that the description of physics departs from 4 dimensions towards a lower number. So this is just something I wondered, being not very knowledgeable.


It's best to visualize this as field lines coming out of the particle... Thus the idea you could then conclude is that spacetime is some kind of 2-dimensional foam whose form is dictated by the distribution of particles you see around you (quantum gravity?!?).

Thanks for this example. It sounds very much like the intuitive argument I was making - with a better technical grounding of course.

But that still leaves me wanting to know how the result pops out of the particular math simulation run by Loll and co. I can't follow the machinery of the maths. But I just presume that the people who use the maths can easily see why some feature should emerge.

I really don't know what to make of a situation where researchers - and informed commentators - seem to be saying we run the equations we concocted and out pops this crazy result. It's a kind of magic. We can't explain why.


So in the general D dimension case, the D-2 simplices are the "bones" and the curvature is hanging on the bones or riding on the bones or is imagined to be concentrated on the bones. And you count how many D-simplices meet at a particular D-2 simplex.

Thanks Marcus. It is very useful to have the Regge approach explained so well. I can have another go at seeing if I can track the logic of CDT.

Nature is teasing and playing with us. The Loll approach is like a smile she flashed at us, but you don't know yet what that particular smile means. Yes. Somehow matter must be included in geometry.

Suppose CDT actually indicates Asymptotic Safety, and suppose AS actually works; then wouldn't it be straightforward to include matter by just adding e.g. the SM Lagrangian? Except I guess that electroweak theory doesn't have a continuum limit, so the theory will still not have a continuum limit, even though such a limit exists for gravity?

If CDT actually indicates Horava, with its fixed non-relativistic background, then could one use, say, Wen's way of getting relativistic QED and QCD to emerge from non-relativistic models (he doesn't know how to do chiral interactions)?


Atyy, these are interesting ideas you are proposing, and I see how you are following out these trains of thought. I agree with a lot of the general tenor of what you are saying: it could turn out to be easy to include matter once one has an adequate quantum theory of geometry, or it might not; it's speculation either way.

Instead of disputing, Atyy, I just want to outline my attitude in contrast. I don't think any of these approaches implies the other. I think they have family resemblances. Pairs of them share some common features and characteristics.

But there are no mother-daughter pairs. No one is derived from any other. Or so I think.

And I see no reason to suppose that any of them will turn out to be "right" in the sense of being a final description of nature. That is not what we ask of them. What we want is progress towards a quantum geometry that gives General Rel at large scale. And that you can eventually predict astronomical observations with and put matter into and test observationally with CMB/gammaray bursts/collapse events/cosmic ray data and all that good stuff. And calculations could be done using several different ones of these approaches. I do not care which one morphs into an eventual "final theory", if any does. I want to see progress with whatever works.

And fortunately we do see progress, and we see new researchers coming in, and new funding getting pumped into Loop and allied research (CDT, AS, ... as you mentioned.)
That certainly includes the condensed-matter-inspired lines of research and innovative stuff you have mentioned in other threads. It's a good time to be in non-string QG. I can't keep track of the broad field and give an accurate overview, so much is going on. But anyway that's my attitude---pragmatic, incremental, not-thinking-ahead-to-ultimate-conclusions.

Once there is a background independent quantum theory of geometry---that is in other words of the gravitational field---which is what matter fields live on---then the theory of matter will need to be completely rebuilt, I suppose. Because the old idea of space on which QFT was built would then, I imagine, be history.


Maybe you should not merely glue the blocks together, maybe you should twist them as you are gluing them.

But isn't this a reasonably mainstream approach to inserting mass into the spacetime picture?

Knots, solitons, gauge symmetries, etc. You have a web of relations drawing itself flat - a self-organising GR fabric, the vacuum. And then knots or kinks get caught in the fabric as it cools.

In this thread and others, there is a lot of concern about how mass can be added to the picture. But it seems rather that the theory would be a model of the vacuum, and then secondary theories would handle mass as knots in the fabric.


...
I really don't know what to make of a situation where researchers - and informed commentators - seem to be saying we run the equations we concocted and out pops this crazy result. It's a kind of magic. We can't explain why.

I would agree that this is unsatisfactory. Perhaps the simplified picture here (partly my fault) makes the situation seem worse than it really is. I think Renate Loll could explain clearly to you why dimensional reduction happens in her approach (triangulations QG), but she might not be able to explain why it occurs in Reuter's approach (asymptotically safe QG) or in Horava's... Perhaps each can explain why this happens in his or her own form of QG, but cannot explain why it happens in the others'.

Dario Benedetti has attempted a more abstract explanation of dimensional reduction. So far there is only one paper on this, treating two toy-model cases. He is a Loll PhD (2007) who then went to Perimeter as a postdoc. He has worked both in Triangulations with Loll and in Asymptotic Safety with Saueressig (a coauthor of Reuter's). He is about as close to both approaches as anyone, having done research in both lines, and he has published extensively. I don't understand this one paper of his about dimensional reduction. Maybe you can get something from it.
http://arxiv.org/abs/0811.1396
Fractal properties of quantum spacetime
Dario Benedetti
(Submitted on 10 Nov 2008)
"We show that in general a spacetime having a quantum group symmetry has also a scale dependent fractal dimension which deviates from its classical value at short scales, a phenomenon that resembles what is observed in some approaches to quantum gravity. In particular we analyze the cases of a quantum sphere and of kappa-Minkowski, the latter being relevant in the context of quantum gravity."
4 pages, 2 figures. Phys. Rev. Lett. 102:111303, 2009

Fra

I think this is an interesting discussion, with brief input and reflections from different directions.

In this thread and others, there is a lot of concern about how mass can be added to the picture. But it seems rather that the theory would be a model of the vacuum, and then secondary theories would handle mass as knots in the fabric.

The conceptual point I tried to suggest in past posts is that, in my mind, you cannot have a "picture" at all without mass! It just doesn't make sense. By the same token you need a brain to have an opinion, or you actually need a physical memory record to compute statistics.

This would be tangent to the holographic picture, where you do not just need a screen/communication channel, you also need a sink/source behind the screen, and this is eventually saturated; then this unavoidably back-reacts on the screen itself; it must change.

I.e., from an informational point of view, each piece of information has a kind of count, or requires information capacity. This also suggests that, if you infer an action from a state, an action also sort of has mass/complexity, and we get that actions acquire an inertia, which explains stability. It's the same simple logic that if you have a long history of statistics, no single data point of any kind can flip you off the chart.

This physical basis of information is what I miss in most approaches. Some ideas are very good, but these things are still lacking.

Olaf Dreyer's idea is that the inside view implies that any measurements must be made by inside sticks and rods, but what I can't fully read out of this reasoning is whether he also acknowledges that we're also constrained to inside memory records to store time-history data, and even to be able to distinguish time and dimensions.

It seems, just from that picture, that as the mass/complexity of the inside observer shrinks, there is an overhead of the "index structure" (space) that becomes more and more unstable, and it will eventually lose it. Pretty much like a phase transition.

Dreyer also pictures the origin of the universe as a phase transition where the observers live in the new, ordered phase. But the observers must emerge during the transition. If you then take the observer to be material, it's the simultaneous emergence of space and matter. But that still seems to make use of an external reference to the prior phase. I do not understand all his reasoning; it still seems there are several major conjectures along the way.

So to me, even if you consider in absurdum "pure gravity" or empty space, this very PICTURE *implies* a complex CONTEXT. No context, no picture _at all_. This is also why even the VOID has inertia (cosmological constant): because the context defining the void (i.e. the boundaries or communication channels) must have a sink/source.

Unless of course, you think it's ok to attach all this to a mathematical reality, that never needs justification.

/Fredrik


By now some of us have had a chance to read Carlip's slides and see what exactly it is that HE has to say.
There are only 12 slides.
It is not all about dimensional reduction. That is one of the "hints".
He mentions several hints or clues
==quote==
Accumulating bits of evidence
that quantum gravity simplifies at short distances
• Causal dynamical triangulations
• Exact renormalization group/asymptotic safety
• Loop quantum gravity area spectrum
• Anisotropic scaling models (Horava)
Are these hints telling us something important?
==endquote==

and he digs into classical Gen Rel---solutions like Kasner and Mixmaster---to see if there are behaviors that arise classically (things about lightcones and geodesics) that could relate to behavior revealed by the various quantum geometry approaches.

It could help to know something of Carlip's past research and present interests:
http://www.physics.ucdavis.edu/Text/Carlip.html
http://particle.physics.ucdavis.edu/hefti/members/doku.php?id=carlip

The "Planck Scale" conference organizers say that video of the lectures will be put online:
http://www.ift.uni.wroc.pl/~planckscale/index.html?page=home
Carlip's talk is one that I especially want to watch.

In case anyone missed it when we gave the link at first, here are Carlip's slides:
http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/1-Carlip.pdf


If I remember Weinberg's talk, he says the latest AS work suggests d=3, but CDT and the other stuff have d=2. I'm not sure, though, whether the "d" in the AS work is a spectral dimension; in CDT and Horava they are spectral dimensions. I think Benedetti had d=3 in http://arxiv.org/abs/0811.1396 - again, not sure if all the d's are defined the same way.


If I remember Weinberg's talk, he says the latest AS work suggests d=3, but CDT and the other stuff have d=2. I'm not sure, though, whether the "d" in the AS work is a spectral dimension; in CDT and Horava they are spectral dimensions. I think Benedetti had d=3 in http://arxiv.org/abs/0811.1396 - again, not sure if all the d's are defined the same way.

Let's verify this! I remember it differently---that both AS and CDT agree---but we should check.
BTW Weinberg has something on arxiv about AS, which I posted a link to here:
There are references to both CDT and AS papers. It doesn't answer your question, though.

I think if Weinberg said that in AS dimension -> 3 at small scale he was probably simply mistaken, because I've always heard that it -> 2 in both AS and CDT. I will have to listen to his talk again (the last 10 minutes) to be sure just what he said.

I can't say about Benedetti and about Modesto, there could have been some differences, with only partial similarity. But I have the strong impression that AS and CDT results are consistent. We'll check, I could be wrong.

BTW Carlip is kind of an expert. Look at his slides #4 and #5. He says AS and CDT agree on spectral dimension being around 2 at small scale. How I remember it too.


"In view of the results obtained here, we expect that a FP with three attractive directions will be maintained." http://arxiv.org/abs/0705.1769

But that's the dimension of the attracting critical surface, which I'm not sure is the same as a spectral dimension. I do remember that AS used to have the critical surface dimension 2 at a lower truncation, but it's 3 in the Codello et al work.


"In view of the results obtained here, we expect that a FP with three attractive directions will be maintained." http://arxiv.org/abs/0705.1769

But that's the dimension of the attracting critical surface, which I'm not sure is the same as a spectral dimension.

You can be sure it is not the same. The UV critical surface exists in theory space. The presumably infinite dimensional space of all theories of gravity based on choices of parameters at all orders.

The spectral dimension referred to is the dimension of spacetime, or in some cases space.
There are several ways to measure spacetime (or space) dimensionality. The Hausdorff method compares radius to volume. If the Hausdorff dimension of some space is 1.7 at a particular point, that means the volume of a small ball at that point grows as r^1.7, i.e. as the 1.7 power of the radius.

Obviously the H. dimension can depend on the scale at which you are measuring.
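As a concrete toy check of the radius-versus-volume idea, one can count points of the integer lattice Z^2 inside growing balls and read off the growth exponent. This is just a sketch under my own simplifying assumptions (an ordinary fixed lattice), not the actual CDT measurement:

```python
import math

def ball_volume(r):
    """Number of points of the integer lattice Z^2 within Euclidean
    distance r of the origin -- a discrete stand-in for ball volume."""
    R = int(r) + 1
    return sum(1 for x in range(-R, R + 1)
                 for y in range(-R, R + 1)
                 if x * x + y * y <= r * r)

def hausdorff_estimate(r):
    """Local slope of log(volume) against log(radius) between r and 2r.
    If volume ~ r**d_H, this returns d_H."""
    return math.log(ball_volume(2 * r) / ball_volume(r)) / math.log(2)

print(round(hausdorff_estimate(20), 2))   # close to 2 for the 2-d lattice
```

The same slope measured on a genuinely fractal point set would come out non-integer, which is exactly the sense in which a quantum spacetime can have dimension 1.7 at some scale.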

The spectral dimension is just another way to gauge dimensionality. You run a random walk and see how fast the walker gets lost: how likely is he to accidentally return to the start? The higher the dimension, the less likely. After all, on a line the walker is almost sure to return to the origin eventually.

Loll and friends generate small random quantum universes in the computer and study them. So they have plenty of opportunity to run random walks both in the full universe and in a spatial slice. So they can measure the spectral dimension of the spacetimes and of the space---always measured or observed at a certain point and at a certain scale.
And they can take averages over many points, and many universes, and see how the dimensionality depends on the scale (how long the random walker is allowed to wander).
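The random-walk recipe can be tried on an ordinary fixed lattice (a toy sketch, nothing like Loll's actual Monte Carlo universes). For a walk whose d coordinates each step independently, the return probability factorizes into d one-dimensional walks, and since P(t) ~ t^(-d_s/2), the spectral dimension is the log-slope of P:

```python
import math

def return_prob_1d(t):
    """Probability that a simple random walk on Z is back at the origin
    after t steps (t even): C(t, t/2) / 2^t."""
    return math.comb(t, t // 2) / 2 ** t

def return_prob(t, d):
    """Walk on Z^d in which every coordinate steps independently at each
    tick, so the return probability is a product of d 1-d walks."""
    return return_prob_1d(t) ** d

def spectral_dimension(t, d):
    """P(t) ~ t**(-d_s/2) implies d_s = -2 * dlogP/dlogt, estimated
    here from the two diffusion times t and 2t."""
    return -2.0 * math.log(return_prob(2 * t, d) / return_prob(t, d)) / math.log(2)

# The estimate recovers the lattice dimension, up to 1/t corrections:
for d in (1, 2, 4):
    print(d, round(spectral_dimension(512, d), 3))
```

On a fixed lattice the answer is scale independent; the striking CDT result is precisely that the analogous measurement on their ensemble of quantum geometries drifts from about 4 at large diffusion times toward about 2 at short ones.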
Have to go, back later.

You can be sure it is not the same. The UV critical surface exists in theory space. The presumably infinite dimensional space of all theories of gravity based on choices of parameters at all orders.

OK, I understand that better I think.

The spectral dimension in CDT is related to the anomalous dimension in AS, which is 2 in order to have a continuum limit.

The dimension of the critical surface is a separate quantity that must be "small" in order to have a "small" number of theories with a continuum limit; otherwise we will have too many possibilities to choose from, and we would have to do an infinite number of experiments to choose the right theory. (Is this really a problem? Wouldn't experimentalists be happier knowing that there will always be more experiments to perform?)


..(...wouldn't experimentalists be happier knowing that there will always be more experiments to perform)?

heh heh
Yes, that might make some experimentalists happy: job security for themselves and their descendants for all eternity.
But I think you and I are assuming standard Baconian science rules, ethics, values. An unstated assumption is that a scientific theory is not only explanatory
but must also be predictive.

To be predictive, the number of free parameters must be finite. Like 3 or like 23. So you do a finite number of experiments to determine the values and then you are go for prediction all the rest of the way.

In this scheme you keep the experimentalists happy by telling them that they can continue to test your predictions (based on the first 3 or 23 measured inputs) and if they find a discrepancy then the theory goes out the window. The theory is dead long live the new theory.

But if a theory has an infinite number of adjustables, then you can never predict in that way, and never test. As soon as you run into a discrepancy you simply adjust the next parameter---every failure of prediction is just a "measurement" of something new. The theory is mush!

I think I am not telling you anything new Atyy, and you were kidding about keeping the experimentalists happy by having an endless supply of parameters for them to measure. But I still had to say something about this.

It would be really cool if AsymSafe QG only needs a small number of parameters to be measured. Like Percacci-Codello-Rahmede suggest: just three!

Have you looked at any of Reuter's lecture slides or papers where he plots computer output as a graphic picture of the renormalization group flow? The flow spiraling in round the UV fixed point? And sailing off in a beeline for infinity in the IR?

AsymSafe QG is very pictorial. If you haven't seen examples of Reuter's graphic flow plots (he used to put them in almost every paper) and if you want to see some, let us know and I or someone else will dig up links to old Reuter papers that have the graphics.

I think I am not telling you anything new Atyy, and you were kidding about keeping the experimentalists happy by having an endless supply of parameters for them to measure. But I still had to say something about this.

Half kidding

What I don't understand is this: suppose the metric field has a UV fixed point, but the critical dimension is infinite - then wouldn't that mean that we have in fact been able to make only a finite number of measurements to get predictivity at low energies using GR? So would it be possible that there are an infinite number of parameters for arbitrarily high energies, but we only need to measure a few more parameters each time to gain predictivity whenever we want to step up the energy scale?

Another question whether we really need the critical dimension to be "small" - suppose AS is correct with infinite critical dimension and CDT is also correct with respect to its two current predictions (de Sitter universe, spectral dimension 2) - then wouldn't that mean that AS, CDT had predicted some things - just not everything? Or would the correct predictions of CDT imply a "small" critical dimension?


... So would it be possible that there are an infinite number of parameters for arbitrarily high energies, but we only need to measure a few more parameters each time to gain predictivity whenever we want to step up the energy scale?

Yes, I think that is right. I am not an expert, but I believe that situation would NOT be what Weinberg was thinking of in 1976 when he coined the name "asymptotic safety".
That would be a series of "effective theories" getting better and better.
But no one theory would be "fundamental" in the sense that you can take it all the way, to as high an energy as you like, and it stays applicable.

And what you describe sounds like a practical and reasonably satisfactory situation.

But Weinberg, and Reuter and Percacci after him, and all the others are not talking about that. They want something that is what they call "nonperturbatively renormalizable" where it is renormalizable in the sense that you only have to determine a finite number of parameters experimentally and then you are good to go all the way.
You never have to adjust and plug in another number.

That is what Weinberg meant when he said it's possible that something like string theory is not needed and is not how the world is. It's possible that the way the world is is just what we are used to: geometric general relativity and QFT combined in the effective unification we already have, or something like that, and it turns out to be predictive/fundamental, in other words (nonperturbatively) renormalizable, after all. You remember where he was saying things like that in his talk.

The November Perimeter workshop on that will probably be interesting, and I bet they post videos of some or all of the talks. Loll will be there, Weinberg, Smolin, Percacci, Litim, Reuter. Quite a group!

+++++++
About your post #53, I am not ready to assume that any of these preliminary results are right, or anything more than an accident if they are right. I don't have the trained intuition and vision of someone who actually researches this stuff. You are thinking some interesting "what if" stuff, but it is too complex for me right now. I think they are clearly on to something that should be pursued full force, and get funding and workshop support and all that. I hope they attract grad student and postdoc brainpower, and I think they will. But I can't assume they are right and project ahead like that. Have to wait and see.

Also this is a big challenge to Rovelli, to see if there is any scale dependence in LQG (I mean spinfoam, of course). Another thing to wait and see about.
